
Keen + AMP: A New Integration for Publishers to Show Authors Engagement Data

With smartphones and tablets now the go-to devices for web browsing, readers and shoppers are often frustrated by slow page load times. The open-source Accelerated Mobile Pages (AMP) project, spearheaded by Google, was intended to solve this problem, providing a framework that allows websites and ads to load fast on all devices.

The drawback for publishers, however, was that the lack of AMP analytics created a blind spot in the user-facing metrics and analytics that they could provide to their authors. After collaborating closely with the AMP open-source community, we are proud to announce that the Keen AMP analytics integration has officially launched!


The Keen AMP analytics integration means that all our customers now have the ability to:

  • Easily track events on AMP pages with our out-of-the-box config that simply requires dropping in a piece of code
  • Create a custom config using AMP-powered triggers and variable substitutions
  • Ensure snappy website loads thanks to data requests sent by the Beacon API

This article will detail how we worked with the AMP community to overcome the challenge of harvesting AMP metrics, two ways you can implement Keen AMP analytics code on your website, and what this integration means for your business. We’ll also offer a look toward the future.

Collaborating with the AMP Community to Innovate

Originally, AMP analytics sent POST requests without a body. AMP developers knew about this issue from feedback from the dev community. Unfortunately, the limitation meant there was no way to send data back to the Keen API.

At Keen, our collective focus is to be as close to the leading edge of technology as possible. Our development team stays abreast of web trends and open-source initiatives, and clearly, AMP’s commitment to speeding up the browsing experience for users is exciting, so we collaborated with AMP developers as they worked to fix this issue. We are incredibly thankful for all of the work those folks inside the AMP project have done to enable Keen to now provide AMP analytics!

Implementing Keen for AMP Analytics

Keen customers can immediately take advantage of our AMP analytics integration by implementing a small amount of code. But first, you need to decide whether to implement Keen’s standard predefined config file or create your own custom integration. Let’s review each option, the steps for integration, and some of the benefits and drawbacks:

1. Keen’s Standard AMP Analytics Config File

This is the simplest and most straightforward way to integrate with AMP. Using our predefined config file, you will automatically begin receiving analytics from your AMP website for a number of items, including browser type, device type, language, and URL tracking parameters. The entire list of variables included in Keen’s standard AMP analytics config file—along with the meaning of each variable—can be viewed on GitHub.

First, make sure that you have a Project ID and Write Key from Keen. You can sign up for an account with Keen or log in to an existing account to get this information.

Next, you’ll replace "YOUR_PROJECT_ID" and "YOUR_WRITE_KEY" with your own values in the following code (view it on GitHub):

After doing that, place the script tag that loads amp-analytics.js in the header of your site. The configuration should be in the body:
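The original post embedded this snippet from GitHub. As a rough sketch of what it looks like (the `keen` vendor type and variable names below follow the amp-analytics vendor config pattern; the `collection` value is illustrative, and the authoritative version lives on GitHub):

```html
<!-- In the <head>: load the amp-analytics component -->
<script async custom-element="amp-analytics"
  src="https://cdn.ampproject.org/v0/amp-analytics-0.1.js"></script>

<!-- In the <body>: the Keen config; replace the placeholder values
     with your own Project ID and Write Key -->
<amp-analytics type="keen" id="keen">
  <script type="application/json">
  {
    "vars": {
      "projectId": "YOUR_PROJECT_ID",
      "writeKey": "YOUR_WRITE_KEY",
      "collection": "amp-pageviews"
    }
  }
  </script>
</amp-analytics>
```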

Note that while this is the easiest way to get started, you may be returning data that you don’t need—the extra time it takes to get variables that you don’t care about could adversely impact your page load time. If you need only a select few variables, you may want to consider doing a custom integration.

2. Custom Keen AMP Analytics Config

Creating a custom Keen AMP analytics integration means that you, as a developer, will have full control over the data you gather. The major advantage of a custom config is that you can eliminate the variables that you aren't interested in monitoring. This will reduce the size of the POST request, resulting in faster execution. Plus, sending the smallest possible number of variables is the right thing to do if you're concerned about privacy issues.

Again, you will need to start by getting a Project ID and Write Key from Keen. You will put them in the following code (this can also be found on GitHub):

You can then place as many variables as you want to track inside the extraUrlParams property. All of the strings, integers, arrays and objects that you put inside the extraUrlParams will be sent to Keen's API and saved in a database. While this implementation may take a little more time, it will return only the data that matters to you and may further speed up your site.
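As a sketch of what a custom config might look like (the property names `page_title` and `canonical_url` are illustrative choices of our own; `${title}` and `${canonicalUrl}` are standard amp-analytics variable substitutions):

```html
<amp-analytics type="keen" id="keen">
  <script type="application/json">
  {
    "vars": {
      "projectId": "YOUR_PROJECT_ID",
      "writeKey": "YOUR_WRITE_KEY",
      "collection": "amp-custom"
    },
    "extraUrlParams": {
      "page_title": "${title}",
      "canonical_url": "${canonicalUrl}"
    }
  }
  </script>
</amp-analytics>
```

Only the two properties above would be sent to Keen, keeping the POST request as small as possible.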

Impacts of AMP Analytics for Publishers

Delivering analytics on AMP-enabled websites is essential: publishers want to give their writers a better understanding of how their content performs. And since such a high percentage of traffic now comes from mobile devices, it’s important to note that:

  • AMP sites can load much faster for mobile visitors, which can result in lower bounce rates and increased customer satisfaction
  • Google has recently begun to reward AMP sites with higher placement in organic search

Since the AMP project was started in 2015, we’ve noticed a few key trends. With the proliferation of digital advertising—including audience retargeting and autoplaying videos—many websites have become bloated. This results in a very poor experience for users, especially those on mobile devices that may have less-than-optimal speeds.

The goal of AMP was to solve this issue, attempting to strip out many of the scripts and other add-ons that were slowing down the web. The project did this to amazing effect, but cutting down to the bare bones came with a cost: while consumers were happy, businesses and publishers were unable to gather audience insights.

Today, we see the AMP project beginning to come into harmony with both consumer and business needs. After all, what good is a technology that can snappily serve web pages if it lacks analytics?

Looking toward the future, we see this continuous push-and-pull between consumers, businesses, privacy and data analytics. And with our team at Keen focused on building user-facing metrics solutions for developers, we will be innovating as the industry evolves.


What is Google AMP and How Is It Affecting Webpage Development?

Mobile search surpassed desktop search for the first time in 2015, and in the same year Google announced an open-source project called AMP (Accelerated Mobile Pages) meant to improve the mobile web by allowing mobile website content to render nearly instantly. AMP is a framework to create a bare-bones version of a site’s pages, essentially stripping out any custom JavaScript, most CSS, widgets, scripts, and other add-ons.

The premise is that faster load times lead to more time on site and better engagement, which means reduced bounce rates, higher conversions, and increased search rankings. Sounds great, right? Let’s dive in to learn more about some of the benefits and challenges of using AMP.

Reasons to use AMP

The hype is real: AMP pages are lightning fast, typically loading much faster than regular web pages. Yes, there are many other ways to make your pages fast. But controversy aside, when you click on an AMP page from Google Search it loads instantly, which makes it very good at providing a consistent user experience. This is especially important on slower cellular networks, where typical web pages can take ages to load.

For prominence in search, AMP results appear in the top stories carousel above all other results in Google. This carousel is horizontal, allowing users to scroll side-to-side through the results without having to scroll down. But like any other search feature, Google may decide to change it, and the AMP carousel may not be around forever.

While AMP pages may not be directly connected to better rankings, Google has hinted in the past that AMP might one day become a search ranking signal.

Downsides and challenges of using AMP

Analytics Complications
Surprisingly, tracking from AMP is not as easy as you might expect; it takes special effort and resources. For starters, AMP does not support custom JavaScript by default, which includes Google Analytics. If you already use Google Analytics on your site and decide to use AMP, you will need to set up a different tag and implement it across all AMP pages. While basic metrics like visitors and engagement will be available, you won’t have the same data that you would from a standard Google Analytics implementation.

AMP isn’t a new type of technology to make your pages lightning fast. What it does is serve up pre-loaded cached versions of your AMP-enabled pages whenever visitors access them. The pages that appear in search results are hosted by Google, which means that you’re showing a cached version of your content. For some, having their content so reliant on Google is a thorny subject.

Ads and Conversions
While AMP pages load quickly, external content on the page is likely to lag behind. This can be a problem when it comes to hosting advertisements, as visitors are likely to scroll past an ad before it has a chance to load, which can destroy any chance at conversion. Additionally, AMP only supports limited types of ad formats.

Certainly, there are many other pros and cons of using AMP. Overall, the effectiveness comes down to how well it is implemented, and proper implementation takes time for analytics setup and page optimization. For example, Google excludes pages from the AMP carousel if the content on the AMP page is not substantially similar to the corresponding responsive mobile page.

For Keen, we’ll be watching closely to see how attempts to speed up the web challenge developers in their approach to analytics and data. We’d like to hear from you: what has been your experience with AMP so far?


Your Users Want Insights - An Intro to Metrics for End-Users

We all want our users to get results.

We want them to spend more time on our platform and see data that reinforces the value they are receiving each time they log in.

Have you ever wondered if user-facing metrics could help?

At Keen we’ve spent a lot of time thinking about how user-facing metrics can enrich the end-user experience for our customers’ platforms.

It’s common to confuse user-facing metrics with embedded analytics, which can be a form of metrics for end users but is typically more internal-facing. Gartner defines embedded analytics as “the use of reporting and analytic capabilities in transactional business applications. These capabilities must be easily accessible from inside the application, without forcing users to switch between systems.”

All analytics provide data, but in a world drowning in data the data itself is not necessarily valuable. It’s relevant and contextual information in the form of personalized user dashboards and other visual representations that ultimately allow users to analyze, answer questions, and improve results without having to leave the app.

So why user-facing metrics, and why now?

The need for user-facing metrics is steadily expanding — one study found that nearly 90% of UK and US application decision makers are planning on investing in embedded analytics in the next 12 months. And the business implications are massive. In a similar study, 90% of the app teams surveyed reported a reduction in customer churn and 91% reported improved win rates due to embedded analytics. Additionally, 68% said that they can charge more for their product because of the added value that the metrics bring.

Additionally, users have begun to expect personalized data when they engage with consumer and business applications. Any well-meaning mother could tell you we’re attached to our apps — we’ve grown accustomed to instant access to huge amounts of information at the touch of a button on any laptop, tablet, or mobile device. Details about our online behaviors have become commonplace with the relentless rise of social media, and this has driven us to expect companies to observe our preferences and actions and to tailor a personal experience.

Some companies are using user-facing metrics to enhance the usage of their services. Pixlee helps huge brands like Marriott and Levi’s curate & display customer content from their biggest fans. They show increased shopping cart conversion among other benefits through their user-facing metrics.


Next Big Sound (NBS) studies the popularity of musicians by tracking data on their popularity from various platforms, like social media, radio, or streaming services. They use this data to help their customers, like advertisers or record labels, understand why certain songs are played more than others, and to help cultivate future musical successes. NBS recently launched a partnership with Spotify that extends these services to artists, who can use this data to understand how impactful their music is and for their own promotion.

In IoT, devices like FitBit or other health trackers owe their success in part to effective data visualization. When you wear one they track several aspects of fitness activity like the steps you take, your rate of recovery and aspects of your sleep habits, such as oxygen intake. Having instant access to the historical and present data concerning your health is a major reason to wear such a device.

Similarly, the app MapMyFitness tracks where you’ve gone by tracing a map on your mobile device or computer. It tells you how many calories you burned, which hills you climbed and how steep they were, and how you rank compared to other runners or bikers, and it lets you connect with them and find new routes, among other things. And there are hundreds of thousands of other apps, all tracking and reporting back usage and results anytime a user logs in.

For these reasons, embedding analytics and metrics into your applications is no longer optional; it’s expected. More than that, it’s become a source of competitive advantage. Imagine the reactions of your users when you begin providing them with the information and insights they’ve been craving.

We’d love to hear how you use analytics to improve applications for your users. Tweet us @keen_io.

Keen and the EU General Data Protection Regulation (GDPR)

Update on Keen and GDPR Compliance

Keen is deeply committed to doing our part to ensure that personal data is adequately protected. As such, we are actively reviewing the requirements of EU Regulation 2016/679 (more commonly referred to as “GDPR”) and how they affect us and our customers. In this blog post we’ll try to provide as much information and guidance as possible for you to remain in GDPR compliance with Keen.

Our Data Protection Philosophy

Keen stores two different classes of data: (a) the account information of our direct customers, as provided to us via accounts on the website and/or through support channels such as e-mail or chat; and (b) data about our customers’ customers in the form of events submitted to our streams API.

We have designed our system to be resistant to attack against either class of data, but the second category (Keen’s customers’ event data) is more complicated due to the fact that we allow highly flexible content and cannot directly control what information is included or how personally identifiable or sensitive the information or data might be. For this reason we always recommend against the storage of any Personally Identifiable Information (PII) or otherwise sensitive data in event properties.

We believe that most use cases for Keen do not inherently rely on personal data, and that such data can be anonymized, pseudonymized, or omitted entirely without losing value. As such, it is more valuable for our customer base as a whole for us to focus our engineering effort on other aspects of the product, rather than building high-assurance security protections that most customers do not need.

That said, we strive to be as secure as possible, and will continue to improve our security posture. We also recognize that some customers do have legitimate use cases for storing some amount of low-sensitivity PII (such as e-mail or IP addresses, for example), and those require a somewhat more rigorous data protection strategy than what we have in place now. So over the coming months we are making investments to move in that direction.

How Keen Secures Data Today

Our data protection strategy spans several dimensions: technology, people, and processes.


The most direct way that we protect data is by limiting access to it using standard industry best practices. All data is stored on hardware in Amazon’s AWS cloud, using a VPC to isolate all servers from the outside internet. These systems can only be accessed via a set of bastion hosts, which are regularly updated with the latest security patches and can only be reached over SSH channels secured by the cryptographic access keys of a select group of Keen employees. We’ve also adopted strict requirements around access to the AWS environment itself, including mandatory Multi-Factor Authentication (MFA) and complex passwords.

This structure makes direct access to our internal systems quite difficult for an unauthorized person, but it cannot protect our public-facing endpoints, such as our website and API. We secure these via the access keys available in each Keen Project or Organization, which adhere to cryptographic best practices.

(Please note that we currently do not encrypt traffic between various internal services within our VPC, nor do we encrypt data at rest. Up to this point we have not felt that there was much value in doing so, since the only practical exploit of this would require direct physical access to Amazon infrastructure. However we do plan to enable basic data-at-rest encryption soon; see roadmap below.)


The Keen web UI includes a mechanism by which authorized Keen employees can view customer data directly. This is used to help investigate and address any issues or questions reported to us by customers, as well as occasionally by our operational engineering team to diagnose and mitigate degradation of service. The mechanism is password-protected and limited to those who require it to provide customer support or to fulfill other responsibilities.

We also adhere to a policy of only using this access when it is necessary, and will seek permission before viewing customers’ raw event data. (In rare circumstances where the need is urgent, such as a system-wide outage, we may skip this step — but only as a last resort.)

Currently this “root” access is all or nothing and we rely on our hiring and training processes to mitigate the risk of unnecessary access by a Keen employee. The build out of a granular access control system is on our roadmap (see below).


We adhere to the following processes to help ensure that data is kept safe:

  • Access management: when a Keen employee leaves the company, we follow a checklist to ensure that all of their permissions are revoked.
  • Design and code reviews: all changes to the system are reviewed carefully by senior engineers, as well as tested in an isolated staging environment prior to deployment to production.
  • Threat modeling: periodically we review the threat model and try to identify gaps, assess risk, and determine what mitigations (if any) should be prioritized.
  • Automated backups: all data is automatically backed up to Amazon S3 to allow us to recover in the event of a catastrophic loss, whether due to malicious attack or other unexpected events. These backups age out over time, so any data which is removed from the source will eventually no longer appear in the backups. (We currently can’t offer any guarantees about how long it will be for any specific piece of data.)
  • Data retention: Keen stores data for as long as it is necessary to provide services to our customers and for an indefinite period after a customer stops using Keen. In most cases, data associated with a customer account will be kept until a customer requests deletion. (There is also a self-service delete API which is suitable for removing small amounts of data.)

Our Security and Privacy Roadmap

We will be making improvements to all of the above according to the following roadmap.

What we are intending to deliver by the GDPR deadline

GDPR goes into effect on May 25, 2018. Prior to that time Keen intends to:

  • Appoint a Data Protection Officer and a data protection working team
  • Build a formal data map
  • Perform internal threat modeling and gap analysis (and set up a recurring schedule)
  • Adopt and/or formalize written policies around core areas, including (but not necessarily limited to): data protection, data backup, data retention, access management, and breach management and reporting
  • Institute formal data protection training for all Keen employees
  • Encrypt data at rest
  • Schedule annual security audit with a 3rd party auditor (however the audit may not be completed until later in 2018)

We also intend to do the necessary legal paperwork to be able to confirm that our Data Sub-processors (primarily Amazon) are GDPR-compliant, and to be able to offer a Data Sub-processor Addendum to the contracts of customers who request it.

What we hope to improve over time

The following are examples of additional security enhancements that will not be addressed by the May 25 deadline:

  • More granular access controls, allowing Keen employees to be granted access according to the Principle of Least Privilege
  • Full data access audit history
  • Lockdown of Keen employee devices, and/or limiting access to customer data to certain approved devices
  • Integration with an intrusion detection system/service
  • Industry certifications

In addition, we expect that threat modeling and gap analysis (both our own and those done by a 3rd party auditor) will identify opportunities to further harden the system and provide redundant layers of risk mitigation. Those will be prioritized and incorporated into our roadmap as appropriate.

Next Steps

Ultimately our goal is to make Keen as valuable as possible to all of our customers. We appreciate your understanding, and also greatly value your input. If you have questions, concerns, or feedback about our approach or how it will affect your own GDPR compliance efforts, please reach out to us!


Order and Limit Results of Grouped Queries (Hooray!)

Greetings Keen community! I’d like to make a quick feature announcement that will (hopefully) make many of you happy 😊

At Keen IO we’ve created a platform for collecting and analyzing data. In addition to the ability to count the individuals who performed a particular action, the API includes the ability to group results by one or more properties of the events (similar to the GROUP BY clause in SQL). For example: count the number of individuals who made a purchase and group by the country they live in. This makes it possible to see who made purchases in the United States versus Australia or elsewhere.

This grouping functionality can be very powerful, but there’s one annoying drawback: if there are many different values for your group_by property, the results can get quite large. What if I’m only interested in the top 5 or 10? Until now the only option was to post-process the response on the client (e.g. using Python or JavaScript) to sort and then discard the unwanted groups.

Today I’m excited to announce that, by popular demand, we’ve made this much easier! We recently added a feature called order_by that allows you to rank and return only the results that you’re most interested in. (To those familiar with SQL: this works very much like the ORDER BY clause, as you might expect.)

The order_by parameter orders results returned by a group_by query. The feature includes the ability to specify ascending (ASC) or descending (DESC) ordering, and allows you to order by multiple properties and/or by the result of the analysis.

Most importantly the new order_by feature includes the ability to limit the number of groups that are returned (again, mirroring the SQL LIMIT clause). This type of analysis can help answer important questions such as:

  • Who are the top 100 game players in the US?
  • What are the top 10 most popular article titles from last week?
  • Which 5 authors submitted the most articles last week?
  • What are the top 3 grossing states based on total purchases during Black Friday?

order_by can be used with any Keen query that has a group_by, which in turn can be used with most Keen analysis types. (limit can be used with any order_by query.) For more details on the exact API syntax please check out the order_by API docs.
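As a sketch of the syntax (the collection, property names, and timeframe here are hypothetical; see the order_by API docs for the authoritative shape), a count query answering the "top 5 authors" question above might use a body like this, where the special property_name "result" orders groups by the result of the analysis:

```json
{
  "event_collection": "article_submissions",
  "timeframe": "previous_7_days",
  "group_by": ["author"],
  "order_by": [{ "property_name": "result", "direction": "DESC" }],
  "limit": 5
}
```

With limit set to 5, only the five highest-count groups come back, so there is nothing left to sort or discard on the client.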

There is one important caveat to call out: using order_by and limit in and of itself won’t make your queries faster or cheaper, because Keen still has to compute the full result in order to be able to sort and truncate it. But being able to have the API take care of this clean-up for you can be a real time saver; during our brief internal beta I’ve already come to rely on it as a key part of my Keen analysis toolbox.

I’d like to extend a huge thanks to our developer community for all the honest constructive feedback they’ve given us over the years (on this issue and many others). You’re all critical in helping us understand where we can focus our engineering efforts to provide the most value. On that note: we have many more product enhancements on the radar for 2018, so if you want to place your votes we’re all ears! Feedback (both positive and negative) on the order_by feature is also welcome, of course. Please reach out to us at any time 🚀

Kevin Litwack | Platform Engineer

Keen is Joining the Scaleworks family

Today we’re excited to share that we’re starting a new chapter and joining the Scaleworks family.

Keen set out to empower developers with a custom analytics platform and the most seamless SaaS tool out there for data-handling. We created a periscope into user activity that we’re really proud of. We’ve helped companies easily build and embed all sorts of analytics for teams and customers, and we often power the dashboards in your favorite SaaS tools. It has been fulfilling knowing end-users rely on us for insights and that we help our customers make better decisions and build better products.

The Scaleworks team lives and breathes growing SaaS and has a great track record with businesses at our stage. They bring a ton of collective experience, a focus on strategic direction, an eye for identifying and scaling efficiencies, innovation around market and customer demand, and strong business fundamentals. Given where we are and the path ahead, the combination just makes sense. There might be some things you’re wondering about. Yes, we’re going to continue to invest in product development, platform performance, and service levels. Our ethic around customer success remains as strong as ever, and it is a core principle of Scaleworks’ as well. Please let us know if you have any questions.

We’re thankful to the founding team that set the vision and got us here, and to our customers, and we now have our eyes on the future to take Keen to the next level. With an appreciation for what got Keen to where we are, we’re excited to fulfill Keen’s potential going forward.

Tracking GitHub Data with Keen IO

Today we’re announcing a new webhook-based integration with one of our favorite companies, GitHub!

We believe an important aspect of creating healthy, sustainable projects is having good visibility into how well the people behind them are collaborating. At Keen IO, we’re pretty good at capturing JSON data from webhooks and making it useful, which is exactly what we’ve done with GitHub’s event stream. By allowing you to track and analyze GitHub data, we’ve made it easy for open source maintainers, community managers, and developers to view and discover more information to quantify the success of their projects.

This integration records everything from pushes, pull requests, and comments, to administrative events like project creation, team member additions, and wiki updates.

Once the integration is set up, you can use Keen IO’s visualization tools, like the Explorer, Dashboards, and the Compute API, to dig into granular workflow metrics, like:

  • Total number of first-time vs. repeat contributors over time
  • Average comments per issue or commits per pull request, segmented by repo
  • Pull request additions or deletions across all repositories, segmented by contributor
  • Total number of pull requests that are actually merged into a given branch
Examples of charts built on this data include the number of comments per day on Keen IO’s JavaScript library repos, the number of pull requests per day merged in Keen IO’s repos (where “false” represents not merged), and the percentage of different author associations of pull request reviews.
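As a sketch of how the merged-pull-requests chart might be driven (the collection and property names here are hypothetical and depend on how GitHub's webhook payloads land in your project), a daily count grouped by merge status could look like:

```json
{
  "event_collection": "pull_request",
  "timeframe": "previous_30_days",
  "interval": "daily",
  "group_by": ["pull_request.merged"]
}
```

Each daily interval then contains one count per merge status, which maps directly onto a stacked or grouped bar chart.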

Ready to try it out?

Assigning webhooks for each of these event types can be a tedious process, so we created a simple script to handle this setup work for you.

Check out the setup instructions here. With four steps, you will be set up and ready to rock in no time.

What metrics are you excited to discover?

We’d love to hear from you! What metrics and charts would you like to see in a dashboard? What challenges have you had working with GitHub data? We’ve talked to a lot of open source maintainers, but we want to hear more from you. Feel free to respond to this blog post or send us an email. Also, if you build anything with your GitHub data, we’d love to see it! ❤

Announcing Hacktoberfest 2017 with Keen IO

It’s October, which you probably already know! 👻 But more importantly, that means it is time for Hacktoberfest! Keen IO is happy to announce we will be joining Hacktoberfest this year.

What is Hacktoberfest?

Digital Ocean, together with GitHub, launched Hacktoberfest in 2014 to encourage contributions to open source projects. If you open four pull requests on any public GitHub repo, you get a free limited-edition shirt from Digital Ocean. You can find issues in hundreds of different projects on GitHub using the hacktoberfest label. Last year, 29,616 registered participants opened at least four pull requests to complete Hacktoberfest successfully, which is amazing. 👏

Hacktoberfest with Keen IO

If you have ever seen our Twitter feed, you know at Keen IO we love sending our community t-shirts. So, we have something to sweeten the deal this year. If you open and get at least one pull request merged on any Keen IO repo, we will send you a free Keen IO shirt and sticker too.

You might wonder… What kind of issues are open on Keen IO GitHub repos? Most of them are on our SDK repos for JavaScript, iOS/Swift, Java/Android, Ruby, PHP, and .NET. Since we value documentation as a form of open source contribution, a number of them relate to documentation updates. We labeled issues that have a well-defined scope and are self-contained with “hacktoberfest”. You can search through them here.

If you have an issue in mind that doesn’t already exist, feel free to open an issue on a Keen IO repository and we can discuss if it is an issue that is a good fit for Hacktoberfest.

Now, how do you get your swag from Keen IO?

First, submit a pull request for any of the issues labeled “hacktoberfest”. It isn’t required, but it’s helpful to comment on the issue you are working on to say you want to complete it; this prevents other people from doing duplicate work.

If you are new to contributing to open source, this guide from GitHub is super helpful. We are always willing to walk you through it too. You can reach out in issues and pull requests, email us, or join our Community Slack.

Then, once you have submitted a pull request, gone through the review process, and gotten your PR merged, we will ask you to fill out a form for your shirt.

Also, don’t forget to register for your limited edition Hacktoberfest shirt from Digital Ocean if you complete four pull requests on any public GitHub repository. Their site has more details on the month-long event.

These candy corns are really excited about Hacktoberfest

Thank you! 💖

We really appreciate your interest in contributing to open source projects at Keen IO. We are currently working to make it easier to contribute to any of the Keen IO SDKs, and we are happy to see any interest in the projects. There’s an open issue for everyone, from practicing writing documentation to improving the experience of using the SDKs. Every contribution makes a difference and matters to us. At the same time, we are happy to help others try contributing to open source software. Can’t wait to see what you create!

See you on GitHub! 👋


P.S. Keen IO has an open source software discount that is available to any open source or open data project. We’d love to hear more about your project of any size and share more details about the discount. We’d especially like to hear about how you are using Keen IO or any analytics within your project. Please feel free to reach out for more info.

SendGrid and Keen IO have partnered to provide a robust email analytics solution

Today we’re announcing our partnership with SendGrid to provide the most powerful email analytics for SendGrid users.


SendGrid Email Analytics — Powered by Keen IO

Connect to Keen from your SendGrid account in seconds. Start collecting and storing email data for as long as you need it. No code or engineering work required!

The SendGrid Email Analytics App works right out of the box to provide the essential dashboards and metrics needed to compare and analyze email campaigns and marketing performance. Keen’s analytics also include detailed drill-down capabilities to understand users and their behavior.

Keen IO’s analytics with SendGrid enables you to:

  • Know who is receiving, opening, and clicking emails in real time
  • Build targeted campaigns based on user behavior and campaign performance
  • Find your most or least engaged users
  • Extract lists of users for list-cleaning and segmentation
  • Drill in with a point-and-click data explorer to reveal exactly what’s happening with your emails
  • Keep ALL of your raw email event data (No forced archiving)
  • Build analytics for your customers directly into your SaaS platform
  • Programmatically query your email event data by API



SendGrid Email Analytics — Powered by Keen IO

The solution includes campaign reports, as well as an exploratory query interface, segmentation capabilities, and the ability to drill down into raw email data.

Interested in learning more? Check out the Keen IO Email Analytics Solution on SendGrid’s Partners Marketplace.

.NET Summer Hackfest Round One Recap

We kicked off the .NET Summer Hackfest with the goal of porting our existing Keen IO .NET SDK to .NET Standard 2.0, and I’m excited to say that we just about accomplished it! Our entire SDK, unit tests, and CI builds have been converted to run cross-platform on .NET Standard. All that’s left is a little bit of cleanup and some documentation updates that are in the works.

There are some big benefits to adopting .NET Standard 2.0. Here are some highlights:

  • The Keen .NET SDK can be used with .NET Core, which means it can be included in apps deployed on Linux, Mac OS, and cool stuff like Raspberry Pi
  • Mono-based projects, which may or may not have worked before, will be officially supported in Mono’s next version. This also means Unity can use the new .NET Standard library!
  • We can multi-target to reduce the size and complexity of the codebase
  • All the Xamarin variations will be supported in their next version

Everyone who contributed during this event was open, collaborative, and ready to learn and teach. We were very happy to be a part of this and look forward to future ‘hackfests’.

I’d like to give a special shoutout and thanks to our community contributors who jumped in on the project: Doni Ivanov and Tarun Pothulapati.

I’d also like to thank Justin & Brian from our team, Jon & Immo from Microsoft, & Microsoft MVP Oren for all their work and support during our two week sprint.

9 Projects Showcased at Open Source Show and Tell 2017

The 4th annual Open Source Show & Tell has wrapped up, and we had a great time seeing some cool open source projects.

Ashley took us on an interactive journey building smart musical IoT plushies, and Beth wowed us with her talk on unifying the .NET developer community.

Joel walked us through the inner workings of software development (the good, the bad, and the ugly) and showed us how Nix, the purely functional open source package manager, can help with package and configuration management. Zach took us on a journey into why the open source project Steeltoe was built, and showed us how developers can write in .NET and still follow industry best practices when building services for the cloud.

We learned from Josh at Algolia how to scale a developer community by creating webhooks for community support, and Sarah took us on a journey through open source’s role in cloud computing at companies like Google.

Julia presented about internationalizing if-me, an open source non-profit mental health communication platform maintained by contributors from many backgrounds and skill sets.

There were lots of other excellent talks about open source projects, like Eiso’s presentation of Babelfish, a self-hosted server for source code parsing, and Nicolas’s talk about helping people build better APIs by following best practices.

Check out all of the topics and talks here.

Big thanks to GitHub, Google, and Microsoft for co-organizing and hosting. Looking forward to seeing you at Open Source Show and Tell next year!

We ❤ open source. We’d love to hear more about your project and share it with others. To help with any analytics needs, Keen IO has an open source software discount available to any open source or open data project. Please feel free to reach out for more info.

Just Announced: Customize Extractions + Better Funnel Data

We’re excited to announce a couple of key updates to the Keen IO Data Explorer — Keen’s point-and-click tool for analyzing and visualizing data. Want to get started? Log in to your Keen IO account.

What’s new?

Customize your extraction fields

At Keen, we believe you should be able to do what you want with your data, which is why we support extractions. We’ve made extractions even easier by enabling you to select which fields you would like to view in an extraction. This makes the extraction of your most important metrics and KPIs painless and clean. Want to check it out? Use the Data Explorer to select the fields you want to extract from an event collection.

Use the Data Explorer to select the fields you want to extract

Get the ‘actors’ from any step in a behavior funnel

Funnels are a powerful tool for understanding user-behavior flows and drop-off rates. We’ve added the ability to get the ‘actors’ from any step in a funnel, which enables you to see who performed each step. Example use cases might be:

  • Which users made it all the way to the purchase form?
  • Which users watched our promotional video all the way to the end?

To use this feature, just check the “with actors” box in the relevant step of your funnel query in the Data Explorer.
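For readers who prefer the API, the same idea can be expressed as a funnel query body. This is only a sketch under stated assumptions: the collection names (pageviews, purchase_form_views) and the actor property user.id are hypothetical examples, and the per-step with_actors flag mirrors the checkbox described above.

```python
import json

def funnel_step(event_collection, actor_property, timeframe, with_actors=False):
    """Build one step of a funnel query; actors are requested per step."""
    step = {
        "event_collection": event_collection,
        "actor_property": actor_property,
        "timeframe": timeframe,
    }
    if with_actors:
        step["with_actors"] = True
    return step

# Hypothetical two-step funnel: who viewed the product page, and which of
# those users made it to the purchase form (actors requested on that step)?
payload = {
    "steps": [
        funnel_step("pageviews", "user.id", "this_7_days"),
        funnel_step("purchase_form_views", "user.id", "this_7_days",
                    with_actors=True),
    ]
}
print(json.dumps(payload, indent=2))
```

The response for a step with actors includes the list of actor_property values that completed that step, which is what powers the “which users made it?” questions above.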

Ready to get started? Log in to your Keen IO account or create a free account.

Questions? Feature requests? Reach out to us on Slack or Twitter.

Happy Exploring!

Keen Cached Datasets: A Primer

I’m going to assume if you’re reading this, you’re interested in cached datasets. Maybe you’re trying to figure out if Keen is the right tool for your data project. Maybe you already use Keen, and are wondering if this will help you get the most out of your implementation. Maybe you’ve just watched the movie Primer, and have stumbled here looking for an explanation of what the heck just happened. In that case, I will refer you (SPOILER ALERT) here.

What are Cached Datasets used for?

Essentially, Cached Datasets are a way to pre-compute data for hundreds or thousands of entities at once. Once set up, you can retrieve results for any one of those entities instantly.

We use them internally for usage reporting, billing, monitoring, and our customer-success dashboard.

Ok, that’s wonderful, but can you give me an example?

Let’s say you are tracking e-book downloads for lots of different authors in a storefront like Amazon. You want to create a report for each author showing how many people are viewing their books, like this:



To create this, you would first need to track an event each time someone downloaded a book, like this:

book_download = {
   title: "On the Pulse of Morning",
   author: "Maya Angelou",
   timestamp: "2017-05-18T21:23:49.000Z"
}

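As a minimal sketch of recording such an event from code (assuming Keen’s standard single-event HTTP endpoint; PROJECT_ID and WRITE_KEY are placeholders you would replace with your own credentials), it might look like this in Python:

```python
import json
import urllib.request

PROJECT_ID = "YOUR_PROJECT_ID"  # placeholder
WRITE_KEY = "YOUR_WRITE_KEY"    # placeholder

def record_download(title, author, timestamp):
    """Build the HTTP request that records one book_download event."""
    event = {
        "title": title,
        "author": author,
        # Override Keen's default arrival timestamp with the event's own time.
        "keen": {"timestamp": timestamp},
    }
    url = f"https://api.keen.io/3.0/projects/{PROJECT_ID}/events/book_download"
    return urllib.request.Request(
        url,
        data=json.dumps(event).encode(),
        headers={"Authorization": WRITE_KEY,
                 "Content-Type": "application/json"},
        method="POST",
    )

req = record_download("On the Pulse of Morning", "Maya Angelou",
                      "2017-05-18T21:23:49.000Z")
# urllib.request.urlopen(req) would actually send it; here we just build it.
```

In practice you would use one of the Keen SDKs instead of raw HTTP, but the shape of the event is the same either way.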
Now, we could create a query for each author, each with a filter for that specific author’s name. Here’s one that would retrieve the data for Margaret Atwood’s dashboard:

$ curl https://api.keen.io/3.0/projects/PROJECT_ID/queries/count \
     -H "Authorization: READ_KEY" \
     -H 'Content-Type: application/json' \
     -d '{
         "event_collection": "book_download",
         "timeframe": "this_10_months",
         "interval": "monthly",
         "group_by": "title",
         "filters": [{
             "property_name": "author",
             "operator": "eq",
             "property_value": "Margaret Atwood"
         }]
     }'
But that would require a lot of overhead. For one, you would have to run the query fresh each time an author requested their dashboard, making them wait for it to load. For another, the administrative cost is high, because you’d have to create a new query any time you needed a dashboard for a new author.

Enter Cached Datasets! With Cached Datasets, we can show every author a dashboard that lets them see how each of their books is performing over time, all with a single Keen data structure. It also stays up to date behind the scenes, even as new authors are added, and it scales to thousands of authors.

That’s pretty neat, how do I set one up?

Step 1: First, define the query. The basics of the query are simply counting these book_download events. You will also need to provide an index, timeframe, and interval.

  • index: In our example, the index would be author, since that’s the property we want to use to separate the results. (In an actual implementation, you would probably have both the author’s name and some kind of ID, in which case you would index on the ID to keep your data cleaner. For the sake of keeping this example simple, we’re just going to use the name.)
  • timeframe: This bounds the results you can retrieve from the dataset. It should be as broad as you ever expect to need, since anything outside this timeframe will never be retrievable from this dataset. In our example, we’re going to go back 24 months.
  • interval: This defines how you want your data to be bucketed over time (e.g., minutely, hourly, daily, monthly, yearly). In our example, we’re going to do monthly.

Step 2: Make an API request to kick off that query. Once you do, Keen will run the query and update it every hour.

Here’s an example request that would create a dataset based on those book_download events. The group_by is what allows us to separate out the views by book title. The PROJECT_ID and all keys will be provided when you create a Keen account.

# The dataset's name is chosen in the URL path (here, book-downloads-by-author).
$ curl https://api.keen.io/3.0/projects/PROJECT_ID/datasets/book-downloads-by-author \
    -H "Authorization: MASTER_KEY" \
    -H 'Content-Type: application/json' \
    -X PUT \
    -d '{
    "display_name": "Book downloads for each author",
    "query": {
        "analysis_type": "count",
        "event_collection": "book_download",
        "group_by": "title",
        "timeframe": "this_24_months",
        "interval": "monthly"
    },
    "index_by": ["author"]
    }'

Step 3: Get lightning-fast results. Now you can instantly get results for any author. For example, here’s the query you would use to retrieve results for Maya Angelou’s dashboard, showing the last 2 months of downloads:

$ curl 'https://api.keen.io/3.0/projects/PROJECT_ID/datasets/book-downloads-by-author/results?api_key=READ_KEY&index_by="Maya Angelou"&timeframe=this_2_months'

The results you’d get would look like this (this query was run on April 19th, so the most recent “month” is only 19 days long):

{
  "result": [
    {
      "timeframe": {
        "start": "2017-03-01T00:00:00.000Z",
        "end": "2017-04-01T00:00:00.000Z"
      },
      "value": [
        { "title": "I Know Why the Caged Bird Sings", "result": … },
        { "title": "And Still I Rise", "result": … },
        { "title": "The Heart of a Woman", "result": … },
        { "title": "On the Pulse of Morning", "result": … }
      ]
    },
    {
      "timeframe": {
        "start": "2017-04-01T00:00:00.000Z",
        "end": "2017-04-19T00:00:00.000Z"
      },
      "value": [
        { "title": "I Know Why the Caged Bird Sings", "result": … },
        { "title": "And Still I Rise", "result": … },
        { "title": "The Heart of a Woman", "result": … },
        { "title": "On the Pulse of Morning", "result": … }
      ]
    }
  ]
}

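Once a response shaped like the one above comes back, collapsing it into a total per title is a few lines of client-side code. Here’s a sketch; the counts in the stand-in response below are made up purely for illustration:

```python
from collections import Counter

def downloads_per_title(response):
    """Sum per-title counts across every interval of a dataset response."""
    totals = Counter()
    for interval in response["result"]:
        for entry in interval["value"]:
            totals[entry["title"]] += entry["result"]
    return dict(totals)

# Stand-in response with made-up counts, shaped like the API output above.
response = {
    "result": [
        {
            "timeframe": {"start": "2017-03-01T00:00:00.000Z",
                          "end": "2017-04-01T00:00:00.000Z"},
            "value": [{"title": "On the Pulse of Morning", "result": 7},
                      {"title": "The Heart of a Woman", "result": 4}],
        },
        {
            "timeframe": {"start": "2017-04-01T00:00:00.000Z",
                          "end": "2017-04-19T00:00:00.000Z"},
            "value": [{"title": "On the Pulse of Morning", "result": 3}],
        },
    ]
}
print(downloads_per_title(response))
# {'On the Pulse of Morning': 10, 'The Heart of a Woman': 4}
```

A charting library like keen-dataviz.js does essentially this kind of reshaping for you when it renders the per-author dashboards.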
And that’s it! This query, with a timeframe of this_10_months (and different authors) is exactly what was used (along with keen-dataviz.js) to create those awesome dashboards. Here they are again, in case you forgot:



For more information, and some implementation details, check the docs here.

Lastly, we’re still building out monitoring for this Early Release feature. If you have any questions, or run into any issues while using this feature, please drop us a line.

What’s up in OSS in 2017

We are very excited here at Keen IO to announce the fourth annual Open Source Show and Tell! The event is Friday, June 9th from 1pm to 5pm in downtown San Francisco. Grab a ticket here.

We come together to learn and share about open source software and what people are working on. The event allows members of the community to submit talks about anything open source.

This year we are partnering with Google Cloud (hosted at the Google Launchpad in downtown SF), GitHub, and Microsoft + Open Source. If you’re curious, just getting into open source, or have been a community member for a long time, please come and participate. All are welcome.

Here are some of the open community talks from 2015 and 2016:

Abstracts from 2015

Heather Rivers talking at the 2015 OSSAT

Abstracts from 2016

Submit your own talk!

The free 4-hour event consists of multiple speakers giving 20-minute talks on all sorts of interesting open source topics and technologies. And of course, there’s a social hour afterwards.

Grab a ticket and come learn what’s going on in OSS in 2017.

Feel free to reach out to me with questions. :D

Twilio Partners with Keen IO to Provide Contact Center Analytics

Exciting news at the Twilio Signal Conference this week! Twilio announced they have partnered with Keen IO to provide out-of-the-box contact center reporting and analytics. Now contact centers have the essential dashboards and metrics needed to run a contact center. The add-on seamlessly integrates with Twilio TaskRouter to provide immediate visibility into contact center usage.

Interested in learning more? The Add-on includes out-of-the-box standard contact center reports, as well as an exploratory query interface and the ability to drill down into raw task data.

Log in to install the Keen IO Contact Center Analytics Add-on in Twilio’s marketplace, or check out the product screenshots below!

Out of the box reporting on queues and channels
Agent performance dashboards
Historical performance heat maps
Full access to raw data and exploratory analysis

The Contact Center Analytics Add-on enables you to:

  • Monitor your contact center with out-of-the-box dashboards and email reporting
  • Explore deeper insights in your data with a point-and-click visual explorer — no code or knowledge of SQL required
  • Connect other data sources, like CSAT or revenue, to customize your reports
  • Share sets of metrics and unified KPIs across every team, project, and department
  • Automate workflows and build extensions on a well-documented API
Twilio Announcing the new Add-on in the SIGNAL Keynote

Happy analyzing! We can’t wait to see what you build! ❤

Log in to install the Keen IO Contact Center Analytics Add-on in Twilio’s marketplace.