Keen is Joining the Scaleworks family

Today we’re excited to share that we’re starting a new chapter and joining the Scaleworks family.

Keen set out to empower developers with a custom analytics platform and the most seamless SaaS tool out there for data-handling. We created a periscope into user activity that we’re really proud of. We’ve helped companies easily build and embed all sorts of analytics for teams and customers, and we often power the dashboards in your favorite SaaS tools. It has been fulfilling knowing end-users rely on us for insights and that we help our customers make better decisions and build better products.

The Scaleworks team lives and breathes growing SaaS businesses and has a great track record with companies at our stage. They bring a ton of collective experience and a focus on strategic direction, identifying and scaling efficiencies, innovating around market and customer demand, and business fundamentals. Given where we are and the path ahead, the combination just makes sense. There might be some things you’re wondering about. Yes, we’re going to continue to invest in product development, platform performance, and service levels. Our ethic around customer success remains as strong as ever, and it is a core principle at Scaleworks as well. Please let us know if you have any questions.

We’re thankful to the founding team that set the vision and got us here, and to our customers, and we now have our eyes on the future to take Keen to the next level. With an appreciation for what got Keen to where we are, we’re excited to fulfill Keen’s potential going forward.

Announcing: .NET Summer Hackfest

Keen IO is excited to be the first featured project in the first-ever .NET Summer Hackfest! Our session kicks off next Monday, July 24, and wraps up on Friday, August 4.

.NET Summer Hackfest is a six-week, community-run open source hackfest. Teams get together to contribute to open source projects for a two-week session. It’s a great opportunity to get involved in open source.

The main project is porting our .NET SDK to .NET Standard 2.0, but we will also list various SDK improvements and example code projects if porting doesn’t sound like your cup of tea. We have projects and issues outlined in our GitHub repo, along with contribution guidelines and information on how to get started. There’s also a Slack channel, #dotnetsummer, in our Community Slack Chat, and we’ll be there to hang out and talk shop the whole two weeks.

Check out the .NET Foundation’s announcement about our project!

There will be a kickoff post on July 24th, and we’ll update this post with that info.

FYI: Contributors don’t need to commit to working the whole two weeks. We are designing this to be collaborative and hope that it’s educational and useful to anyone who gets involved. Please feel free to reach out on Slack or Twitter (@keen_io) with questions or suggestions.

Here’s a video walking through getting started setting up your .NET dev environment and contributing to the project:

Happy Hacking.

Keen IO .NET SDK on GitHub


Happy Data Hour with Readme and Keen IO

Free on July 20th? Keen IO and ReadMe are having a casual drinkup from 5:30–7:30 PM, and we’d love for you to join us!

Chat about data, startups, and community, tell us what you really think about our services, or just swing by and hang out for a bit.

Please RSVP here if you’re coming so we know how many people to plan for. See you there!

Join us for Happy Data Hour!

Below is some nerdy JSON which probably looks terrible on your mobile device :)

   {
      "event": {
         "name": "Happy Data Hour",
         "type": "meetup",
         "pretty_timestamp": "July 20th, 5:00pm-7:00pm (PST)",
         "location": {
            "street_address": "",
            "city": "San Francisco",
            "zip_code": "94103",
            "special_instructions": "Featuring data and community members from Keen IO and ReadMe!"
         },
         "beverages": true,
         "host": "Keen IO & ReadMe"
      }
   }

Creative Code and APIs at Twilio's SIGNAL

Last week I attended SIGNAL, the developer conference by Twilio, with the Keen IO team. I’m happy to say that Twilio has figured out the art of conferences.

Developer conferences are a weird thing. They are a mystical form of art consisting of education, social interaction, and celebration. Some are amazing, others are just good, and some gain whispers across sponsors for how bad they are yet somehow still manage to happen every year.

I was impressed by how Twilio created a conference for everyone, and as a developer I felt right at home.

Why was SIGNAL awesome?

I could talk about a lot of different things: the live coding, great conversations, hackable badges, generally awesome talks around communication and code that were relevant to anyone, and much more. But I want to focus on two ideas that I saw a lot of at the conference:

  • Code is creative
  • APIs’ role in this rapidly changing world

These were ideas that Jeff Lawson, CEO and co-founder, brought up very early on:

As Lawson said, the Hollywood narrative of the “hacker” is broken. It ignores that there is more to coding than math. Coding is an art. And it wasn’t just talk: I saw it again and again in talks throughout the conference.

Code is creative

It was really hard to pick just a few talks that I thought highlighted how code is creative, but here are a few of them:

Rachel Simone Weil’s talk “Hertzian Tales: Adventures in ‘Useless’ Hacking”

Rachel really digs into a question that many of us have when we hack and build projects that will never be a “business.” While some of the hacking could be considered “useless,” she touches on the real benefits of creative projects from a critical design lens.

Jenn Schiffer’s talk “‘What If Twilio Could…’: A Tale of Glitch, Twilio, and the Power of Friendship”

If you haven’t gotten to listen to a talk from Jenn, you should! In this talk, Jenn and a friend came up with a bunch of creative and random ideas based on the prompt “What if Twilio could…” They were able to be more creative by relying on APIs and not worrying about whether the ideas were technically possible. Jenn digs into some critical questions, such as: how do we keep the “a-ha moment” going when we have these creative ideas? And how do we work with people while building and learning new things?

Lauren Leto’s talk “At-Home Batphone: The Future of Phone Numbers and Noble MVNOs”

Can you imagine a world where you never need to check your phone and the messages that need to get to you do? Lauren is building for that future with APIs, like Twilio. I love this one quote from the talk:

“When anyone can do it, there’s more chance for creativity.”

Andrew Reitano’s talk “NESpectre: The Massively Multi-Haunted NES” and Dan Gorelick’s talk “Crowdsourcing Music via WebSockets: Using Scalable Technologies to Enable Musical Expression”

These last two talks really highlighted interdisciplinary creativity. While I strongly believe all code is interdisciplinary, these two took it to another extreme. Also, I played games and music with 60+ other people in the room while watching these talks, which was freaking awesome!

One other thing about these two talks is that they had nothing to do with Twilio, which I thought was great. Twilio put developers first by choosing to educate attendees on creative and interesting uses of technology over their own API.

How do APIs come into play with this creativity?

As some of the speakers touched on, APIs open up opportunities. When you get to focus on the idea instead of on whether something is technically possible, what you build with the code is more creative.

Lawson asked questions in the keynote like: how many business problems could we be solving if we had the right APIs to solve them? And why isn’t that an API?


When dealing with inflexible legacy systems, we don’t always get to solve the most creative problems. APIs allow us to apply our creative energy on a whole new set of problems waiting to be solved.

Lawson also asked, “How big can this economy get?” This really turns into a question for developers. When we are creative, what are the limits with building with APIs? At Keen IO, we are still pushing those limits today while we explore the possibilities that are part of an Unstoppable API Era.

It is common to say that software is eating the world, but in many ways APIs are eating the world.

Our own API stories

Many of us have our own API stories. Suz Hinton mentioned in her talk about immersive experiences with Chat Bots + IRL Bots that many of us, including herself, have a “Twilio Story.” This idea came up constantly at the conference.

For example, my own “Twilio Story” is that the Twilio API Docs helped me get interested in web development. Previously, I had been writing Java and C++ programs that were completely disconnected from the Internet. The Twilio API Docs helped me set up my first web server, a Python Flask server, in order to send a text message for a project I was working on. This was a life-changing experience for me.

Paul Fenwick’s first Hello World experience with Twilio turned into building the National Rick Astley Hotline, and then he gave an awesome talk about it at SIGNAL! Basically, he knew nothing about the technology, but because the APIs and technology existed, he was able to focus on the creative use case first.

A conference for everyone

Lastly, the final part of a conference is usually the “celebration.” Twilio calls its conference afterparty $bash. (Feel free to insert your own bash jokes here.) I’d say that Twilio sprinkled celebration into a lot of parts of the conference, but this is where it was at its greatest.

Photo booths + Face paint artists = 💖

I was definitely unsure about this celebration, though. When someone tells you there are going to be “coding challenges” and “puzzles” in a dark environment with lasers, loud music, and alcohol, in a warehouse on a pier in San Francisco, you can’t blame me for being slightly pessimistic about how “fun” it would be.

I quickly realized that there was really something for everyone at $bash. If “coding challenges” weren’t your thing, there were half a dozen other things you could do instead. That’s why $bash was special to me. As an introvert who really likes conferences yet is exhausted at the end of them, afterparties aren’t always my idea of “fun.” $bash even got two of our co-founders to stick around and try out the coding challenges.

As Kyle Wild, our CEO, said:

“Signal was like a case study in how to make a conference for introverts. I ❤❤❤❤ it and want to go back every year.”

Congrats on finding the right mixture of mystical conference art, Twilio! See you next year.

P.S. At SIGNAL, we also announced our partnership with Twilio to provide Contact Center Analytics. Check out our blog post about analytics with the TaskRouter API:

Twilio’s Al Cook used multiple APIs, including the TaskRouter API and Keen IO, to build Contact Center Analytics; see the talk here.

P.P.S. I highly recommend going to check out some of the talks I shared. If you loved those, here are a few more favorites:

From left to right: The Democratization of State: How exposing real-time state can improve your business, Lucky: Examining the Barriers to Contributing to Open Source
From left to right: Coding for a Cause: SMS for Voter Registration, and Build Twilio Apps That Scale to the Moon

If you think “Code is Creative” or the talks that were included are awesome, consider clicking the ❤ below!

Join Us at Twilio for Happy API Hour


Twilio’s Signal Conference 2017 is just around the corner! We’re excited to meet thousands of fellow developers in San Francisco for 2 days of talks, panels, events, knowledge sharing, and fun. Come visit us at booth i2 to say hi, get some sweet swag, give us a high-five, ask questions about APIs, or get a sneak peek of our latest product collaboration with Twilio. 📊

On the evening of Wednesday, May 24th, Keen IO is co-hosting a Happy API Hour with our friends at Auth0 and Algolia. Join us for an evening of networking and hanging out, with plenty of food, beer, wine, and refreshments. Space is limited, so help us out by registering early with the link below:



Happy Hour Event Details

Where: Rogue Ales Public House, 673 Union Street, San Francisco, CA 94133

When: May 24, 2017, 7:30–9:30pm

What: Happy Hour/afterparty, space is limited!

How: RSVP on Eventbrite to reserve your spot


About our Co-Sponsors

Algolia helps developers connect their users with what matters most. Their hosted search API powers billions of queries for thousands of websites & mobile applications every month, delivering relevant results in an as-you-type search experience in under 100ms anywhere in the world. Algolia’s full-stack solution takes the pain out of building search: they maintain the infrastructure and the engine, and they provide extensive documentation for dozens of up-to-date API clients and SDKs with all the latest search features, so you can focus on delighting your users.

Auth0 provides frictionless authentication and authorization for developers. The company makes it easy for developers to implement the most complex identity solution for their web, mobile and internal applications. Ultimately Auth0 allows developers to control how a person’s identity is used with the goal of making the internet safer.

About ourselves — Over 60,000 developers use Keen IO APIs to capture, analyze, and embed event data into their tools and products. Thousands of customers rely on Keen’s event data platform to white label data applications in media, e-commerce, adTech, gaming, IoT and retail. Keen’s customers query trillions of data points daily. Keen IO also values and promotes empathy, introspection, distributed innovation, continuous learning, playing to your strengths, and patching your weaknesses with diverse collaborators.


shhh… we may or may not have VIP wristbands for the happy hour event at our booth i2!

See you there at the afterparty + Twilio Signal!

The Future of History

Brahe & Kepler

When people ask me why we wanted to start Keen, I tell them this story of two scientists.

Tycho Brahe by Eduard Ender (1822–1883)

Tycho Brahe was a Danish astronomer in the 16th century. You probably haven’t heard of him. He wasn’t a great physicist or mathematician, but he had a really important insight, which was that the astronomers of the day were spending a lot of time working with really bad data. The data was longitudinally large (hundreds of years of star charts, handed down through the ages), but crappy. Brahe realized that he couldn’t draw useful conclusions from poor data, so his first step was to re-instrument the universe with the right data model. In each of his nightly “data snapshots,” Brahe wanted to document the position, motion, and brightness of every single celestial object in the sky.

To accomplish this, Brahe built several versions of his own observatory from scratch. He built all his own instruments, and he kept such extensive records that he even built his own paper mill just to keep up — data storage wasn’t quite as cheap back then as it is today. And after sacrificing much of his fortune (plus over 30 years of nightlife), he suddenly died.

Tycho Brahe’s Uraniborg from Brahe’s book Astronomiae instauratae mechanica (1598)

Luckily he had an assistant named Johannes Kepler. You probably have heard of him. Kepler spent over 20 years running calculations on Brahe’s data. In the end, he devised the laws of planetary motion, which inspired the work of Newton and Einstein and the foundation of modern physics.

According to NASA, Kepler’s work “launched the scientific revolution.”

Historical data not only helps us study the past, but it allows us to figure out the laws of our universe, which provide our only glimpses into the future. Powerful stuff, that historical data!

Left: Brahe’s star data. Right: NASA’s modern view of space.

The Digital Observatory

Brahe was always the person in this story who stood out to me. He wasn’t a scientific genius and isn’t remembered by very many people, but in my book he was every bit as impactful as the famous Kepler (more so, even; many Kepler-level thinkers had probably come and gone, but it wasn’t until one worked with Brahe that things fundamentally changed).

Brahe, armed with a custom-built observatory, was just a really practical, methodical, large-scale record-keeper — with high standards for precision and a very broad data model.

That’s pretty much what Keen is, too.

Keen is the modern observatory for the digital universe. The trillions of data points our customers are collecting will be the foundation for their future discoveries.

Cerro Tololo Inter-American Observatory in Chile. Credit: Abbott and NOAO/AURA/NSF

Observatory Design Considerations

The most important factors in building a good observatory are:

  1. the field of view
  2. the precision of the instrumentation
  3. the breadth of the data model

Beyond that, the most important factors for making those future discoveries more likely are the portability and the accessibility of the data.

Humanity was lucky that Kepler happened to work onsite with Brahe, because physical reams of paper aren’t all that portable.

Fortunately, thanks to the Internet, a modern observatory could transcend such physical barriers. Imagine a cloud API that modern Brahe-like people can use to measure their worlds. And every one of those Brahe-like people would be able to expose their data to far more than one Kepler-like person.

Logical map of ARPANET, which eventually became the Internet


These concepts of portability and accessibility are the reason we took an API-first approach to designing Keen.

In data systems generally:

Portability is high if I can get all or some of the data into other systems easily, and higher still if the form of those other systems can be arbitrary. The highest level of portability in a data system would be an API for extracting full-resolution chunks of the data, combined with an API for streaming the data out wholesale.

Accessibility of the data is higher through a Twilio-inspired REST API like Keen Compute versus, say, a giant and arcane data lake stored in SAP or Oracle or HBase or Teradata. In the latter, the data is guarded by a department full of data adepts, who require lots of meetings before maybe choosing to spend months of magician-time to give you an answer. In those sorts of systems, the API to get to that data is meetings + corporate politics + waiting around. The API into Keen data is, errr, an actual API.
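To make that contrast concrete, here is a minimal sketch of what “an actual API” looks like as an access path: assembling the URL for a simple count query against a Keen-style REST endpoint. The endpoint path and parameter names below are illustrative assumptions, not a guaranteed match for the production API.

```python
from urllib.parse import urlencode

# Illustrative Keen-style query endpoint; the path and parameter
# names are assumptions for the sake of the sketch.
BASE_URL = "https://api.keen.io/3.0/projects/PROJECT_ID/queries/count"

def build_count_query(event_collection, timeframe, api_key):
    """Build the full URL for a simple count query."""
    params = {
        "api_key": api_key,
        "event_collection": event_collection,
        "timeframe": timeframe,
    }
    return BASE_URL + "?" + urlencode(params)

url = build_count_query("pageviews", "this_7_days", "READ_KEY")
print(url)
```

An application, or a curious human with curl, can issue that GET request directly; no meetings required.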

This is nice for humans who want answers, but it’s even nicer for applications that want answers, since applications can’t have meetings or play politics. All they can really do is use APIs.

To put it another way, a well-abstracted REST API provides extremely high data accessibility to anything connected to the internet. When developers use those APIs to build data into software, we can achieve an even higher level of information accessibility. The beauty of making data-powered applications is that developers can put data into the hands of ordinary people. They don’t have to be Keplers.

Software Developers

As Marc Andreessen famously wrote, software is eating the world. What he really meant is that software is taking over the economy. Yes, leave it to a VC to accidentally conflate the economy with the whole world — but still, his point has merit.

Building of the Golden Gate Bridge in San Francisco. Source.

If he’s right, then that means software developers will have an enormous responsibility in architecting the future economy. Developers will be its carpenters, its plumbers, its masons, its general contractors, its architects. They will build its bridges, its cities, its transportation systems, and its parks. Developers will build an increasing percentage of everything, and build into everything.

And increasingly, developers will choose which tools & technologies they build it with.

The things (products, apps, businesses) they build will be increasingly intelligent, in a variety of ways. One of those ways is that these things will have a sense of history. You might call them “history-aware products” or “history-aware applications” or “history-aware businesses.”

A Sense of History

Imagine you’re a software application.

You’re a bunch of bits of code and logic, creating and consuming an intricate symphony of information.

Got it? Good.

Now, what would it mean for you to have a sense of history? To be history-aware?

In my view, it doesn’t merely mean that you have a good memory, although that’s important. A good memory means you’re aware of — and can recall — what’s happened in the past. A perfect memory would allow you to instantly recall arbitrary facts in perfect detail.

Imagine you (the application) are running on a Fitbit in the year 2020. Every Fitbit in the world is running one of your brothers or sisters. Perfect memory would mean you know the answer to questions like these: What was the 90th percentile heart-rate of audience members during that one perfect scene in that one amazing horror movie? Was it different depending on time of day? On which theater they were watching in? On how much alcohol they had consumed beforehand?

Perfect memory alone is clearly powerful. And being able to access your entire tribe’s perfect memory just by issuing an API call is a superpower. As an application, calling an API is as trivial for you as it would be for a human to speak a word of English.
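As a toy illustration of that kind of query, a nearest-rank 90th percentile over a batch of recorded heart-rate events is only a few lines of Python. The data here is fabricated, and in practice an aggregation like this would run server-side behind the API.

```python
def percentile(values, p):
    """Nearest-rank percentile: the smallest value covering p percent of the data."""
    if not values:
        raise ValueError("no data")
    ordered = sorted(values)
    # rank = ceil(n * p / 100) - 1, clamped to a valid index
    rank = max(0, -(-len(ordered) * p // 100) - 1)
    return ordered[rank]

# Fabricated heart-rate samples (beats per minute) from one movie scene
heart_rates = [62, 75, 88, 91, 95, 102, 110, 118, 121, 130]
print(percentile(heart_rates, 90))  # -> 121
```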

But to be history-aware isn’t just about knowing what happened in the past. More broadly, it means you’re aware that history exists as a concept.

The 10,000 year clock. Source: The Long Now Foundation

What would it mean for you, the software application, to be history-aware in this way?

Not only would you remember all the events that happened previously and be able to run calculations across them, but you would also record your own observations in detail (like Brahe), so that in the future, other history-aware applications (including yourself in a future timeline!) could possess that same kind of perfect memory. For its members to have perfect memory, the entire tribe has to diligently record events as they pass.

Being a smart citizen means being aware of the history that’s already occurred. Being a good citizen means being history-aware in this broader context.

If you were a software application, this is a superpower you would probably find compelling.

Looking Forward

Much of the future economy will be built by developers. With great data tools and a sense of history, developers will not only make us contextually smarter, they will lay a foundation for ongoing discovery.

At Keen, our mission is to provide a ready-made digital observatory so that they don’t have to spend years building one like Brahe did.

The future is a place where anyone can study their universe or, like NASA did when they took Keen for a one-day spin, discover a pattern that has been eluding them for over a decade.

(And I think Kepler would find that pretty rad.)

Thanks to Michelle Wetzler, Micah Wolfe, Patrick Woods, Seth Bindernagel, Ursula Ayrout, inconshreveable, Sunil Dhaliwal, Ryan Spraetz, and Elias Bizannes for editing help.

Interview with Michael Greer (CTO, TAPP TV, The Onion)

We recently interviewed Michael Greer, former CTO of The Onion and now Co-Founder and CTO of TAPP TV. We wanted to hear how he navigated the decision to build or buy analytics infrastructure.

In our CTO’s Guide to Getting Data Strategy Right white paper, we discuss the limitations of off-the-shelf analytics solutions, as well as the risks of building custom solutions with expensive internal resources. As we continue to navigate these discussions with our clients at Keen, we wanted to share some of their stories. TAPP decided to build their analytics capabilities in-house using Keen’s APIs, and they have been a Keen customer for several years.

According to Mike, his team of engineers has tested a variety of approaches including combinations of Segment, KissMetrics, and Google AdWords. “The reason we ended up increasingly relying on Keen was our ability to influence the metrics we were tracking with Keen — it turned out to be more engineer-friendly than anything else on the market,” says Greer.

TAPP uses a video content management system and a subscription system to allow their team to manage different video sites. These systems are also used for various internal dashboards and reporting on key business metrics. For example, reports embedded within the CMS help employees identify the most popular content, compare subscription rates across time or make revenue projections. “We run correlations, track whether users are more or less likely to subscribe when they look at a particular content piece, and much more,” explains Mike.
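The subscription-likelihood report Mike describes can be sketched as a simple comparison of conversion rates between users who did and did not view a given content piece. This is an illustrative computation with made-up data, not TAPP's actual pipeline:

```python
def subscription_lift(events):
    """events: dicts with boolean 'viewed' and 'subscribed' fields.

    Returns (viewer_rate, non_viewer_rate): the subscription rate for
    users who viewed the content piece vs. those who did not.
    """
    def rate(group):
        return sum(e["subscribed"] for e in group) / len(group) if group else 0.0

    viewers = [e for e in events if e["viewed"]]
    non_viewers = [e for e in events if not e["viewed"]]
    return rate(viewers), rate(non_viewers)

# Fabricated events: did the user view the piece, and did they subscribe?
sample = [
    {"viewed": True, "subscribed": True},
    {"viewed": True, "subscribed": False},
    {"viewed": False, "subscribed": False},
    {"viewed": False, "subscribed": False},
]
print(subscription_lift(sample))  # -> (0.5, 0.0)
```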

When asked how Mike would explain Keen’s API platform he says,

“Keen is the perfect 80% solution. It’s not turnkey and doesn’t give developers anything out of the box, but rather offers 80 percent of what’s needed and allows a company to build what they need, quickly.”

TAPP’s team also found Keen’s engineers and customer success team to be extremely helpful.

“I simply contact Keen’s customer service via chat. Engineers send us back example code which is extremely high quality. I’ve also reached out directly to the engineers who maintain the JavaScript library, so we could really see what was happening.”

Mike Greer found Keen’s pricing and platform easy to scale with the company’s needs. TAPP currently has over 30 people across the company consuming data in a variety of custom dashboards and reports specific to their workflows, all of which is maintained part-time by a small team of three.

Another consideration for the executive team was the investment risk inherent in choosing a technology for such a foundational, business-critical need (and in particular one that touches many parts of the business). Two factors influenced their decision here: Keen’s high data-portability reduced their lock-in risk, and the flexibility of the platform meant they weren’t married to a single prescriptive way of doing analytics.

“Keen is a platform that’s been created by builders for builders.”

Mike cited a few additional factors that made the choice to build his analytics infrastructure on Keen the most viable for TAPP:

  • Keen has great JavaScript SDKs, so it works well with their stack.
  • Emergent questions from company stakeholders are very easy to answer: “Keen is sufficiently flexible for us to always be able to offer additional capabilities.”
  • A much lighter burden for the engineering team: TAPP runs their entire analytics stack with no full-time headcount dedicated to analytics infrastructure and scalability.
  • New dashboards can be added on demand, which makes it easy to add and remove key metrics as needed.

Download our latest white paper to learn more about the “Build vs. Buy” debate. Keen IO helps companies accelerate deployment of intelligent data applications and embed intelligence throughout their business.

Announcing our new podcast: Data Science Storytime!

We’re excited to announce the debut of Data Science Storytime, a podcast all about data, science, stories, and time.

In Episode 1, Kyle Wild (Keen IO Co-founder and CEO) and I brainstorm the concept of the show, debate the difference between data science and non-data science, and recount the story of the action-hero data scientist who skipped a meeting with Kyle to rescue a little girl trapped on a mountain (or so he assumes).

Tune in for all this and plenty more as we consider the many ways data shapes our lives and activates our imagination, today and in the future.

If you like what you hear, make sure to subscribe to get a new episode every two weeks. And follow us on Twitter @dsstorytime. Thanks, and enjoy the show!

An Open Source Conversation with OpenAQ


Last month, I sat down with Christa Hasenkopf and Joe Flasher from OpenAQ, one of the first open, real-time air quality data platforms, to talk about open environmental data, community building, analytics, and open source. I hope you enjoy the interview!

Taylor: Could you both tell me a little bit about yourselves, and how y’all got interested in environmental data?

Christa: I’m an atmospheric scientist, and my doctoral work was on ‘air quality’ on Titan, a moon of Saturn. As I progressed through my career, I got more interested in air pollution here on Earth and realized I could apply the same skills I’d gained in my graduate training to do something more Earth-centric.

That took Joe, my husband, and me to Mongolia, where I was doing research in one of the most polluted places in the world: Ulaanbaatar. As a side project, Joe and I worked with colleagues at the National University of Mongolia to launch a little open air quality data project that measured air quality and then sent the data out automatically to Twitter and Facebook. It was such a simple thing, but the impact of that work felt way more significant to me than my research. It also seemed more impactful to the community we were in, and that experience led us down this path of being interested in open air quality data across the world. As we later realized, there are about 5–8 million air quality data points produced each day around the world by official or government-level entities, in disparate and sometimes temporary forms, that aren’t easily accessible in aggregate.

Joe: I trained as an astrophysicist, but I quickly moved into software development. So when Christa and I were living in Mongolia, I think we just sort of looked around, saw things that didn’t exist that we could make, and went ahead and made them. Open data always seemed like the right thing to do, especially when it’s data that affects everyone, like air quality data. Together we had the tools, my software development skills and Christa’s atmospheric science, to put things in place that could really help people.

Taylor: That’s awesome. Could you tell me more about the OpenAQ Project?

Christa: Basically, we aggregate air quality data from across the world and put it in one format in one place, so that anyone can access it. And the reason we do that is because there is still a huge access gap between all of the real-time air quality data publicly produced across the world and the many public-good sectors that could use these data. Sectors like public health research or policy applications, or an app developer who wants to make an app of global air quality data. Or even a low-cost-sensor group that wants to measure indoor air quality and also know what the outdoor air quality is like, so you know when to open your windows if you live in a place like Dhaka, Bangladesh or Beijing, China. By putting the data in this universal format, many people can do all kinds of things with them.

Joe: Yeah, I think we’re just focused on two things. One is getting all the underlying air quality data collected in one place and making it accessible, and the main way to do that is with an API that people can build upon. And then we also have some of these other tools that Christa mentioned to help groups examine the data and look at the data, but meshing that with tools built by people in the community. Because I think the chances of building the best thing right away is very small. What we’re trying to do is make the data openly available to as many people as possible. Because a lot of these solutions are based in local context in a community.

Taylor: That’s really cool. I have heard from other organizations that when you open up data, you democratize it, because it becomes available to the people.

I read the Community Impact document for the project and you had mentioned that some researchers from NASA and NSF and UNICEF are using the data from OpenAQ. I was wondering, what are some other cool applications of the data that you are seeing?

Christa: I think when we first started the project it was all about the data. It was all about collecting the data, getting as much data as we could. And as we went on, we realized, pretty quickly, it’s actually about the community we are building around it and the stuff that people are building. And so there are a few different pieces.

One thing we have seen is a journalist taking OpenAQ-aggregated data to analyze air quality data in their local communities. There is a journalist in Ulaanbaatar, Mongolia, who has published a few data-driven articles about air quality in Ulaanbaatar relative to Beijing. There are some developers who have built packages that make the data more accessible to people using different programming languages.

There is a statistician in Barcelona, Spain, who has built a package in R that makes the data very accessible in R and makes cool visualizations. This person made a visualization where she analyzed fireworks across the US on the Fourth of July. She did a time series, and you could see a map of the US, and as 9pm rolled around in the various time zones you can see air quality change across the US as the fireworks went off.

There is a developer in New Delhi, India, who has made a global air quality app and Facebook bot that compares air quality in New Delhi to other places or will send you alerts. We feel these usages point to the power of democratizing data. No one person or entity can come up with all the possible use cases themselves, but when the data are put out there on a global basis, you’re never sure where they’re going to go.

Joe: We have also been used by some commercial entities to do weather modeling, pollution forecasting. Christa, there was an education use case right… Was it Purdue?

Christa: Yeah, a professor there is using it in his classroom to bring outdoor air quality data into indoor air quality models. Students pick a place around the world and use outdoor air quality data from there to model what indoor air quality would look like, so they are not just modeling air quality in Seattle, which has pretty good air quality. They are also pulling in places like Jakarta or Dhaka to see what air quality would be like indoors, based on the outdoor parameters.

Low cost sensor groups have contacted us because they are interested in getting their air quality data shared on our platform. These groups would like their data to be accessible in universal ways so that more people can do cool stuff with it too. Right now, for our platform, we have government-level data, some research-grade data, and a future direction we are hoping to move is low-cost sensors, too.

Taylor: As you have touched on, I read that OpenAQ has community members across four continents and has aggregated 16 million data points from 24 countries. I am curious: how were you able to grow the project to have all that data coming in?

Christa: We have a couple of ways of getting the word out about OpenAQ, getting people interested in their local community, and engaging them with the OpenAQ global community. One way is in person: we visit places that are facing what our community calls “air inequality” (extremely poor air quality in a given location) and hold a workshop that convenes various people, not just scientists and software developers, but also artists, policy makers, people working in air quality monitoring within a given government, and educators. We focus on getting them all in the same room, working on ways they can use open data to advance the fight against air inequality in their area.

So far, we’ve held a workshop in Ulaanbaatar, and we have had meetups in San Francisco and DC, since that’s where we’re based. We have also done presentations in the UK, Spain, and Italy, and we are about to hold our next workshop in Delhi in November. We’re getting the word out through the workshops, the meetups, Twitter, and our Slack channel. Participation in the OpenAQ community has been growing organically, whether on the development end, pulling in more data, or in the application of the data. We tend to get more people interested in using the data once they are aggregated than in helping build ways to add in more data, which makes sense. We are always in need of more people to help build and improve the platform.

Joe: In the beginning it was very interesting how we decided to add in new sources — there are so many possible ones to add from different places. You could look at a map and see where we had been, because whenever we would go somewhere to give a presentation we would want to make sure we had local air quality data. So before we would give a presentation in the UK, we would make sure we had some UK data. Data has been added like that and according to interest for particular locations in the community.

An interesting thing that we are able to do now with the Keen analytics, is that we can look at what data people are requesting most, and even if we don’t have the data, they might still be requesting it. So we can see from the analytics where we should potentially focus on bringing in new data. So it has been a very helpful way for us to be more data-driven when looking at what data to bring in.

Taylor: When you have a project that is an open source or an open data platform, your time becomes very valuable. You want to put your resources where they are needed most.

Joe: We want to be as data-driven as possible, and it’s hard for us to talk directly to all of the people who are using the data. I think we have a similar problem to anyone who opens up data completely: we don’t require anyone to sign up for anything, so we have a lot more people using the data than we know about. We can see just from how many times the data are getting grabbed that they are popular. The analytics really help us tell something about those use cases, even if we don’t know of them specifically.

Taylor: Could you explain your use of Keen for everyone so they can understand how you are figuring that out?

Joe: The API is powered by a Node.js application that includes the Keen library. Every request that comes in goes to Keen and so we have a way to sift through it.

We don’t track any users, any sign-ups, any API keys or anything at the moment. We don’t see the addresses the requests come from; they are anonymous. But we do get tons of data that we can look through, and that has been super helpful. It took two lines of code in my API, and now all my requests come into Keen and I can handle all the queries there.

We do all the normal things you would do: total counts of requests coming in, and usage statistics for our endpoints. This is also very interesting; we were looking at it the other day. Not all our endpoints are equal: some are much heavier computationally and have taken a lot more work to create. It’s interesting to look at how much they are getting hit versus how much effort we put into making them. We can see the most popular endpoints we have, and also the ones that aren’t used as much. This helps me figure out what to prioritize and how. Our system is very database-request heavy, so knowing specifically the sort of queries that are coming in really helps us optimize the database to get the most out of it and make it as cost-efficient as possible.
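A request-tracking setup in the spirit of what Joe describes might look like the following sketch. The middleware shape, collection name, and event fields are illustrative assumptions, and a stub stands in for the real Keen write client so the event shape can be checked without a project ID or network access:

```javascript
// Express-style middleware that records one analytics event per API request.
// Collection and field names are hypothetical, not OpenAQ's actual schema.
function makeTracker(client) {
  return function track(req, res, next) {
    client.recordEvent("api_requests", {
      method: req.method,
      endpoint: req.path,                  // which endpoint was hit
      query: req.query,                    // parameters, for usage analysis
      timestamp: new Date().toISOString(), // no user or IP data recorded
    });
    next(); // analytics should never block the request itself
  };
}

// Stub client for illustration; real code would construct a Keen client
// with a project ID and write key and send events over HTTPS.
const events = [];
const stub = { recordEvent: (collection, ev) => events.push({ collection, ev }) };

const track = makeTracker(stub);
track({ method: "GET", path: "/v1/latest", query: { country: "MN" } }, {}, () => {});
```

From events like these, the endpoint-popularity and query-pattern questions Joe mentions become simple counts and group-bys over the collected events.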

Taylor: That’s interesting that you were able to gauge how much effort you put into some of those endpoints and then look at their usage. When you don’t have that data, you are just guessing. It can also help you see where there should be more education about certain endpoints.

Why was it important to y’all for this platform to be open source?

Christa: One of the major reasons we built this platform and made it open source is that we noticed a few groups were gathering this sort of data, but the data themselves weren’t open, nor was it clear how they were gathered. There were a few efforts, some commercial, some unclear whether they were commercial or public, and some researchers who do this. Everyone was doing it in a different way, or it wasn’t entirely clear how it was being done. We saw a lot of efforts duplicating what another effort was doing because their work wasn’t open. So we thought that if someone just made the data open, and the platform itself open source and transparent, so it’s clear how we’re grabbing the data, that would be a huge step forward. The other reason was that when we first started this, there were just two of us in our little basement apartment. It’s a big project, and we knew we would need help, so making it open source was an obvious route to find folks interested in helping us around the world.

Joe: I think the other piece here is that open source and free aren’t the same thing, but they are often lumped together. Beyond just open source, what we wanted was for the data to be freely available, because air pollution disproportionately affects people in developing countries. They are the ones who would generally have to pay for this data or don’t have access to them at all. So we wanted to break down that barrier and let everyone have access to the data and make tools, without that being a roadblock.

Taylor: To end things, what is the most exciting thing about the project to each of y’all?

Christa: I think for me it’s definitely interacting with people in specific communities and sharing the data in the open. I love that, it’s the best.

Joe: For me it is definitely having people build something on top of it. As a developer, that’s the best feeling. In fact the first workshop we did in Mongolia, there was a developer who, just over the weekend, built an interface, like a much better exploration interface for the data than what I had initially made. Which was great, right? So I think we used that, and pointed people to that over and over and over again, because I think it took us probably, I don’t know, six months until we finally rolled out sort of a different exploration interface for the data. And that was just made by one community member and that was awesome.

I wanted to thank Christa and Joe for taking the time to talk to me about OpenAQ. I don’t know about you, but I learned a lot! It is a wonderful project that you should definitely check out.


Keen IO has an open source software discount that is available to any open source or open data project. We’d love to hear more about your project of any size and share more details about the discount. We’d especially like to hear how you are using Keen IO or any analytics within your project. Please feel free to reach out for more info.

Six Steps to Building Successful Customer Relationships

Analytics is a complex beast. When you think about everything you want to build and track, it’s easy to get carried away. That’s why at Keen, we created an API-based platform with tools that make creating something from nothing much easier. Instead of doing it alone, let’s build together!

Last year we expanded from a self-service tool to an enterprise product serving large organizations with increasingly complex needs. In addition to advanced features, we also added hands-on help and organizational planning with the Customer Success Team.

As we’ve worked with bigger and bigger organizations, we’ve found the following framework to be really valuable.

1. Evaluate your customer’s needs

This may seem obvious but asking the most basic questions is really important. Before you take a step forward, take the time to ask meaningful questions about your customer’s business.

At root, what are you trying to accomplish? And how can you work together to meet these goals?

It’s easy to get carried away with all the things that are possible, but getting mired in details that are not mission critical takes attention away from the primary goals. Remember to stay focused on why your product is needed.

This shift in the conversation from the “nice-to-haves” to primary business goals helps identify which tasks may not be good uses of effort. With this focus, you can drive home the core value proposition of your service. In our case at Keen IO, that’s analytics.

By making sure our customers understand what we’re good at (delivering a stable platform for data ingestion, analysis, and visualization), we can continue to showcase the progress they’ve made in a short amount of time, the good work they’re continuing to accomplish, and, not coincidentally, what a good job we are doing as their platform provider.

2. Stay Organized

When we work on customer integrations, staying organized is crucial. A Gantt Chart or Project Plan is my preferred method of staying on top of my delivery timeline and critical deadlines. This shared view helps the customer stay abreast of expected timelines, and avoid any misunderstandings in communication.


Sample Project Overview

Don’t skip this important step: Communicate timelines for when you are going to do something.

Nothing is more frustrating than making a request to a team and getting no response. Respond quickly, even if just to say “I’m looking into it.” Some answers take longer than others, and some customer requests can’t be completed overnight. Providing a timeline sets a reasonable expectation, and communicating this information in a structured way is key.

3. Maintain regular communication

When your product evolves to include new features that may be helpful, remember to let your customers know.

I tend to write frank and friendly emails describing new feature sets and partnerships, and at times I also send along blog articles written by our team. It always feels good to find out about an improvement to something you’re used to, or about something completely new. Feature announcements have been a way for our customers to share excitement for what’s upcoming at Keen.

Don’t be shy about advocating for your own product’s features, and do it regularly. By recommending unused or newer features, you can help your customers figure out how to use your technology better. There may be a more efficient or effective way to do what they’re doing.

Plus, by keeping in touch and knowing what our customers are trying to do, we can share lessons from customers doing similar things. It gives us the opportunity to reassure customers, remind them that they’re not alone in thinking about a problem in a particular way, and that they’ve chosen the right approach and technical solution for their project.

I make it a point to share helpful solutions with customers even when they don’t directly involve our product. A conversation like this has the added benefit of letting your customer know you’re available to bounce ideas off of.

Even when there’s bad news, we’ve found that customers are very appreciative when they receive a message directly from us. They appreciate the personal touch, and it helps to receive a heads-up from you before they find out about a patch or planned downtime anywhere else.

An added bonus of staying in touch and updating customers on your product’s best practices is that you can protect your operational teams from suboptimal customer usage patterns that can become stressful or expensive in the long term.

4. Periodically recheck goals

To keep abreast of your customers’ day-to-day operational needs and stay relevant to their business, it helps to establish periodic check-ins. At Keen we do these on a quarterly basis.

Each quarter we run an in-depth integration analysis and spend time doing a business review with our customers. Sometimes, this turns out to be a big investment of time. So why do we do this?

For one thing, it feels good to help customers succeed. But also, our customers’ success is our success. In the long run, we’ve found that the customers we’ve helped attain their own success tend to recommend Keen IO to others.

When we help customers achieve their current business goals, we build trust with them. It makes the organizations and companies we work with more likely to continue the relationship and build more integrations on top of Keen.

5. Include the customer in the product feedback loop

We share our product roadmap with our customers and actively ask what our platform’s current limitations are. Including our customers in the Product Roadmap, and allowing their input to drive the future of Keen’s product is core to our methodology for success!

As members of the Customer Success team, we become experts on the best ways to use Keen. In the process of collecting genuine customer input and bringing our customers up to speed on what’s next, we’ve gained valuable data on how to help future customers undergoing integrations too!

At times, building a requested piece of technology to support one specific customer’s goal has led to new product features that let all of our customers realize benefits.

In these Product Roadmap Sessions with our Chief of Product, our customers have even shared amazing tools they’re proud of and have spent time building on top of our open source toolsets. These sessions have become fantastic opportunities to cross-promote and build partnerships, and deeply meaningful chances to learn from the people, and the use cases, we originally built the platform for.

6. Be yourself.

You can’t forget or unlearn how to be yourself, so bring your full self and your best traits to work. If you’re fun, quirky, funny, or clumsy (I may or may not be some of these things 😜), show that side of who you are. If you are being your true self, it shows. Your genuine feelings and empathy, whether in the moments you’re happy or the moments you’re sorry you let a customer down, convey your message in the clearest and most honest way possible.

Thinking about ways to do right by your customers? Do you have some of your own tips or style of working with users to share? Please leave a comment or start a conversation over DM, you’ll find me on Twitter as @jandwiches. 🍞

Rocking Customer Success from a Segway in Portland

There are many reasons why Customer Success is important to a healthy business — from growth to retention to referrals to product development — but I do it not for those reasons at all…

I do it because I love it!

I was in the middle of a Segway tour in Portland this past weekend and we stopped for drinks (yeah, apparently it’s legal to drink and ride a Segway there), and as soon as I found out that one of the other Segwayers was interested in adding analytics into his company, I started quizzing him about how he was looking to grow his business and talking about what he could do with Keen. I just couldn’t help myself. It’s like a puzzle he and I could work together to solve and then hop back on our Segways feeling refreshed!

The big difference between Customer Success and Customer Support

Before the Customer Success team existed, we had a team dedicated to helping customers, but it was reactive. If a customer wanted help modeling data or had a question on how to create a dashboard, we would help them. And sometimes we got to learn about what they were doing. We prided ourselves on being customer oriented, but it wasn’t really customer success. It was customer support.

Customer success is about preemptively helping a customer before they have even really asked for it. I am a people pleaser by nature, and if I can help a customer before they even know they need it, I feel great!

I went to a talk the other day about Consciousness Hacking and they talked about how there are studies to measure whether people can tell what image they are going to see before they actually see it. I am still processing the talk, but I love that idea. If I could apply that to knowing what the customer is going to have questions about, and helping them before they even ask, that would be amazing.

Fortunately for me, it’s a little easier to predict customers’ behaviors than to determine whether we can really predict which future image we will see. (By the way, Robert Krulwich from Radiolab has an interesting commentary on that subject.)


Did Alice know what was behind that curtain?

Understanding customers’ needs before they feel the pain

The customer may not always know the best way to achieve their goals. By getting an overarching understanding of how they want to grow their business, we can figure out how they can get the most value out of Keen.

These conversations help us avoid the potential pain points a customer might have with our product today and also help us understand how our product needs to grow to support them in the future. We can now align our product roadmap based directly from an understanding of how our customers would like to expand. I love this! I get to help the customer by being their advocate at Keen and I get to help Keen by making sure customers are taking advantage of new features and capabilities we add.

And, I get to hear about really cool projects that people are working on!

One customer used the metadata of where they placed different news articles and advertisements on their webpage to give their editorial staff information they could use to optimize how many articles a user was likely to read.

Another customer, Net-a-Porter, was able to use Keen to monitor web performance, which they displayed in their common room to alert them when the network went down.

Another customer built their own desktop analytics right on top of Keen and used that to give their clients information about user engagement with their own application.

And I even learned about a customer, Whitesmith, that used us to measure happiness in their workplace. How cool is that?


Customer Success is a win for everyone

I feel like Keen has really stepped up the growth phase ever since we started the Customer Success team. We have become proactive instead of only reactive. We have gotten a much better understanding of our customers’ growth plans and how to provide direct value as they scale. Most importantly, we turned on the faucet to enable a constant stream of communication between the customer and our product team. Now we are aligning our growth to the growth of our customers.

And I get to be in the middle of all that, helping customers even before they are our customers, just riding around on a Segway.

If you’d like to talk more about Customer Success, building analytics with Keen, or Segway safety tips, I’d love to chat! Feel free to drop me a line.

Hello, Community.

Hi! My name is Tim Falls, and I’m the author of this blog post. I’m writing this to introduce myself to you, the reader of this blog post ;) But, not only do I want to introduce me; I want to introduce we.

We are the community team at Keen IO. We are Justin (aka, JJ, elof), Sarah Jane (aka, SJ), Taylor (aka, ATX’s best), and Tim (aka, a guy who just referred to himself in the third person.)

But enough about us; this is actually about you! “You” are our readers and followers, our customers and partners, our investors and advisors and mentors and inspirations, our family and friends, our people — our community.

Our team exists because our community exists; we’re extremely grateful for that fortunate reality.

Community Roots

Justin was the first to be hired at Keen IO with an explicit focus on community-building, joining as the seventh employee and with the title of Developer Evangelist (or was it dev advocate? oh well, tomayto/tomahto.) When he arrived, community was already baked into Keen’s core.

Our founders and earliest employees depended on support from the people around them each time they collectively cleared a hurdle or broke through the ribbons of the company’s initial milestones. Being the thoughtfully reflective bunch they are, they recognized that the symbiotic relationships they’d forged with the humans around them in fact represented the Keen community in its infancy, and they realized that Keen’s success was virtually impossible without the contributions of that community.

Today, each and every employee at Keen — whether they spend most of their time writing code or writing website copy or writing the paychecks — can recognize and appreciate the powerful, positive impact that our community has on the company, and vice versa. That’s because everyone has had the pleasure of engaging with our community members and experiencing first hand the magical good vibes that abound in the presence of you all. And in so doing, each individual has detected, with her very own senses, that our community is not just a bunch of people outside our company that like us and/or use our product; it’s not only the smaller number of people who build/support/sell that product; our community is all of those people (and many more) together as one living, breathing, thriving organism.

And we won’t stop

The four of us on the Keen community team have the humbling responsibility of ensuring that our recognition and appreciation for the value of community never dwindle. We get to focus our daily efforts toward supporting and enabling the community around our people, product, and brand. We dedicate our time and creative bandwidth to crafting the most inclusive, valuable, and uplifting place possible for the humans who choose to join in and help make our community what it is today and what it will be tomorrow.

Obviously, we constantly think about this responsibility and continually ask ourselves:

“What can we do to make our community a happy place?”

Sometimes a “zoom out” is helpful in effectively answering this question. In light of my one year anniversary of employment at Keen (Sept 8), I recently un-zoomed to the max. My findings were strikingly obvious…and equally important:

We can forge stronger relationships, by getting to know our fellow community members even better and by making a concerted effort to help you get acquainted with us — as people and professionals.

The first step toward doing just that is, well, this!

We want you to feel more than welcome in this community, and one of the best ways to feel at home is to be in the presence of friendly faces.

Of course, as I alluded to earlier, you can consider everyone at Keen your go-to person for whatever your needs may be, because we all community. But, it’s good to know who is consciously focusing on you at all times, and that’s this little group of people =)

If you haven’t already given one of us a high-five, then we should fix that ASAP. In fact, an internet high-five is just a few clicks away, so we’ll wait here while you do that… If our relationship has moved beyond high-five-land, cruised through hug-ville, and is already at the serious level of direct messages in Slack, then thanks for being here — we ❤ you, too!

Regardless of our current relationship status, we want to take this opportunity to share more of our “personal” details, in hopes that it will help you better understand us as a group of people, why we fervently strive for your happiness, and what you can expect from us.

Under the hood

At Keen, our organizational and operational structure is defined by a document that has been created through the collective efforts of all employees. We call this document the Keen Operating System, and it exists as a Github repo (it’s private now, but we’re considering opening it up to the world if/when it’d be valuable.) Anyone at Keen can contribute to this document (via pull request) at their discretion, and thus everyone is empowered to have meaningful influence on the operational model in which they work.

Reason for being

One section of the document lists and describes the various teams at Keen. Each team is responsible for creating their own page so that everyone in the company understands their mission, strategy, roles, responsibilities, etc. Community is one of those teams, and we’ve copied our team page into a public repo, so you can see for yourself how we represent ourselves to our fellow Keenies. We hope that, in sharing this, we facilitate your deeper understanding of how other teams at Keen see the Community team and how we think about working with you.

Measuring success

Like all business units, we place an importance on measuring our team’s performance. Indeed, community-building can be a tricky thing to gauge — but, even more certainly, it’s not impossible. A lot of our performance tracking comes in the form of intuition — i.e., “ya just know it’s working.” This is a really human thing, and we’re all humans interacting with each other, and those interactions are filled with signals that only our brains and spirits can interpret. So, thanks for sending the vibes — we’re picking up what you’re putting down!

But, also with all business units, “the feels” only go so far in determining whether or not an investment of resources is returning tangible, meaningful benefits to the company.

So, how do we start to quantify the impact of our efforts?

Well, quite frankly we’re still figuring that out. And guess what — we’d love your help! We’ve started to develop our practices in the open, and we’re inviting anyone/everyone to collaborate with us. If you’re a community member, or if you’re building a community of your own, or if you’re just an interested person with a perspective to share, please join the conversation. Beyond serving as a mechanism for receiving your input, we extend this invitation in hopes that it helps to further familiarize community members with our team’s activities, values, and approach to our work.

See you out there!

I hope this rather long-winded introduction to “we” is helpful for you, or at least vaguely entertaining. It’s been fun for me, and I’m sooooo glad to finally get a post onto this here weblog, which I’ve admired since way before I joined the Keen family.

If you fancy the chance to reciprocate the sentiment, please get in touch with us through any number of communication channels (see ALL THE THINGS listed on our community page), or just leave a comment below. We’d love to learn about who you are, what you’re working on, why you do what you do, and how we can help you achieve your loftiest goals.

Ta Ta For Now!

Introducing the Keen IO Community Code of Conduct

A few weeks ago we sent an email to the whole company introducing the Keen Community Code of Conduct. This blog post includes most of that email with a few more things added that we wanted to share with our community.

A few months ago, work began on the Keen IO Community Code of Conduct. We’re very excited to announce that v1.0 of the Keen IO Community Code of Conduct is now public. 🎉

This Code of Conduct applies to all Keen IO Community spaces, such as the Community Slack group, open source projects, Keen IO meetups, Happy Data Hours, and more! It will be added over the next few weeks to different projects and other community spaces.

It is the product of many meaningful conversations and advice from many Keenies and other humans outside of Keen IO. To anyone who contributed to this Code of Conduct: thank you. The process of creating a document like this isn’t easy, and we have so much respect for anyone who has done it before.

The Code of Conduct is a living document; this is only v1.0, and it will grow and change with Keen IO and its community. That is why it is on GitHub, where issues can be created to help with revisions and updates. There is also a feedback form, which can be filled out anonymously. Feedback is always appreciated, and it will also help guide training and other internal procedures for the Community Code of Conduct.

Lastly, we’re looking forward to making it even clearer to our community that we are dedicated to providing a safe, inclusive, welcoming, and harassment-free space and experience for all community participants, which will help grow our community in amazing ways. We hope this Code of Conduct clearly states what behavior is expected and not tolerated as well as establishes a path for community members to report possible incidents and seek help.

Please feel free to ask me any questions! I would be more than happy to have a larger conversation about it and its existence. 😀

hack.guides() Tutorial Contest!

Are you obsessed with building dashboards? We are excited to sponsor a $500 prize for the best guide on using Keen IO to power dashboards in your apps.

Over the next six weeks, you can submit tutorials and collaborate with the hack.guides() developer community on best practices, hacks, and tricks for using Keen, RethinkDB, and other partners in production.

Submit your post here, then share it with us on Twitter to spread the word.

4 DataEngConf Talks We're Most Excited About

DataEngConf SF is around the corner and we can’t wait! The Data Engineering and Data Science communities have really been taking off over the last few years as companies look to build self-serve data tools and extract real-time insights from the massive amount of data at their fingertips.

Here are 4 of the talks we’re really excited about:

  • Bridging the Gap Between Data Engineering and Data Science — Josh Wills, Director of Data Engineering, Slack

We’re excited to hear Josh talk about these important and interdependent functions. There is still a great deal of misunderstanding about the boundaries between the roles and the different constraints that each is operating under.

  • Beginning with Ourselves: Using Data Science to Improve Diversity at Airbnb — Elena Grewal, Data Science Manager, Airbnb

Airbnb used data to change the composition of their team from 15% women to 30%, all while maintaining high employee satisfaction scores across the team. Diversity and inclusivity are important to us at Keen, and we’re thrilled to see a company like Airbnb leading the charge in using data for good.

  • Running Thousands of Ride Simulations at Scale — Saurabh Bajaj, Tech Lead, Data Platform, Lyft

How does Lyft power features like Lyft Line and driver dispatch so effortlessly? Luckily, Lyft has tons of data they can rely on to run simulations at scale to ensure the rider has a seamless experience every time.

  • Unifying Real-Time and Historical Analytics at Scale Using the Lambda Architecture — Peter Nachbaur, Platform Architect, Keen IO

We’re excited that Peter will be talking about how we’ve scaled our analytics platform at Keen to process trillions of events per day for thousands of customers. He’ll share how we’ve evolved our custom query engine to unify real-time and historical analytics at scale using Cassandra, Apache Storm, and the Lambda Architecture.

You can check out all of the talks here.

If you want to hang out at DataEngConf with us, you can register for 20% off with the code “KEEN20X”. Hope to see you there!