Tracking GitHub Data with Keen IO

Today we’re announcing a new webhook-based integration with one of our favorite companies, GitHub!

We believe an important aspect of creating healthy, sustainable projects is having good visibility into how well the people behind them are collaborating. At Keen IO, we’re pretty good at capturing JSON data from webhooks and making it useful, which is exactly what we’ve done with GitHub’s event stream. By allowing you to track and analyze GitHub data, we’ve made it easy for open source maintainers, community managers, and developers to view and discover more information to quantify the success of their projects.

This integration records everything from pushes, pull requests, and comments, to administrative events like project creation, team member additions, and wiki updates.

Once the integration is set up, you can use Keen IO’s visualization tools, like the Explorer, Dashboards, and the Compute API, to dig into granular workflow metrics, like:

  • Total number of first-time vs. repeat contributors over time
  • Average comments per issue or commits per pull request, segmented by repo
  • Pull request additions or deletions across all repositories, segmented by contributor
  • Total number of pull requests that are actually merged into a given branch
Example charts: the number of comments per day on Keen IO’s JavaScript library repos; the number of pull requests per day merged in Keen IO’s repos (“false” represents not merged); and the percentage of different author associations on pull request reviews.
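For a sense of what one of these queries looks like in code, here’s a minimal sketch using the keen-analysis JavaScript client. The collection name and property path are assumptions about how the webhook events land in your project, so adjust them to match your own data:

    // Sketch only: 'pull_request' and 'pull_request.merged' are assumed names.
    const KeenAnalysis = require('keen-analysis');

    const client = new KeenAnalysis({
      projectId: 'PROJECT_ID', // your Keen IO project ID
      readKey: 'READ_KEY'      // a read key for that project
    });

    // Daily count of pull request events over the last 30 days,
    // broken down by whether the pull request was merged.
    client
      .query('count', {
        event_collection: 'pull_request',
        group_by: ['pull_request.merged'],
        timeframe: 'this_30_days',
        interval: 'daily'
      })
      .then(res => console.log(res.result))
      .catch(err => console.error(err));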

Ready to try it out?

Assigning webhooks for each of these event types can be a tedious process, so we created a simple script to handle this setup work for you.
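To give a flavor of what that setup work involves, here’s a rough sketch (not Keen’s actual script, which assigns hooks per event type) that registers a repository webhook pointing at a Keen IO event collection via GitHub’s REST API. The token scope and the 'github_events' collection name are illustrative assumptions:

    // Rough sketch only. Requires a GitHub token with permission to manage
    // repo hooks; 'github_events' is a placeholder collection name.
    const fetch = require('node-fetch');

    async function addKeenWebhook({ owner, repo, githubToken, keenProjectId, keenWriteKey }) {
      const keenUrl = `https://api.keen.io/3.0/projects/${keenProjectId}` +
                      `/events/github_events?api_key=${keenWriteKey}`;

      const res = await fetch(`https://api.github.com/repos/${owner}/${repo}/hooks`, {
        method: 'POST',
        headers: {
          Authorization: `token ${githubToken}`,
          Accept: 'application/vnd.github.v3+json'
        },
        body: JSON.stringify({
          name: 'web',
          active: true,
          events: ['*'], // every event type: pushes, pull requests, comments, etc.
          config: { url: keenUrl, content_type: 'json' }
        })
      });
      return res.json();
    }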

Check out the setup instructions here. With four steps, you will be set up and ready to rock in no time.

What metrics are you excited to discover?

We’d love to hear from you! What metrics and charts would you like to see in a dashboard? What are challenges you have had with working with GitHub data? We’ve talked to a lot of open source maintainers, but we want to hear more from you. Feel free to respond to this blog post or send an email to opensource@keen.io. Also, if you build anything with your GitHub data, we’d love to see it! ❤


Announcing Hacktoberfest 2017 with Keen IO

It’s October, which you probably already know! 👻 But more importantly, that means it is time for Hacktoberfest! Keen IO is happy to announce we will be joining Hacktoberfest this year.

What is Hacktoberfest?

Digital Ocean, together with GitHub, launched Hacktoberfest in 2014 to encourage contributions to open source projects. If you open four pull requests on any public GitHub repo, you get a free limited edition shirt from Digital Ocean. You can find issues in hundreds of different projects on GitHub using the hacktoberfest label. Last year, 29,616 registered participants opened at least four pull requests to complete Hacktoberfest successfully, which is amazing. 👏

Hacktoberfest with Keen IO

If you have ever seen our Twitter feed, you know at Keen IO we love sending our community t-shirts. So, we have something to sweeten the deal this year. If you open and get at least one pull request merged on any Keen IO repo, we will send you a free Keen IO shirt and sticker too.

You might wonder… What kind of issues are open on Keen IO GitHub repos? Most of them are on our SDK repos for JavaScript, iOS/Swift, Java/Android, Ruby, PHP, and .NET. Since we value documentation as a form of open source contribution, a good chunk of them are related to documentation updates. We labeled issues that have a well-defined scope and are self-contained with “hacktoberfest”. You can search through them here.

Some examples are…

If you have an issue in mind that doesn’t already exist, feel free to open an issue on a Keen IO repository and we can discuss if it is an issue that is a good fit for Hacktoberfest.

Now, how do you get your swag from Keen IO?

First, submit a pull request for any of the issues labeled “hacktoberfest”. It isn’t required, but it is helpful to comment on the issue you are working on to say you want to complete it. This prevents other people from doing duplicate work.

If you are new to contributing to open source, this guide from GitHub is super helpful. We are always willing to walk you through it too. You can reach out in issues and pull requests, email us at opensource@keen.io, or join our Community Slack at keen.chat.

Then, once you have submitted a pull request, gone through the review process, and gotten your PR merged, we will ask you to fill out a form for your shirt.

Also, don’t forget to register at hacktoberfest.digitalocean.com for your limited edition Hacktoberfest shirt from Digital Ocean if you complete four pull requests on any public GitHub repository. They have more details on the month-long event there.

These candy corns are really excited about Hacktoberfest

Thank you! 💖

We really appreciate your interest in contributing to open source projects at Keen IO. Currently, we are working to make it easier to contribute to any of the Keen IO SDKs and are happy to see any interest in the projects. There’s an open issue for everyone, whether you want to practice writing documentation or improve the experience of using the SDKs. Every contribution makes a difference and matters to us. At the same time, we are happy to help others try contributing to open source software. Can’t wait to see what you create!

See you on GitHub! 👋

 


P.S. Keen IO has an open source software discount that is available to any open source or open data project. We’d love to hear more about your project of any size and share more details about the discount. We’d especially like to hear about how you are using Keen IO or any analytics within your project. Please feel free to reach out to opensource@keen.io for more info.


Creative Code and APIs at Twilio's SIGNAL

Last week I attended SIGNAL, the developer conference by Twilio, with the Keen IO team. I’m happy to say that Twilio has figured out the art of conferences.

Developer conferences are a weird thing. They are a mystical form of art consisting of education, social interaction, and celebration. Some are amazing, others are just good, and some gain whispers across sponsors for how bad they are yet somehow still manage to happen every year.

I was impressed by how Twilio created a conference for everyone, and as a developer I felt right at home.

Why was SIGNAL awesome?

I could talk about a lot of different things: the live coding, great conversations, hackable badges, generally awesome talks around communication and code that were relevant to anyone, and much more. But I want to focus on two ideas that I saw a lot of at the conference:

  • Code is creative
  • APIs’ role in this rapidly changing world

These were ideas that Jeff Lawson, CEO and co-founder, brought up very early on:

https://twitter.com/twilio/status/867430443930578945

As Lawson said, the Hollywood narrative of a “hacker” is broken. It ignores that there is more than just math to coding. Coding is an art. And it wasn’t just talk, either; I saw it continually in the talks at the conference.

Code is creative

It was really hard to pick just a few talks that I thought highlighted how code is creative, but here’s a few of them:

Rachel Simone Weil’s talk “Hertzian Tales: Adventures in ‘Useless’ Hacking”

Rachel really digs into a question that many of us have when we hack and build projects that will never be a “business.” While some of the hacking could be considered “useless,” she touches on the real benefits of creative projects from a critical design lens.

Jenn Schiffer’s talk “What If Twilio Could…: A Tale of Glitch, Twilio, and the Power of Friendship”

If you haven’t gotten to listen to a talk from Jenn, you should! In this talk, Jenn and a friend came up with a bunch of creative and random ideas based on the prompt “What if Twilio Could…” They were able to be more creative by relying on APIs and not worrying about whether it was technically possible. Jenn digs into some critical questions, such as: how do we keep the “a-ha moment” going when we have these creative ideas? And how do we work with people while building and learning new things?

Lauren Leto’s talk “At-Home Batphone: The Future of Phone Numbers and Noble MVNOs”

Can you imagine a world where you never need to check your phone and the messages that need to get to you do? Lauren is building for that future with APIs, like Twilio. I love this one quote from the talk:

“When anyone can do it, there’s more chance for creativity.”

https://www.youtube.com/watch?v=-FPczYPXlRw
https://www.youtube.com/watch?v=llIiw3scRak
Andrew Reitano’s talk “NESpectre: The Massively Multi-Haunted NES” and Dan Gorelick’s talk “Crowdsourcing Music via WebSockets: Using Scalable Technologies to Enable Musical Expression”

These last two talks really highlighted interdisciplinary creativity. While I strongly believe all code is interdisciplinary, these two took it to another extreme. Also, I played games and music with 60+ other people in the room while watching these talks, which was freaking awesome!

One other thing about these two talks is that they had nothing to do with Twilio, which I thought was great. Twilio put developers first by choosing to educate attendees on creative and interesting uses of technology over their own API.

How do APIs come into play with this creativity?

As some of the speakers touched on, APIs open up opportunities. When you get to focus on being creative instead of on whether something is technically challenging or possible, what you make with the code is more creative.

Lawson asked questions in the keynote like: how many business problems could we be solving if we had the right APIs to solve them? And why isn’t that an API?

 

When dealing with inflexible legacy systems, we don’t always get to solve the most creative problems. APIs allow us to apply our creative energy on a whole new set of problems waiting to be solved.

Lawson also asked, “How big can this economy get?” This really turns into a question for developers. When we are creative, what are the limits with building with APIs? At Keen IO, we are still pushing those limits today while we explore the possibilities that are part of an Unstoppable API Era.

It is common to say that software is eating the world, but in many ways APIs are eating the world.

Our own API stories

Many of us have our own API stories. Suz Hinton mentioned in her talk about immersive experiences with Chat Bots + IRL Bots that many of us, including herself, have a “Twilio Story.” This idea came up constantly at the conference.

For example, my own “Twilio Story” is that the Twilio API Docs helped me get interested in web development. Previously, I had been writing Java and C++ programs that were completely disconnected from the Internet. The Twilio API Docs helped me set up my first web server, a Python Flask server, in order to send a text message for a project I was working on. This was a life-changing experience for me.

Paul Fenwick’s first Hello World experience with Twilio turned into building the National Rick Astley Hotline, and then he gave an awesome talk about it at SIGNAL! Basically, he knew nothing about the technology, but because the APIs and technology existed, he was able to focus on the creative use case first.

A conference for everyone

Finally, the last part of a conference is usually the “celebration.” Twilio calls its conference after-party $bash. (Feel free to insert your own bash jokes here.) I’d say that Twilio sprinkled celebration into a lot of parts of the conference, but this is where it was at its greatest.

Photo booths + Face paint artists = 💖

I was definitely unsure about this celebration, though. When someone tells me that there are going to be “coding challenges” and “puzzles” in a dark warehouse on a pier in San Francisco, with lasers, loud music, and alcohol, you can’t blame me for being slightly pessimistic about how “fun” it will be.

I quickly realized that there was really something for everyone at $bash. If “coding challenges” weren’t your thing, there were half a dozen other things you could do instead. That’s why $bash was special to me. As an introvert who really likes conferences yet is exhausted at the end of them, after-parties aren’t always my idea of “fun.” $bash even got two of our co-founders to stick around and try out the coding challenges.

As Kyle Wild, our CEO, said:

“Signal was like a case study in how to make a conference for introverts. I ❤❤❤❤ it and want to go back every year.”

Congrats on finding the right mixture of mystical conference art, Twilio! See you next year.


P.S. At SIGNAL, we also announced our partnership with Twilio to provide Contact Center Analytics. Check out our blog post about analytics with the TaskRouter API:

Twilio’s Al Cook used multiple APIs, including the TaskRouter API and Keen IO, to build Contact Center Analytics. See the talk here.

P.P.S. I highly recommend going to check out some of the talks I shared. If you loved those, here are a few more favorites:

From left to right: The Democratization of State: How exposing real-time state can improve your business, Lucky: Examining the Barriers to Contributing to Open Source
From left to right: Coding for a Cause: SMS for Voter Registration, and Build Twilio Apps That Scale to the Moon

If you think “Code is Creative” or the talks that were included are awesome, consider clicking the ❤ below!


Keen Instant Analytics Dashboards for Web

The Problem: Time to First Hello World

For developers, the faster the “Time to First Hello World,” the better the experience. Whether you are a developer just learning how to code or have 20+ years of experience, the first “Hello World” with a new API, framework, or language is an amazing feeling.

When using custom analytics tools, it can take some time to set up analytics across your entire website. We wanted to help solve this problem.

The Solution: Auto-Collector Tracking & Dashboard

The Auto-Collector is a drop-in JavaScript tracking library that allows you to automatically begin collecting key site interactions like pageviews, clicks, and form submissions.

You can instantly get usage analytics like:

  • How many people in Mexico visited your site last week compared to this week?
  • What is your most popular page this month?
  • How many visitors converted by filling out your contact form?
  • Where are your pageviews being referred from? (Google, Twitter, internal links, another website, etc.)

Today, I’m excited to share an instant analytics dashboard for the Auto-Collector. It takes advantage of the existing event data models that the Auto-Collector uses.

When using the Auto-Collector Dashboard, you don’t have to think about what your data model is. It’s a great starting point to hit the ground running quickly, and of course you can always customize and enrich your data and your data views later.

How you can use it: Up and Running in 60 Seconds

  1. First, create a free Keen IO account (unless you already have one)
  2. If you don’t already have the Auto-Collector installed, drop this snippet (with your PROJECT_ID and WRITE_KEY) into your website’s <head> and start seeing web events flow in within seconds. (A rough sketch of what that snippet looks like follows this list.)
  3. Next, if you go to this link, you can “remix” the dashboard to create your own version. All you need to do is insert your PROJECT_ID and READ_KEY from your existing Keen IO project that has data streaming to it from the Auto-Collector.
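Here is roughly what that step-2 snippet looks like, sketched with the keen-tracking library’s auto-tracking helper. The script URL and method names are based on the public keen-tracking package, so double-check them against the current Auto-Collector docs:

    <!-- Sketch of step 2: load keen-tracking, then enable auto-collection.
         The exact script URL may differ; see the Auto-Collector setup docs. -->
    <script src="https://cdn.jsdelivr.net/npm/keen-tracking/dist/keen-tracking.min.js"></script>
    <script>
      var client = new KeenTracking({
        projectId: 'PROJECT_ID',  // replace with your project ID
        writeKey: 'WRITE_KEY'     // replace with your write key
      });
      // Automatically records pageviews, clicks, and form submissions.
      client.initAutoTracking();
    </script>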

By the way, I used Glitch, a free collaborative code editor and hosting system, because it’s amazing and you can have a working dashboard in seconds. It’s also helpful for running experiments and trying out new technologies. The team at Fog Creek Software, who built Glitch, are super helpful and awesome humans.

This is just one tab of the dashboard. There are two more!

You can see an example dashboard right now running on Glitch because I have included an example Project ID and Read Key in the project.

You can also check out the code on GitHub here. If you like it, please give it a star! 🌟 I’ve also included instructions there to help you run the dashboard locally if Glitch isn’t your thing.

Conclusion

This dashboard is just a start with Keen IO. There are tons of awesome things that can be done with this data and the Keen IO Dataviz Library. If you want to track custom events beyond pageviews, clicks, and form submissions, you can easily access our core Tracking Library, which this SDK uses under the hood, to send custom events to Keen.
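For example, sending a custom event with the same client created in the snippet above looks roughly like this (the “signups” collection and its properties are made up for illustration):

    // Illustrative only: 'signups' and its properties are hypothetical.
    client.recordEvent('signups', {
      plan: 'free',
      referrer: document.referrer,
      user: { id: 'abc123' }
    });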

The data collected by the Auto-Collector nicely complements other user behavior data, like signups, logins, purchases, powerups, upgrades, errors, swipes, favorites, impressions, etc.

We’d love your suggestions, feedback, bug reports, stars, and other contributions in the GitHub repo. We’ve been known to send out some fun swag for contributions of all sizes. *wink wink*

I’d love to hear what you think about it and what we can build next to help you try out Keen IO in response below. Happy hacking!


Tracking Gmail Reply Data with Keen IO and IFTTT

Ever want to track the number of replies to an email in your Gmail inbox? Last week, I wanted to track email replies to the “Invitation to Pair” email Keen IO users get from my teammates. If you’re a Keen IO user, you might have seen an email come from Joe, Maggie, or me offering to pair program with you. The emails are sent with Hubspot. Although we know how many emails are sent and how many links are clicked, we aren’t able to track the number of replies. It’s important to know this data since we each reply to many of these emails.

We’re an analytics company, we should be able to do this!

💡💡💡

Then we got an idea!

Let’s try to use IFTTT. IFTTT (If This Then That) is a free web-based service that allows you to create chains of different services, called Applets. An Applet is triggered by changes that occur within web services, like Gmail. Basically, if this happens then that happens.

You can create Applets that send data to Keen IO with the Maker service as the Action. With the Maker service, “you can create Applets that work with any devices or apps that can make or receive a web request (aka webhooks).”

Luckily, we recently wrote a guide to help you send data to Keen IO with webhooks.

Photo by the amazing #WocInTechChat

The Setup

For this example, we are going to show you how to send data from Gmail to Keen IO, but you can really send data from any IFTTT Triggers.

If you don’t already have a Keen IO account, sign up for a free one here. You will need this in a few minutes.

If you don’t already have an IFTTT account, sign up for a free one here. After you have logged into your IFTTT account, go to https://ifttt.com/my_applets and click “New Applet” to create a new applet.

Next, you need to select “this” so we can select a service for our trigger. For this example, we are going to use Gmail for the service.

After you have selected “Gmail,” then we are going to use the “New email in inbox from search” Trigger so we can see when a specific email is replied to.

For our example, what we are searching for is going to be pretty simple:

Gmail’s search operators are super helpful for creating a search query. Our example’s query is pretty simple, but you can make yours more complex.

After we select “Create trigger,” we select “that” next.

This is where the Maker service comes into play. Just as we searched for “Gmail” earlier, we now search for “Maker.” Once the Maker service is selected, we choose the “Make a web request” action.

Now it is time to add your webhook that will stream data to Keen IO. The URL format will look like this:

https://api.keen.io/3.0/projects/PROJECT_ID/events/EVENT_COLLECTION?api_key=WRITE_KEY

You will need to replace PROJECT_ID, EVENT_COLLECTION, and WRITE_KEY with your own values from the Keen IO project you want to use.

You can find PROJECT_ID and WRITE_KEY in the “Access” section in your project’s “Settings” tab. Your EVENT_COLLECTION can be whatever name you would like it to be. In this example, a good event collection name would be “reply.”

Your Method will be POST since you are posting data to Keen.

Now that we have set up where to send the data, we can set what data will be sent.

You will want to set the Content Type to application/json, then craft the body, aka the data that will be sent to Keen.

You can use the +Ingredient button to see everything you can send from the Gmail trigger. We will be using the FromAddress that Gmail + IFTTT provides to us.

Remember: You need to send valid JSON to Keen or the event will fail and no data will be sent. You can double-check that your JSON is valid by copying and pasting it into JSONLint.

For the purpose of the example: We are manually including the email address for the Gmail account the Applet is connected to.
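Here is a sketch of what the Body field might contain. {{FromAddress}} is the Gmail ingredient mentioned above, and the “to” value is the address we are typing in manually; the property names are simply what we chose for this example:

    {
      "from": "{{FromAddress}}",
      "to": "your_email@gmail.com"
    }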

After the Body is set, time to “Create action!”

Then you can review your applet and select, “Finish.”

Testing

As we mentioned before, making sure the JSON is valid is really important. If it is not, the request to Keen will fail.

I would recommend testing the Applet by sending an email with the exact subject line the Applet will be searching for to your_email@gmail.com, then going to your Keen IO project to see if the data was received.

Visualization

Here’s an example graph from this Applet:

This is tracking replies to the Invitation to Pair email. It is a 'count' with group_by: [ "to" ], timeframe: 'this_14_days', interval: 'daily'.
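In code, that query looks roughly like this with the keen-analysis JavaScript client (the project credentials are placeholders):

    const KeenAnalysis = require('keen-analysis');

    const client = new KeenAnalysis({
      projectId: 'PROJECT_ID',
      readKey: 'READ_KEY'
    });

    // Daily reply counts for the last 14 days, broken down by the "to" address.
    client
      .query('count', {
        event_collection: 'reply',
        group_by: ['to'],
        timeframe: 'this_14_days',
        interval: 'daily'
      })
      .then(res => console.log(res.result))
      .catch(err => console.error(err));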

Now we have data on these emails that we never had before! For example: February 20th was a really popular day for replies for all three of us. Awesome, right?

You can send data from any trigger that IFTTT has. With hundreds of different triggers, the possibilities are pretty great. If you think of a helpful one, let us know! We’d love to share it.

Some ideas we had were to track data about Android SMS or Phone Calls, home automation data with devices like Nest Cam or WeMo Motion, or productivity data with Todoist or Toodledo.

Share your Applets sending data to Keen IO with us on Twitter.

Next week, I am going to try this out with Zapier too and write about it. Stay tuned!


What We Learned When We Surveyed Our Developer Community

In December we sent out a 40 question survey to learn more about the developers we build products for. We’re pretty proud to have over 50,000 people on our platform, but there are an estimated 18M developers out there, so there are plenty more to reach. We believed data on our existing base could help us focus our efforts this year.

Diversity and inclusion are also things we think a lot about, and we wanted to get a benchmark for some of our community demographics.

Methodology

Developer Community expert Sarah-Jane Morris was the mastermind behind our survey. She canvassed the company to learn what we wanted to know, put together a great set of questions that went through several review cycles, then shipped it out using Typeform. We distributed the survey request through our usual community channels: an email to our base of 50k signups, tweets to our 30k Twitter friends, and multiple messages to our public Community Slack. Without offering any incentive, we got over 400 responses, which we then filtered to exclude employees and trolls.

It’s probably worth noting the inherent bias in the survey: It’s filled out by people who had the time, energy, and inclination to fill out a Keen IO Developer Community survey (thank you!).

Results

Below you’ll find highlights of our findings. Some matched our instincts, but others surprised us! We share this to be helpful to other developer communities, and hope that some of this might apply to other developer-focused products as well. 😄

We’ve put the results in 5 categories:

  • Demographics
  • Tools and Frameworks
  • Open Source Contributions
  • How Welcome Do You Feel in Our Community?
  • Community Events

Demographics


We weren’t shocked to learn that the largest demographic in our developer community is white males. One of the main motivations for running the survey was a desire to make our community more diverse and inclusive. We needed a way to benchmark where we are right now. Project Include has lots of great recommendations to help track these metrics.

We have noticed our local community at Keen events in San Francisco is more diverse than the larger worldwide Keen community. We will be working on ways to include more people online as well.


The age demographics of our community were a tad surprising. After all, aren’t there far more developers in their 20s than there are developers in their 30s? One idea that stands out is that community begets like community. The founding team and most of the early employees at Keen IO are in their thirties, and our community seems to have naturally spread predominantly to developers with similar demographics.


Wow! We found this statistic very interesting. It made us wonder, “How could we communicate better?”, especially as an API company that relies heavily on documentation. We don’t only mean communicating in someone’s first language, but also, “How can we communicate better in English?” Sometimes phrases in another language can be nuanced and cause confusion.

Tools and Frameworks


As you can see, Node.js beat all the other frameworks by far with AngularJS coming in second. Depending on the type of developer community, this can be really helpful for a product team to know. Is it easy to start using our API with Node.js and other popular frameworks? Is the “time to first hello world” under 5 minutes? Do we have documentation to help support users using popular frameworks?


We confirmed our suspicion that Stripe, Twilio, and SendGrid have wide adoption in our developer community. This helps us reinforce our investments in integrations, tutorials, and collaborations with these companies.

Surprisingly, lots of respondents said they were using no developer tools in production. Perhaps this question was confusing? Do most developers consider APIs to be “developer tools”? This is something to think about for future surveys.


We don’t blame you if your first thought when looking at this image is, “A word cloud in Comic Sans, really?!” Even though word clouds are silly, this one was interesting and is useful to anyone at an API or developer-focused company. Developers want documentation to have more examples!

We have taken this feedback to heart and have been adding many more examples to our docs over the last couple of months, like these visualization examples with JSFiddles, datetime enrichment examples, and video tracking code examples.

Open Source


It’s really awesome that over half of the Keen IO community contributes to an open source project. We made sure to ask this question to not only developers, but everyone who took the survey. It’s important to remember that anyone can be an open source contributor, not just developers. A copy edit, documentation, a logo, bug reports, community management, project management, mockups, and marketing material are all forms of contribution. It’s also important to note that open source contribution is not for everyone. It takes time and other privileges to be able to do so. I gave a talk about it at OSCON London in October.


In most open source community research, the percentage of female contributors ranges from 1.5% to 10%. We were shocked to see that, of our respondents who identify as female, 31.15% contribute to an open source project. It would be interesting to see what this figure is for other communities.

One suggestion for increasing this percentage in other communities is to donate to organizations like Outreachy and Rails Girls Summer of Code, which are working to increase participation from underrepresented groups in open source.

It will be interesting to see the results of The Open Source Survey, which is being designed by GitHub in partnership with the Open Source Initiative and researchers at Carnegie Mellon University. Once it is completed, it can help give us more insight into open source communities.

How Welcome Do You Feel in Our Community?


We found this number to be very low. About 8 months ago, we announced the Keen IO Community Code of Conduct. After its release, we promoted it heavily to bring awareness to it. We still do, but as we learned from the survey, we could do more.

Currently, we:

  • Have it as a requirement for any new Keen IO open source project
  • Mention it in event invitations for office events and during events
  • Have it as a handy Slack command in our Keen Community Slack

Some ideas to promote it more could be:

  • Including it on posters around the office when we hold events
  • Adding it to a welcome message to anyone joining Community Slack
  • Making sure 100% of our open source projects include it alongside the Contributors guide!


We found that most people feel welcome in the Keen IO Community. One way we have tried to achieve this is through our Community Code of Conduct. We also have a wide range of events in the office on everything from mental health in tech to communication labs. By holding events like these, we hope to create a community with the same values of Introspection, Continuous Learning, Personal Agency, Honesty, and Empathy that we hold internally. We have found that this welcomes a larger group of people into our community than standard developer events do.

In other places like Keen IO Community Slack, we have other community members helping each other as well as Keenies helping users. We aren’t available at all hours, but when we are we try to be as welcoming as possible. There are other small ways to be welcoming like sending out stickers, shirts, or cute little animals.

 

Right now, when you sign up for Keen IO, you get an email that invites you to pair with a developer at Keen IO. Our “Invitation to Pair” email also encourages users to ask questions and join other places like Community Slack for more help. You have to find what works for you to create a welcoming community.

Respondents also suggested that we could strengthen the community by communicating more about what other developers are doing with Keen.

Community Events


We found that only about 16% of respondents have been to a Keen event. Also, 16.63% of respondents live in the San Francisco/Bay Area, so we might be doing a great job of bringing in locals for the events in the Keen IO office. But what about the other 83%? In the past, Keen has done events all over the world. If you have ever run national or international events that involve travel, you know how time-consuming and costly they can be. Live streaming can bring a larger audience into your events when attendance is limited by location.


We thought the list of kinds of events respondents would like to see more of would be helpful to others too. Some of the ideas are outside the normal realm of events, for example: events working more with locals. This could manifest itself in a few different ways. You could do technical events with organizations like Hack the Hood, or you could choose to do an event where attendees volunteer their time to a local non-profit organization.


We found about a quarter of respondents say they never go to developer conferences or meetups. If we were only focused on conferences and meetups, we would never get to interact with these developers.

Thanks to everyone who generously took time out of their day to complete our survey. You can check out the full survey results by downloading them here.


An Open Source Conversation with OpenAQ


Last month, I sat down with Christa Hasenkopf and Joe Flasher from OpenAQ, one of the first open, real-time air quality data platforms, to talk about open environmental data, community building, analytics, and open source. I hope you enjoy the interview!


Taylor: Could you both tell me a little bit about yourselves, and how y’all got interested in environmental data?

Christa: I’m an atmospheric scientist, and my background for my doctoral work was on ‘air quality’ on a moon of Saturn, Titan. As I progressed through my career, I got more interested in air pollution here on Earth, and realized I could apply the same skills I’d gained in my graduate training to do something more Earth-centric.

That took Joe, my husband, and I to Mongolia, where I was doing research in one of the most polluted places in the world: Ulaanbaatar, Mongolia. As a side project, Joe and I worked together with colleagues at the National University of Mongolia to launch a little open air quality data project that measured air quality and then sent out the data automatically to Twitter and Facebook. It was such a simple thing, but the impact of that work felt way more significant to me than my research. It also seemed more impactful to the community we were in, and that experience led us down this path of being interested in open-air quality across the world. As we later realized, there are about 5–8 million air quality data points produced each day around the world by official or government-level entities in disparate and sometimes temporary forms but that aren’t easily accessible in aggregate.

Joe: I was trained as an astrophysicist, but then I quickly moved into software development, and so when Christa and I were living in Mongolia, I think we just sort of looked around and saw things that didn’t exist that we could make, and we went ahead and did that. Open data was always something that seemed like the right thing to do. Especially when it’s data that affects everyone, like air quality data. I think we had the tools together: I had the software development skills and Christa the atmospheric science to put things in place that could really help people.

Taylor: That’s awesome. Could you tell me more about the OpenAQ Project?

Christa: Basically what we do is we aggregate air quality data from across the world and put it in one format in one place, so that anyone can access that data. And the reason we do that is because there is still a huge access gap between all of the real-time air quality data publicly produced across the world and the many sectors for the public good that could use these data. Sectors like: public health research or policy applications, or an app developer who wants to make an app of global air quality data. Or say even a low-cost sensor group that wants to measure indoor air quality and also know what the outdoor air quality is like so you know when to open your windows if you live in a place like Dhaka, Bangladesh or Beijing, China. And so by putting the data in this universal format, many people can do all kinds of things with them.

Joe: Yeah, I think we’re just focused on two things. One is getting all the underlying air quality data collected in one place and making it accessible, and the main way to do that is with an API that people can build upon. And then we also have some of these other tools that Christa mentioned to help groups examine the data and look at the data, but meshing that with tools built by people in the community. Because I think the chances of building the best thing right away is very small. What we’re trying to do is make the data openly available to as many people as possible. Because a lot of these solutions are based in local context in a community.

Taylor: That’s really cool. I have heard from other organizations that when you open up the data, you democratize the data because it’s available for the people.

I read the Community Impact document for the project and you had mentioned that some researchers from NASA and NSF and UNICEF are using the data from OpenAQ. I was wondering, what are some other cool applications of the data that you are seeing?

Christa: I think when we first started the project it was all about the data. It was all about collecting the data, getting as much data as we could. And as we went on, we realized, pretty quickly, it’s actually about the community we are building around it and the stuff that people are building. And so there are a few different pieces.

One thing we have seen is a journalist taking OpenAQ-aggregated data to analyze air quality data in their local communities. There is a journalist in Ulaanbaatar, Mongolia, who has published a few data-driven articles about air quality in Ulaanbaatar relative to Beijing. There are some developers who have built packages that make the data more accessible to people using different programming languages.

There is a statistician in Barcelona, Spain, who has built a package in R that makes the data very accessible in R and makes cool visualizations. This person made a visualization where she analyzed fireworks across the US on the Fourth of July. She did a time series, and you could see a map of the US, and as 9pm rolled around in the various time zones you can see air quality change across the US as the fireworks went off.

There is a developer in New Delhi, India, who has made a global air quality app and Facebook bot that compares air quality in New Delhi to other places or will send you alerts. We feel these usages point to the power of democratizing data. No one person or one entity can come up with all the possible use-cases themselves, but when it’s put out there in a global basis, you’re not sure where it’s going to go.

Joe: We have also been used by some commercial entities to do weather modeling, pollution forecasting. Christa, there was an education use case right… Was it Purdue?

Christa: Yeah, a professor there is using it for his classroom to bring in outdoor air quality data to indoor air quality models. Students pick a place around the world. They use outdoor quality data from there to model what indoor air quality would look like, so they are not just modelling air quality data in Seattle, which is pretty good air quality. But they are also pulling in places like Jakarta or Dhaka, to see what air quality would be like indoors, based on the outdoor parameters.

Low cost sensor groups have contacted us because they are interested in getting their air quality data shared on our platform. These groups would like their data to be accessible in universal ways so that more people can do cool stuff with it too. Right now, for our platform, we have government-level data, some research-grade data, and a future direction we are hoping to move is low-cost sensors, too.

Taylor: As you have touched on, I read that OpenAQ has community members across four continents and has aggregated 16 million data points from 24 countries. I am curious, how were you able to grow the project to have all that data coming in?

Christa: We have a couple ways of getting the word out about OpenAQ and getting people interested in their local community and to engage with the OpenAQ global community. One way is we do in-person. We visit places that are facing what our community calls “air-inequality” — extremely poor air quality in a given location — and we have a workshop that will convene various people, not just scientists, not just software developers, but also artists, policy makers, people working in air quality monitoring within a given government, and educators. We focus on getting them all in the same room, working on ways they can use open data to advance fighting air inequality in their area.

So far, we’ve held a workshop in Ulaanbaatar, and we have had meetups in San Francisco and DC, since that’s where we’re based. We have also done presentations in the UK, Spain, and Italy. We are about to have our next workshop in Delhi in November. We’re getting the word out through the workshops, the meetups, on Twitter, and through our Slack channel. Participation in the OpenAQ Community has been growing organically, whether it’s on the development end, pulling in more data, or in the application of the data. We tend to get more people interested in using the data once they are aggregated rather than in helping to build ways to add in more data, which makes sense. We are always in need of more people to help build and improve the platform.

Joe: In the beginning it was very interesting how we decided to add in new sources — there are so many possible ones to add from different places. You could look at a map and see where we had been, because whenever we would go somewhere to give a presentation we would want to make sure we had local air quality data. So before we would give a presentation in the UK, we would make sure we had some UK data. Data has been added like that and according to interest for particular locations in the community.

An interesting thing that we are able to do now with the Keen analytics, is that we can look at what data people are requesting most, and even if we don’t have the data, they might still be requesting it. So we can see from the analytics where we should potentially focus on bringing in new data. So it has been a very helpful way for us to be more data-driven when looking at what data to bring in.

Taylor: When you have a project that is an open source or an open data platform, your time becomes very valuable. You want to put your resources where they are needed most.

Joe: We want to be as data-driven as possible. And it’s hard for us to talk directly to all of the people who are using the data. I think we have a similar problem to anyone who opens up data completely. We don’t require anyone to sign up for anything. We have a lot more people using the data than we know about. We can see just from how many times the data is getting grabbed that it is popular. The analytics really help us, sort of tell something about those use cases, even if we don’t know of them specifically.

Taylor: Could you explain your use of Keen for everyone so they can understand how you are figuring that out?

Joe: The API is powered by a Node.js application that includes the Keen library. Every request that comes in goes to Keen and so we have a way to sift through it.

We don’t track any use, any sign ups, any API keys or anything at the moment. We don’t see addresses that come in from the requests, they are anonymous. But we do get tons of data that we can look through. And that was super-helpful. It gave me two lines of code that go into my API and then all my requests come into Keen and I can handle all the queries there.

We do all the normal things that you would do: total counts of requests that are coming in, we look at our end points usage statistics. This is also very interesting, we were looking at this the other day, not all our endpoints are equal and our system has some that are much heavier computationally and have taken a lot more work to create. It’s interesting to look at how much they are getting hit versus how much effort we put into making them. We can see the most popular endpoints that we have, and then we can also see ones that aren’t used as much. This helps me figure out what and how to prioritize efforts. We have a very database request heavy system. Knowing specifically the sort of queries that are coming in really helps us optimize the database to get the most out of it and make it most cost efficient.

Taylor: That’s interesting that you were able to gauge how much effort you put into some of those endpoints and then look at their usage. When you don’t have that data, you are just guessing. It can also help you see that maybe there should be more education on some endpoints.

Why was it important to y’all for this platform to be open source?

Christa: So one of the major reasons we built this platform and made it open source is that we noticed that, for a few of the groups who were gathering this sort of data, the data themselves weren’t open, nor was it clear how they were gathered. There were a few efforts, some commercial, some unclear if they were commercial or public, there were some researchers who do this. And everyone was doing it in a different way or wasn’t entirely clear how it was being done. We saw a lot of efforts having to duplicate what another effort was doing because their work wasn’t open. So we thought if someone just makes the data open and also the platform itself open source and transparent, so it’s clear how we’re grabbing the data — that’s a huge reason to do it. The other reason was that when we first started this, there were just two of us in our little basement apartment. It’s a big project, and we knew we would need help. So making it open source was an obvious route to find folks interested in helping us around the world.

Joe: I think the other piece here is that open source and free aren’t the same thing. But they are often times lumped together. Beyond just open source, I think what we wanted to be was freely available, because air pollution disproportionately affects people in developing countries. They are the ones that would generally have to pay for this data or don’t have access to them at all. And so we wanted to break down that barrier and let everyone have access to the data, making tools, and not have that be a roadblock.

Taylor: To end things, what is the most exciting thing about the project to each of y’all?

Christa: I think for me it’s definitely interacting with people in specific communities and sharing the data in the open. I love that, it’s the best.

Joe: For me it is definitely having people build something on top of it. As a developer, that’s the best feeling. In fact the first workshop we did in Mongolia, there was a developer who, just over the weekend, built an interface, like a much better exploration interface for the data than what I had initially made. Which was great, right? So I think we used that, and pointed people to that over and over and over again, because I think it took us probably, I don’t know, six months until we finally rolled out sort of a different exploration interface for the data. And that was just made by one community member and that was awesome.


I wanted to thank Christa and Joe for taking the time to talk to me about OpenAQ. I don’t know about you, but I learned a lot! It is a wonderful project that you should definitely check out.


Keen IO has an open source software discount that is available to any open source or open data project. We’d love to hear more about your project of any size and share more details about the discount. We’d especially like to hear about how you are using Keen IO or any analytics within your project. Please feel free to reach out to opensource@keen.io for more info.


Introducing the Keen IO Community Code of Conduct

A few weeks ago we sent an email to the whole company introducing the Keen Community Code of Conduct. This blog post includes most of that email with a few more things added that we wanted to share with our community.

A few months ago the work began on the Keen IO Community Code of Conduct. We’re very excited to announce v1.0 of the Keen IO Community Code of Conduct is now public. 🎉

This Code of Conduct applies to all Keen IO Community spaces, such as the Community Slack group, open source projects, Keen IO meetups, Happy Data Hours, and more! It will be added over the next few weeks to different projects and other community spaces.

It is the product of many meaningful conversations and advice from many Keenies and other humans from outside of Keen IO. To anyone that contributed to this Code of Conduct, thank you. The process of creating a document like this isn’t easy, and we have so much respect for anyone who has done it before.

The Code of Conduct is a living document. This is only v1.0. It will grow and change with Keen IO and its community. This is why it is on GitHub. Issues can be created to help with revisions and updates. There is also a feedback form, which can be filled out anonymously. Feedback is always appreciated. It will also help guide training and more internal procedures for the Community Code of Conduct.

Lastly, we’re looking forward to making it even clearer to our community that we are dedicated to providing a safe, inclusive, welcoming, and harassment-free space and experience for all community participants, which will help grow our community in amazing ways. We hope this Code of Conduct clearly states what behavior is expected and not tolerated as well as establishes a path for community members to report possible incidents and seek help.

Please feel free to ask me any questions! I would be more than happy to have a larger conversation about it and its existence. 😀


Discover Gold with Keen IO and Popily


Have you ever been collecting a lot of awesome data, but felt like it was a vast jungle of hidden gems with no idea where to start exploring it? Popily + Keen IO are here to help.

Popily can instantly provide you tons of charts — you can pick out your favorites or ones that help you dig deeper into your data. Popily can impress your boss, team, customers, cats, and dogs. You can import data from anywhere into Popily — CSVs, databases, and more. With Keen IO, we’ve got you covered. No fancy data ninja skills are needed to import your event data from Keen IO into Popily. It’s as easy as three steps.

Find the gold in your data

Discover mind-blowing, meaningful insights in Popily. You can easily extract different data sources from Keen and other sources and merge them in Popily to get a better view into your data. Sometimes you won’t find what you are looking for right away — exploring Popily’s charts can help you dig deeper in your Keen IO data!


Share the gold you discover

When you find something interesting, Popily makes it super easy to communicate it in a meaningful way. You can export its charts as images for PowerPoint or Excel, or embed them with a few lines of code. You can even embed them alongside your Keen IO data visualizations.

Need direct API access?

We’ve got you taken care of! You can build your own data collection and exploration engine with API access to both Keen IO and Popily. Collect and store data from users, websites, apps, and smart devices with the Keen IO API and SDKs, and then explore and visualize that data with Popily. You can see more about the Popily API on their blog and API Documentation.

Don’t miss out on loading your Keen IO data into your free Popily account today! Also, check out how exactly the Popily team sent UFO sighting data from Keen IO to Popily for a bunch of instant charts in their latest blog post.



So You Want Developers to Love Your API

 

In November, I gave a talk at the API Strategy and Practice Conference in Austin. It was my first time attending and speaking at the conference. I can’t say enough great things about it. I called the talk, “Building Community with Developer Love,” which was perfect for the conference because it was extremely welcoming and had a great community feel to it.

Some of the first things developer-facing companies focus on are great documentation, code samples and sample projects, and education through tutorials, how-tos, and blog posts. If, as an API company, you aren’t focusing on these, you might want to reconsider some of your decisions. These are all great things and very much needed, but sometimes there’s one thing missing from the focus… developer love! Developer love can help you grow your community, and in turn, your platform.

The Sweet Spot

There’s a sweet spot created by documentation, code, and education combined with developer love. The more you develop the developer love circle, the easier it is to find that sweet spot.

A lot of times developer facing companies think, “I’m a developer, so I know how to relate to other developers to help grow a community.” While this is partially true, developer communities are made up of people from all kinds of backgrounds and experiences.

The great thing about developer love is that it can be universally applied within a community.

Two Way Street


People ask themselves two things when they join a community: Are they like me? Will they like me? To help answer these two questions, you have to be clear about what you stand for as a platform and as a community and make people feel welcome.

This is where developer love comes in. Developer love is both loving developers and developers loving your platform. It is a two way street: To get love, you must give love. 💕

Being able to engage, build relationships, effectively articulate, and be an active participant in developer communities is great. It’s important to remember, however, that it isn’t all about you. Don’t expect everyone to go crazy over your latest blog post just because it’s good content. Unless your community is already loving your blog and platform, they won’t be sharing it with their friends like you had hoped.

What Are Examples of Developer Love?

I find that it breaks down into four categories: Sharing, Face to Face, Snail Mail and Email, and Support.

See the video from the talk to see some great examples of all of these:

See the slides here.

Also, make sure to check out the rest of the videos from the Developer Experience session of the conference. There are some extremely helpful and awesome ideas in the session.

Ask Why

Lastly, it’s important to always ask why. Developer Relations, Advocacy, Evangelism, and Community work should always be full of conscious decisions. Should you write a blog post highlighting a cool feature or send out an email to pair program with new users?* Always ask yourself, “Why am I doing this?”

*Hint: It is a good idea to share knowledge, not features.

Now, go spread some developer love!



High Fives at API Strategy and Practice

Greetings from Austin, Texas

Next week, I’m super excited that the API Strategy & Practice Conference is coming to Austin. I could write a novel-sized blog post about how much I love Austin, but I’ll just settle for saying: it’s a pretty great city, though I might be slightly biased since I’m a local and Austin is home.

Austin has a spectacular developer community. I started participating in the Austin developer community while studying Computer Science at The University of Texas at Austin. It has a small town feel for such a large and growing city. I would not be where I am today without it. I’m glad we can share a bit of it with the APIStrat community.

The API Strategy & Practice Conference and its organizers have a long history in the space, and they have become one of the leaders in the field of APIs. From business models to API design and evangelism, there’s a wealth of knowledge being shared and collaborated on throughout the conference. Before I joined Keen IO, I remember watching a fireside chat from APIStrat 2014 featuring one of Keen’s founders, Kyle Wild. For me, the stage at APIStrat had a strong sense of authenticity.

A few of my favorites

I’d love to be a resource for those visiting for the conference, so here are a few of my favorites:

Please let me know if you need anything else or other local favorites on Twitter or via here.

How to find me

If you’ll be around the conference and want to meet up, feel free to reach out to me if you want to chat about Keen IO, event data, analytics, community building, hackathons, or anything else. I will also have some sweet limited edition Keen IO Field Notes and stickers to share. I’d love to meet up!

You can also find me talking about “Developer Love” during the Developer Experience breakout at 2:45pm on Thursday the 19th, and on the API Consumption Panel with a bunch of other great people at 10:00am on Friday the 20th. A bunch of developer evangelists and advocates, including myself, will also be meeting up on Thursday from 6:30–9pm at Waller Creek Pub House for a meetup organized by the Austin Developer Evangelists group. You can RSVP and get more info here. Feel free to join!

I hope to see y’all in Austin soon! 😃


Introducing the Community Projects Page

A few weeks ago one of my teammates asked if I had any examples of projects using a specific piece of hardware. Between hanging out on Keen Community Slack and Twitter, I had seen a project using it and sent my teammate a link. Although a lot of the Keen team is heavily involved in our own community, some of us see a lot more than others, so I decided to start a page of all the projects I’ve seen that use Keen or that assist in using Keen (like a helper library), to help out the rest of the team.

While making it, I realized how awesome all these projects were. I wanted to share it with everyone as a helpful resource, not just the Keen team.

So I made this file in our Community team repository: Community Projects

Currently, there’s 20+ projects on the page. I hope you find something interesting, helpful, or cool. I know I missed some projects. This is where I need your help.


Please send pull requests to add your projects, your coworkers’ projects, your friends’ projects, and any other projects you’ve seen using Keen. Blog posts are great, but so is just sharing the source code in a repository. Even the smallest projects can help other community members, so don’t let the coolness or size of a project deter you from submitting a pull request for it.

We will be sharing the page with new and current users, so it is a great way to get some exposure to your projects.

A couple of the projects I thought were especially helpful were:

Making a dashboard with Keen IO and SendGrid’s Event Webhook data by Heitor Tashiro Sergent


Cohort Visualization Tool built with Meteor and Keen IO by Ry Walker

 

Also, I would like to give a shout out to everyone who has worked on these projects. Thanks for making the Keen IO community awesome!

Can’t wait for your pull request! 😃