This is Part Three of Three in a series that identifies opportunities for improving the experience of working with open data. In Part One we established that there are at least two primary opportunities. Part Two covered the first: improving the UX of API documentation. In this article we look at the second opportunity: improving the UX around the APIs themselves.
Our customer journey took us through the documentation for a few APIs as we built my hot new app, Kittinder, an app for setting up play dates for your cat. The APIs come from PetCensus, the ASPCE (American Society for Promoting Cat Emotions), and Tinder. As we look at the APIs, note the different facets of UX that could use some help, especially:
- Interaction Design
- Information Architecture
- Visual Design
Our main goal is to keep software engineers interested in the APIs as well as to provide good data. That means giving them an interactive experience that gets them up and running as quickly as possible.
Ultimately, software such as a web page or a mobile app will be “reading” the data from the API. But initially engineers will be looking directly at the data as they start working with it. The methods (or calls) for getting the data should be intuitive and the engineers should be able to anticipate what the APIs will return.
Requesting Data Should Be EZ-PZ
First things first: provide useful methods. Since I’m looking for the number of cats in a specific area, the PetCensus API should offer a method for getting the count of particular cats by zip code. PetCensus comes through with just such a method.
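A count-by-zip-code call might look like the following sketch. The endpoint, host, and parameter names here are my own invention for illustration, not an actual PetCensus interface:

```python
from urllib.parse import urlencode, urlunsplit

def count_cats_by_zip(zip_code: str) -> str:
    """Build a hypothetical PetCensus request URL for a cat count by zip code."""
    query = urlencode({"species": "cat", "zip": zip_code})
    return urlunsplit(("https", "api.petcensus.example", "/v1/count", query, ""))

# 20020 is a zip code in southeast Washington, DC
print(count_cats_by_zip("20020"))
# → https://api.petcensus.example/v1/count?species=cat&zip=20020
```

The point is that the method name and parameters should be guessable: a developer who knows what they want should be able to write the call before reading the reference.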
Without testing it out, I already know that this will give me the number of cats in southeast Washington, DC.
Now let’s take this a step further and see if they’ve standardized their queries. Instead of zip code, I make the call for a particular breed.
I’m not disappointed: PetCensus returns the number of Tonkinese throughout the U.S. Add the zip code back in and we should be able to predict what we’re going to get back. Should we even bother reading the documentation?
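Sketching that standardization with a hypothetical endpoint of my own invention: when queries are consistent, swapping or combining filter parameters is purely mechanical, and each result is predictable from the last.

```python
from urllib.parse import urlencode

# Hypothetical PetCensus endpoint, for illustration only
BASE = "https://api.petcensus.example/v1/count"

def count_url(**filters: str) -> str:
    """Compose a count query from any mix of standardized filter names."""
    return BASE + "?" + urlencode({"species": "cat", **filters})

print(count_url(breed="Tonkinese"))               # breed alone: nationwide count
print(count_url(breed="Tonkinese", zip="20020"))  # breed + zip: predictable combination
```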
Another consideration is the metadata vs. the data itself. If it makes sense, create two APIs, one for the metadata and one for the data. That way when the engineer calls for the data, each call is not bogged down with redundant metadata. If it doesn’t make sense to split them into two APIs, then at least group all of the metadata together under one heading at the beginning of the results and group the data under a separate heading.
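As a sketch of the second option (grouping rather than splitting), a response might state the metadata once at the top and keep the records clean underneath. The field names here are invented for illustration:

```python
import json

# Hypothetical response shape: metadata grouped under one heading,
# data under another, so per-record entries aren't bogged down with
# repeated metadata.
response = {
    "metadata": {"source": "PetCensus", "units": "count", "as_of": "2015-06-01"},
    "data": [
        {"zip": "20019", "count": 1204},
        {"zip": "20020", "count": 983},
    ],
}
print(json.dumps(response, indent=2))
```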
Upon opening the “package,” the data should be logically organized and follow a consistent architecture. When I make the count call to PetCensus they return the results by ascending zip code – very easy for me to read. Furthermore, the fields for each zip code are uniform:
- zip code
Again, easy to read through quickly and grok.
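To show why uniform fields matter, here is a hypothetical results list in that shape. Because every record has the same keys and the list arrives in ascending zip order, the reading code stays trivial:

```python
# Hypothetical PetCensus results: ascending zip codes, identical fields per record
results = [
    {"zip": "20019", "breed": "Tonkinese", "count": 412},
    {"zip": "20020", "breed": "Tonkinese", "count": 389},
]

for record in results:
    print(record["zip"], record["count"])

# Uniform shape and stable ordering are what make the scan-and-grok possible
assert all(record.keys() == results[0].keys() for record in results)
assert [r["zip"] for r in results] == sorted(r["zip"] for r in results)
```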
Conform Data Types
I mentioned in Part Two that any API that requires a controlled vocabulary or taxonomy should document it. Similarly, the APIs should use those same vocabularies consistently. When I make a call for American Shorthairs with one API, I should use that same term for calls to any of the APIs from that same organization. Don’t cut corners and use just Shorthair or Alley Cat for the other APIs.
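One way to enforce this on the client side is to validate terms against the documented vocabulary before ever making a call. The taxonomy and helper below are my own sketch, not part of any real API:

```python
# Hypothetical shared taxonomy: one controlled vocabulary reused across every API
CAT_BREEDS = {"American Shorthair", "Tonkinese", "Siamese"}

def breed_param(term: str) -> str:
    """Reject terms outside the controlled vocabulary instead of silently renaming."""
    if term not in CAT_BREEDS:
        raise ValueError(f"{term!r} is not in the shared breed taxonomy")
    return term

print(breed_param("American Shorthair"))  # the same exact term works on every API
```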
When I call on the ASPCE for the various cat dimensions, I expect that they will also deliver the data consistently. For example, for weight they should give me pounds + ounces whether ounces is greater than zero or not. If ounces = 0 and the ASPCE decides to save on bandwidth by not storing or returning that value, then I’m probably going to think there’s a problem with the API rather than assume the value was zero and they skipped it.
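A consuming developer will end up writing exactly this kind of defensive check. In this sketch (field names invented for illustration), a record that omits ounces reads as a broken API, not as a zero:

```python
# Hypothetical ASPCE weight record: ounces is always present, even when zero,
# so a missing field can only mean something went wrong upstream.
def format_weight(record: dict) -> str:
    if "ounces" not in record:
        raise KeyError("weight record is missing 'ounces'; is the API misbehaving?")
    return f"{record['pounds']} lb {record['ounces']} oz"

print(format_weight({"pounds": 9, "ounces": 0}))  # → 9 lb 0 oz
```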
Finally, practice good usability and be helpful with your error codes. If I make a call outside the taxonomy, tell me that a Basset Hound is not a cat rather than returning an Unauthorized Error.
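Side by side, the difference is obvious. Both payloads below are invented for illustration; only the second gives the caller a way to recover:

```python
# Hypothetical error payloads for the same bad request
unhelpful = {"status": 401, "error": "Unauthorized Error"}
helpful = {
    "status": 400,
    "error": "unknown_breed",
    "message": "'Basset Hound' is not a cat breed; see /v1/breeds for valid values",
}
print(helpful["message"])
```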
Insist on Happy UX
Help the product owners who manage an organization’s open data suite create a happy user experience for the developers who come to their site. Jump in early and don’t back down. They need to know that good UX can make or break an open data project. Remember from Part One that an organization opens its data to make money, to spread its brand, to expand its reach, or all three. It makes no sense to spend resources opening data only to push away developers (the data customers) with bad UX.
Different people have different stakes in the UX of open data, and you can’t ignore any of them:
- Data stewards maintain and provide the data that the organization is opening.
- The organization’s developers build the APIs that it publishes.
- Documentation writers and web designers teach data customers how to use the data.
- Data customers build apps, tools, and software off the data – they read the documentation, work with the data, build stuff, and provide valuable feedback.
- Data customers’ users access the organization’s data through the data customers’ apps and tools.
When all of these people are happy, you have a well-oiled machine that meets the organization’s needs.
Remember, organizations often open up swaths of data with no mind to UX, so making improvements can feel daunting when the problems seem so overwhelming. Take heart: start small, make incremental improvements, and iterate from there.
And one more parting thought: organizations with open data often host hackathons to encourage developers to build things with their data. Turn those hackathons into your own usability test sessions. Walk around, see where developers get stuck in the documentation, and hear where they’re cursing out the APIs. Then introduce yourself and let them know you’re there to make them happy.
Go get ‘em!