Encouraging public API consumers to use Hypermedia

I’ve had a lot of great discussions at API Strat this year and it has inspired me to get back into writing. In the near future I’m going to convert my API World talk into a blog post and discuss some of my upcoming projects. For now, I want to lay out my thoughts on encouraging public API consumers to use hypermedia. To do that, I need to walk through three quick questions.

1. What is the goal of hypermedia?

There are many different reasons to use hypermedia, but I think the most immediate goal is to reduce an API consumer's dependence on hard-coded API actions. By allowing your API consumers to find these actions in an API response, the server can change the availability and implementation of these actions on the fly.

2. How do developers discover new features on your developer portal?

Do they come to a developer site looking for the “/me/likes/{video_id}” endpoint or are they looking for “how to like a video”?

I see a lot of reference docs fall into this pattern, using a URL path as the title of a segment of documentation:

PUT /me/likes/{video_id}

This endpoint likes a video.
- video_id: int, the id of a video

I think the documentation can be quickly improved with some organization and copy improvements.

Like a video

PUT /me/likes/{video_id}
 - video_id: int, the id of a video

New users don’t know whether your API is GraphQL, REST, gRPC, or something else. New users have a problem they need to solve, and the documentation explains how to solve that problem. We shouldn’t assume they have any knowledge of your URLs.

(Quick shout out to Keen.io, which does this well: https://keen.io/docs/api/?shell#get-project)

3. If an API feature is not documented, does it exist?

I would argue no. A new user finds a feature by searching through documentation; they are not randomly testing URLs. If someone does find a hidden URL, I don’t believe they should have any expectation of reliability or stability. If you want to treat these random-URL-finders with a little more respect, you can consider stricter access control protections such as URL signatures.

The Theory

Let’s recap

  • Hypermedia is about encouraging users to use in-response actions instead of hard-coded URLs
  • New developers learn about an API by researching the problem they are trying to solve and the actions they might take to solve that problem. These users do not go to your documentation and start looking for a specific URL
  • Undocumented features don’t exist

If we believe all of these ideas, let’s stop documenting any URL associated with a hypermedia control. Let’s treat those URLs like they don’t exist. Following this theory, what do our docs look like?

Like a video

1. First you need a Video representation. You can retrieve this representation from any of our video endpoints, e.g. “My uploaded videos”, “Get all of a channel’s videos”, “Video Search”, and so on.

2. Once you have the video representation, look in the JSON for the key "like". This key will contain either an object or null.

    a. If the value of "like" is null, you cannot like this video, and you should not allow your users to attempt to like it.
        {...
            "like": null
        ...}

    b. If the value of "like" is an object, that object is a hypermedia control. A like hypermedia control describes how you can make an HTTP request to like this video.
        {...
            "like": {"uri":"/me/likes/12345"}
        ...}

If you want to learn more about how to use hypermedia controls, read our guide here!

Or if we want the content to be a little lighter and focused on a quick reference it might look like this…

Like a video

The hypermedia control to like a video is located in the video representation under the key "like".

Read more about video representations here.
Read more about hypermedia controls here.
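In either version, the consumer’s client code ends up gating the action on the control rather than on a known URL. A minimal sketch (the function name is hypothetical):

```javascript
// Return the URI to use for liking a video, or null if the action is
// unavailable. This works purely on the representation, so the client
// never hard-codes the /me/likes/{video_id} URL.
function getLikeUri(video) {
    // A null (or absent) "like" key means this user can't like the video.
    if (!video.like) {
        return null;
    }
    // Otherwise the hypermedia control tells us where to send the request.
    return video.like.uri;
}
```

The client decides whether to show a like button based only on the response, and the server stays free to change the URL at any time.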

Real world examples

Vimeo has one (accidental) example of success with this strategy. Video uploads are documented here.

TL;DR:

  1. POST /me/videos, grab the upload_link and complete_uri from the response
  2. Send your video data to the upload_link
  3. When the upload is complete, make a DELETE request to the complete_uri

By leaving the URLs behind upload_link and complete_uri completely undocumented, we have been able to encourage this link-driven workflow for all API consumers.
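The workflow above might be sketched like this, where api is a hypothetical helper: api(method, url, body) performs an HTTP request and returns a Promise of the parsed response.

```javascript
// A sketch of the link-driven upload workflow. The client only hard-codes
// POST /me/videos; every other URL comes from the response.
// `api(method, url, body)` is a hypothetical request helper.
function uploadVideo(api, videoData) {
    var ticket;
    return api('POST', '/me/videos')
        .then(function (response) {
            ticket = response;
            // Send the video bytes to whatever URL the server handed us.
            return api('PUT', ticket.upload_link, videoData);
        })
        .then(function () {
            // Tell the server the upload is complete.
            return api('DELETE', ticket.complete_uri);
        });
}
```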

Has anyone attempted this? I would love to hear any thoughts or experiences in the comments or on Twitter.

Posted to Tech on 11/3/2017

September Conference Recap

Hey everyone,

In September I attended REST Fest and API World. Here are a couple of presentations I enjoyed, and links to my own talks.

REST Fest

Adam Kliment showed a cool little project called “Restful JSON”. Restful JSON is a formalized naming convention for URLs in JSON. I’m a big fan of standardizing small, composable pieces of JSON. Maybe next we can standardize forms or pagination.

Joshua T. Kalis discussed LuxUI, which tackles the idea of generating a web UI from hypermedia controls. I love seeing more people explore this idea. Django and Rails have automatic administration tools for their frameworks; maybe this can bring us closer to a more generic API solution.

My talk this year discussed two specific API design strategies Vimeo uses for video privacy.

REST Fest is always full of great talks. I recommend digging into all of this year’s videos.

API World

Mike Stowe presented a project called REST API multiple-request chaining. This project aims to standardize a JSON format that combines many HTTP requests into a single request. We experimented with something similar at Vimeo to speed up web pages and mobile apps, but this project is a much more mature solution. I’m going to watch it evolve, and learn from his findings.

I was unable to attend Dan Schafer’s presentation on GraphQL, but we spoke later in the day. It was great to cut through the hype and hear architectural opinions right from the source. Our discussion reinforced my belief that GraphQL is a great solution to some very specific problems, but the hype is encouraging misuse.

API World did not record the presentations this year, but you can watch a previous recording of Dan’s presentation here.

I also spoke at API World about developer experience, and how to manage your API as your company changes. My slides are available here.

API Strat

I will also be at API Strategy and Practice this week. Track me down and say hi, I want to know how you are using and building APIs.

If you are not attending this year, reach out to me on Twitter, I’m always happy to talk.

Posted to Tech on 11/1/2017

API World

At API World I spoke about managing change in APIs. You can see part of the talk below, see my slides here, or learn more about the conference here.

Posted to Tech on 9/27/2017

How Vimeo’s API Handles Complex Video Privacy

Or watch on Vimeo

Posted to Tech on 9/16/2017

When Hypermedia Saves the Day

Or watch on Vimeo

Posted to Tech on 9/17/2016

Designing APIs that change

Your API is a long-term promise.

I promise that my API actions, and how they behave, will work as described in the documentation.

This promise is hard to uphold. Time isn’t explicitly defined in that agreement. The API creator may think they can change their API whenever it’s necessary. The API consumer probably thinks the API will remain the same forever.

These two assumptions directly oppose each other.

APIs that want to iterate and improve their product could email all of their consumers, asking them to update their integrations. But what if those companies don’t have full-time software engineers on staff? What if they have a 6-month turnaround on their engineering process? What if they don’t have anyone listening to the email address attached to their API account?

APIs that want to create a perfectly stable platform could write their API once, and never make any changes. But what if you need to deprecate a feature completely? What if your website grows and evolves, leaving your API as a confusing, inconsistent facet of your product? What if you notice a glaring oversight in your original design?

In the real world, it’s more complex. I’ve been working on a frequently changing API for many years now, and would like to share some of the techniques I’ve learned to achieve long term stability for your API.

  1. Understand the limitations of your API consumers.
  2. Design APIs with change in mind.
  3. Design and develop APIs with your support team in mind.
  4. Be creative and retain backwards compatibility.

I’m going to expand on all of these topics in future posts. Follow me, or check back later for more updates!

Posted to Tech on 8/6/2016

Coroutines

Let’s get started

If you are working with JavaScript, there’s a good chance that you have a ton of promises or callbacks nested over and over again. Promises helped me clean up the numerous callbacks, but coroutines really took it to the next level. Coroutines allow you to remove callbacks entirely, and write asynchronous code that looks completely synchronous. In a couple of quick steps, I’ll show you how to simplify your promise-based code by converting to coroutines.

Note: This article briefly talks about generators. If you would like a more thorough description, check out my article on generators!

Let’s start with an example

Here’s an example using only promises. I’ve made it a little complex to really show off how powerful coroutines are. Throughout the rest of this post I’ll walk you through the conversion process.

Note: The request method performs an HTTP GET request on a URL, and returns a Promise.

Example 1

function GET() {
    // Make an HTTP GET request to http://www.dashron.com
    return request('http://www.dashron.com')
        .then(function (json) {
            // parse the response
            json = JSON.parse(json);
            // Request a couple more web pages in response to the first request
            return Promise.all([
                request('http://www.dashron.com/' + json.urls[0]),
                request('http://www.dashron.com/' + json.urls[1])
            ])
            .then(function (pages) {
                // Build the response object
                return {
                    main: json,
                    one: pages[0],
                    two: pages[1]
                };
            });
        })
        .catch(function (error) {
            // Handle errors
            console.log(error);
        });
}

// When the GET request is complete, log the response, which is a combination of all responses
GET().then(function (response) {
    console.log(response);
});

Create a coroutine

First you need to create a coroutine. I’ve written a library (roads-coroutine) that helps you build coroutines. This library exposes a function, which takes a generator function as its only parameter and returns a coroutine.

Example 2

var coroutine = require('roads-coroutine');

var GET = coroutine(function* GET() {
    // ... Removed for brevity
});

GET().then(function (response) {
    console.log(response);
});

Find your promises, and add yield statements

Find all your promises, and put yield directly in front of each (without removing anything!). yield is a keyword that can only be used inside generators. In the example below, it was added before the request and Promise calls. When everything is done, yield acts like an asynchronous equals sign: it will wait until the promise resolves, and pass the result to the left. If the promise is rejected, yield will throw the appropriate error.

Example 3

var coroutine = require('roads-coroutine');

var GET = coroutine(function* GET() {
    // Make an HTTP GET request to http://www.dashron.com
    return yield request('http://www.dashron.com')
    .then(function (json) {
        // parse the response
        json = JSON.parse(json);
        // Request a couple more web pages in response to the first request
        return yield Promise.all([
            request('http://www.dashron.com/' + json.urls[0]),
            request('http://www.dashron.com/' + json.urls[1])
        ])
        .then(function (pages) {
            // Build the response object
            return {
                main: json,
                one: pages[0],
                two: pages[1]
            };
        });
    })
    .catch(function (error) {
        // Handle errors
        console.log(error);
    });
});

// When the GET request is complete, log the response, which is a combination of all responses
GET().then(function (response) {
    console.log(response);
});

Kill your promise handlers

Now that yield handles your promise functions for you (by returning the result, and throwing the rejection), you can just use normal variables and try/catch. In Example 4 I remove all then and catch statements, and replace them with variables and a try/catch.

Example 4

var GET = coroutine(function* GET() {
    try {
        // Make an HTTP GET request to http://www.dashron.com
        var json = yield request('http://www.dashron.com');

        // parse the response
        json = JSON.parse(json);
        // Request a couple more web pages in response to the first request
        var pages = yield Promise.all([
            request('http://www.dashron.com/' + json.urls[0]),
            request('http://www.dashron.com/' + json.urls[1])
        ]);

        return {
            main: json,
            one: pages[0],
            two: pages[1]
        };
    } catch (error) {
        // Handle errors
        console.log(error);
    }
});

// When the GET request is complete, log the response, which is a combination of all responses
GET().then(function (response) {
    console.log(response);
});

Notice that all of the nesting is gone. Instead of making more requests inside the then of the first promise, yield handles the waiting for you. The above code looks synchronous, and is much easier to read.

Aesthetics aside, this solves one major headache with promises. With promises, if you ever forget a catch, your error will be ignored and lost forever (or caught by a hard-to-manage unhandled rejection handler). With coroutines, your exceptions will be thrown as expected, and can be processed as you see fit.

Let’s make it official

ECMAScript 2017 is adding two new keywords to support this pattern natively: async functions instead of generators, and the await keyword instead of yield.

async function main() {
    console.log(await request("http://dashron.com"));
}

There are some minor differences that make the native system better (such as order of operations), but it will be a while until it’s available for use. In the meantime, keep using roads-coroutine!
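For reference, Example 4 rewritten with the new keywords might look like this, using the same hypothetical request helper (an HTTP GET that returns a Promise):

```javascript
// Example 4, rewritten with async/await. No coroutine library is needed:
// `async function` replaces the generator, and `await` replaces `yield`.
// `request` is the same hypothetical GET-a-URL helper from Example 1.
async function GET() {
    try {
        // Make an HTTP GET request and parse the response
        var json = JSON.parse(await request('http://www.dashron.com'));

        // Request a couple more web pages in response to the first request
        var pages = await Promise.all([
            request('http://www.dashron.com/' + json.urls[0]),
            request('http://www.dashron.com/' + json.urls[1])
        ]);

        // Build the response object
        return { main: json, one: pages[0], two: pages[1] };
    } catch (error) {
        // Handle errors
        console.log(error);
    }
}
```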

Posted to Tech on 5/17/2016

Generators

Generators were introduced in ES6, and are available on these platforms. While I have not used generators in the browser yet, I use them heavily in server-side io.js.

Why should I care?

In my opinion, the number one reason to use generators is to clean up asynchronous code. Generators can also be used to create array-like objects, but their interactions with promises are incredibly powerful. This article will explain generators; a future article will explain how they apply to cleaning up asynchronous code. For now, I want to take you through the unique ways in which generators differ from normal functions.

Overview

First things first, here is a quick overview of how generators work. Some of this might not make sense yet, so take a quick glance and then read the full tutorial below.

Generator

Example 1

function* doStuff(value) {
    var foo = yield value;
    return foo;
}

Some notes about the generator:

  • Must be declared as function* (with an asterisk).
  • Should contain one or more yield statements.
  • Returns an iterator, not the function’s return value.
  • Starts in a paused state until you call the next() method of the iterator.
  • yield will also pause execution of the generator until the iterator allows it to continue via the next() method.
  • To pass data out of the generator, you must yield or return your value. This value will be part of the object returned by the iterator’s next method.

Iterator

Example 2

var iterator = doStuff("banana");
var result = iterator.next();
while (!result.done) {
    result = iterator.next();
}

Some notes about the iterator returned by a generator:

  • The most important method is next(), which will resume execution of the generator until it hits the next yield statement, or the function has completed its execution.
  • To pass data into your generator, provide it as a parameter to next() on the iterator. It will become the return value of the paused yield statement. This is optional.
  • Each call to next() returns an object with two properties, value and done.
  • value contains the current value of the iterator. In this case the yielded value.
  • done will be true if the function has completed execution.
  • If you want yield to throw an exception instead of returning a value, call the iterator’s throw() method instead of next().

Ok, tell me more

Generators are different from normal functions in four ways:

  • Generators must contain an asterisk (*) next to the function keyword (e.g. function* doStuff()). This defines the function as a generator, instead of a normal function.
  • Generators can contain yield statements. (e.g. var x = yield foo();).
  • Generators do not return your return value, they return an iterator.
  • Generators are not executed at the time they are invoked.

Yield

Before we go into why or how we use a yield statement, let’s just talk about the syntax. The following example is a fairly basic line of code. We will compare that line to one with a yield statement.

Example 3

result = encodeURIComponent("http://www.dashron.com");

As you are probably aware, the above code is executed in two easy steps:

  1. The assignment operator (=) requires a value on the right, so encodeURIComponent is called with a parameter.
  2. The assignment operator then puts the return value of encodeURIComponent into the variable, result.

So, what happens if you add a yield statement?

Example 4

result = yield encodeURIComponent("http://www.dashron.com");

At this level, yield acts a bit like an assignment operator.

  1. The assignment operator requires a value on the right, so we have to process the statement yield encodeURIComponent("http://www.dashron.com").
  2. The yield statement also requires a value on the right, so encodeURIComponent("http://www.dashron.com") is executed with the string parameter.
  3. yield takes the return value of encodeURIComponent(), performs a little bit of magic (more on this later), and passes a value to the assignment operator.
  4. The assignment operator then puts the return value of the yield statement into the variable, result.

Note: Unlike the assignment operator, yield does not need a variable to its left. Like a function, you can use parentheses to interact with the return value in place. For example, the following is valid:

Example 5

result = (yield encodeURIComponent("http://www.dashron.com")).length;

So what can yield do? A lot actually. It’s a little complicated, so let’s go over it step by step.

Now things get a little weird

yield pauses your function, and allows you to resume execution at any time. I want to get that out of the way first, because it’s not something you see outside of generators. In fact, you don’t even need to use yield; generators always start out paused. To see how this works, let’s check out a generator example without any yield statements:

Example 6

function* doStuff() {
    return "Noses on dowels";
}

var result = doStuff();
var nextResult = result.next();

In Example 6, result does NOT equal "Noses on dowels". result contains an iterator. This object is the “remote control” of your generator. Its most important method is next(). Every time you call next() on your iterator, the function will execute up until: (1) it encounters a yield statement; or (2) the function has finished execution. Here, result contains your iterator, and nextResult contains information about the current iteration (in this case, { value: "Noses on dowels", done: true }).

Now let’s add a couple of yield statements into the mix:

Example 7

function* doStuff() {
    var catchphrase = yield "Didja get that thing I sent you";
    var finalphrase = yield catchphrase;
    return finalphrase;
}

var result = doStuff();
var nextResult = result.next();
var secondResult = result.next("Blackwatch Plaid");
var finalResult = result.next("Happy Cake Oven");

Each time you call next(), it executes part of the doStuff() function. Let’s break down Example 7 into each call to next().

The first call to next()

Any time you call next() it behaves identically, except for the first and last calls. Let’s walk through each next() call in order, starting with var nextResult = result.next();. This call will execute the code shown in Example 7.1.

Example 7.1

yield "Didja get that thing I sent you";

Notice that the code to the left of the yield statement (var catchphrase =) is not shown in Example 7.1, because it is not executed at this time. That’s because the yield statement pauses execution before the assignment can happen! You must interact with your iterator to continue to the rest of the code. So let’s review the second next() call, var secondResult = result.next("Blackwatch Plaid");. This call will execute the code shown in Example 7.2.

The standard call to next()

Example 7.2

var catchphrase = yield
yield catchphrase;

The first line of code in Example 7.2 needs to assign a value to the variable catchphrase. The assignment operator is expecting a value from the yield statement, and this value is provided by the iterator’s next() method. Example 7.2’s code is executed when you call result.next("Blackwatch Plaid");, so yield returns "Blackwatch Plaid".

Example 7.2 above is important, and worth re-reading. This is the standard behavior of an iterator’s next() method. Every time you call next(), a chunk of your generator will be executed, until there is no code left to run. Once there is no code left, next() stops doing anything useful (it simply returns { value: undefined, done: true }), so you need to keep track of one more piece of information: the done property.

The third (and final) call to next()

Example 7.3 demonstrates the final code in this generator’s execution.

Example 7.3

var finalphrase = yield
return finalphrase;

This contains everything that is executed between the final yield and the return statement. In Example 7, this code runs the third time next() is called. Calling next() a fourth time is not terribly useful; it will return { value: undefined, done: true } without executing any code. To make sure you don’t call next() unnecessarily, you need to keep an eye on the return values of next(). Each time next() is called it returns an object with two properties.

  • value: This depends on the execution. If this is not the final next() call, it will contain the yielded value. If it is the final next() call, it will contain the returned value.
  • done: true if the generator has completed execution. false otherwise.

So if done is true, you should stop calling next().
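Putting it all together, here is Example 7 again, with a comment showing the object each next() call returns:

```javascript
function* doStuff() {
    var catchphrase = yield "Didja get that thing I sent you";
    var finalphrase = yield catchphrase;
    return finalphrase;
}

var iterator = doStuff();
var first = iterator.next();                    // { value: "Didja get that thing I sent you", done: false }
var second = iterator.next("Blackwatch Plaid"); // { value: "Blackwatch Plaid", done: false }
var third = iterator.next("Happy Cake Oven");   // { value: "Happy Cake Oven", done: true }
var fourth = iterator.next();                   // { value: undefined, done: true }
```

Notice that the return statement’s value shows up with done: true, and every call after that just repeats { value: undefined, done: true }.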

Example 7 did not make use of the done property because it wasn’t necessary. done is used most commonly in more complex code, so let’s jump into our final example.

Yield with loops

Example 8

function* getTen() {
    for (var i = 0; i < 10; i++) {
        yield i;
    }
}

var gen = getTen();

Notice that the generator in Example 8 only has one visible yield statement. This does not mean that the function execution will only be paused once. Because the yield is inside a for loop, each iteration of the loop will reach the yield and pause execution. This specific function will pause execution 10 times, sending out a number each time (0 through 9).

To fully execute the generator you will need to call the next() method many times. I’m lazy, and I don’t want to copy the next() call over and over again. Instead, we can throw next() into a loop and check the return value each time. next() returns the object mentioned above (with Example 7.3), so you should watch its done property. As long as done evaluates to false, we can continue to call the iterator’s next() method.

Example 9

var progress = null;
do {
    progress = gen.next();
    console.log(progress.value);
} while(!progress.done);
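As an aside, generators are iterable, so ES6’s for...of loop can handle this done-checking for you. Applying it to Example 8’s getTen (note that for...of only sees yielded values, and discards a generator’s final return value):

```javascript
function* getTen() {
    for (var i = 0; i < 10; i++) {
        yield i;
    }
}

var values = [];
// for...of calls next() behind the scenes and stops once done is true.
for (var value of getTen()) {
    values.push(value);
}
// values is now [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```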

And now we’re done! Your generator will be processed completely, hitting every yield statement until the function is complete. But what does this have to do with asynchronous code and callbacks? I will be writing more on that in the near future, so check back soon!

Posted to Tech on 10/3/2015

Quick notes on setting up Amazon s3 CORS headers

It took me way too long to figure out how to get S3 CORS headers working, so here are my notes.

  1. In the S3 interface, click the magnifying glass icon to the left of your bucket.
  2. Click the “Edit CORS Configuration” button. It should be right next to “Add Bucket Policy”
  3. You should already have a CORS XML file in here; if not, mine looked like this:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
</CORSRule>
</CORSConfiguration>
  4. This CORS configuration allows all websites to perform GET requests against this resource.
  5. To reference the file, you must use the URL structure [bucket].s3.amazonaws.com/[object]
  6. If using an img tag, it must contain the attribute crossorigin="anonymous". Read more here.

Check out MDN for more information about CORS headers.

Posted to Tech on 7/5/2014

API Dublin

At API Dublin I spoke about Vimeo’s upload API, and how we rebuilt it from the ground up.

Learn More

Posted to Tech on 3/31/2014