Dashron V5 — Full Synthwave

Today V5 of Dashron.com went live. Here are the details.

Design

As you can see, the new Dashron V5 design was heavily inspired by synthwave visuals. I dug through band posters, pamphlets, and other design examples and came up with this. The theme sometimes sacrifices aesthetics for readability, but overall I think I achieved the vibe I was looking for.

Some additional design notes:

  • There are no images in this design. Instead of images I use SVGs, and usually write them by hand.
  • The background is a modified version of this codepen. I felt the scroll was too distracting, so I slowed it down.
  • The “Dashron” font is called “Razor”, and is found here. The neon glow CSS is from this article.
  • The rest of the fonts are called “Poppins” and found here. I like how the bold header gives old VHS case vibes.
  • I got help making the gradients from CSS Gradient.

Authoring

The authoring flow remains the same as described in Dashron.com V4. I write content in WordPress and it’s automatically picked up by the website.

Backend

Roads still powers the backend, but roads-starter is out of the picture. I wanted to take a different approach, and did so as a part of my side project, Dungeon Dashboard. I hope to release my changes at some point in the future. I felt the organization of roads-starter didn’t fit my needs with maintaining multiple sites.

Templates

V4 used Handlebars for templates, but I have moved V5 over to React. You might wonder why I use React when I have no front end JavaScript. There are two main reasons for this.

First, I adore JSX.

  • This entire project is TypeScript, so having type hints/safety all the way through to the HTML is wonderful.
  • Having clear imports for components makes debugging far easier than tracking down template files based on convention, or partial file paths.
  • I prefer to put rendering logic in the template rather than the controllers, keeping it in the same file as the HTML and far away from the data retrieval. This is difficult with Handlebars, which has limited functionality. With React, I can transform anything into HTML, any way I want.
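
For example, a component might look like this (a simplified sketch; the Post type and fields are illustrative, not my actual components):

import * as React from 'react';

// Illustrative type; my real post data has a different shape
type Post = { title: string; publishedAt: Date; body: string };

function PostSummary({ post }: { post: Post }) {
    // Rendering logic lives here, right next to the HTML and far from data retrieval
    const prettyDate = post.publishedAt.toLocaleDateString();

    return (
        <article>
            <h2>{post.title}</h2>
            <time>{prettyDate}</time>
        </article>
    );
}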

Second, I am using front end React on Dungeon Dashboard, so I kept it here for simplicity’s sake.

CSS

V4 used Bulma.io for styling, but I found it far too limiting. For Dungeon Dashboard I moved to using Tailwind & Daisy. Tailwind is a class-based CSS system that stays far closer to raw CSS than most frameworks. It has been particularly impressive for a handful of reasons:

  • In limiting my choices, it improves my consistency. e.g. font-size uses xs through 9xl, each of which maps to a specific rem value. I don’t come up with rems on a whim.
  • It groups CSS in an opinionated way that improves my design. e.g. each font-size utility sets both font-size and line-height for better readability.
  • Tailwind makes long, gross class lists. This is often seen as a negative, but in the world of React it encourages you to rethink your component organization for the better. I am now more encouraged to think about which parts of my site are reusable, and which should be split into their own component.
  • Just as I like my rendering logic closer to my templates, having my CSS closer to my HTML has reduced interruptions, as I no longer have to jump between many files at once for minor tweaks.
  • Responsive design is a joy in Tailwind. It’s trivial to indicate that certain styles only apply in certain sizes.
  • Tailwind lets me break out of its restrictions any time I need without losing access to its responsive design patterns. It really lets me move faster.
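
For example (the classes are real Tailwind, the markup is illustrative):

{/* text-2xl by default, text-4xl at the md breakpoint and up */}
<h1 className="text-2xl md:text-4xl font-bold">Dashron</h1>

{/* an arbitrary value as the escape hatch, still composable with breakpoints */}
<div className="mt-[13px] md:mt-8">Neon content</div>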

On top of this I use Daisy, a CSS component library built specifically to work with Tailwind. Daisy lets me easily pull in well-designed UI components without depending on huge React component libraries. I just throw in the proper classes and everything looks great.
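
For example, a styled button is just a couple of classes (btn and btn-primary are real Daisy classes):

<button className="btn btn-primary">Say hello</button>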

Overall, I’m very happy with the new setup. Please reach out if you have any questions or interest in the work I’m doing!

Posted on 10/25/2023

Creating a better Zoom background with the help of GitHub Copilot

My office has a big, blank wall. It’s the boring background of all my Zoom calls.

A big ol’ blank wall

This wall is in dire need of… something. My first thought was my Buster Keaton poster:

A poster of Buster Keaton at a film editing table

But that’s too static. I want to change the background frequently, and my basement can’t handle the number of posters I would have to buy. So I bought a projector to show anything I want. Here’s what I’ve got so far:

Frames from The Night of the Hunter projected onto the wall

Here’s how I got there, but if you’re more interested in coding with GitHub Copilot you can jump ahead to the code.

The Plan

I love movies, so I want to turn the blank wall into a silver screen. I want the ballet from Singing in the Rain, the safe-cracking from Thief, the graffiti from Candyman. Moments from Lone Wolf and Cub, Rear Window, Raiders of the Lost Ark, Deep Red and so much more.

While considering my options, I remembered the Very Slow Movie Player (VSMP). The VSMP is a small e-ink screen that plays movies at super slow speed. What if I did that, but projected the results onto my blank wall? I could have a much more dynamic background and, when bored, explore the details of these scenes one frame at a time. Besides, who wouldn’t want to watch Gene Kelly dancing on loop for an entire meeting?

So I bought a projector, specifically a cheap, refurbished Epson Brightlink 585wi. I can connect directly via HDMI, so I’ll host this on a Raspberry Pi and connect the two.

The Content

An easy way to achieve frame-by-frame playback is to extract frames from the video as images. Often people use VLC or MakeMKV to extract the video from a Blu-ray, and FFmpeg to convert the video into images. FFmpeg is a remarkable tool that should always be your first stop when working with video. I used the following command:

./ffmpeg.exe -ss 00:00:53 -i ./hunter.mp4 -vf fps=1/5 ./images/hunter%d.png

Here are more details on each parameter:

  • ss: This parameter indicates where to start reading the video. For Night of the Hunter I started 53 seconds in to skip past the black screen and intro credits.
  • i: The input file.
  • vf: This is a “filtergraph” or a series of filters that modify the video. It’s incredibly powerful but I’m only using it to reduce the frame rate. I only want one frame every 5 seconds, hence 1/5th of a frame per second.
  • ./images/hunter%d.png: This is a filename template. FFmpeg will create the images using this pattern, replacing %d with an ever-growing number for each image created.

The Code

I’ve got all my images and a way to get them on the wall. Now I need a way to show the images and cycle through each frame. Being a web developer, I built a simple website and web server to manage everything.

For this project I wanted to work fast and save time, so I turned to GitHub Copilot. Copilot has an interesting feature where you write a comment describing how your code should work and it will write the code. After using it for a while, I think it’s witchcraft. Let me show you.

I started with basic express boilerplate (note that this is all in TypeScript).

import * as express from 'express';
import * as fs from 'fs';
const app = express();
const port = 8080;
app.listen(port, () => {
    console.log('server started');
});

Next I built the landing page. This page will show all the possible scenes I can play, stored on the server as folders of images. To build this page I added the following comment between the port definition and app.listen

/**
 * This endpoint should scan the list of folders in the images directory and list them as html links
 */

…and GitHub Copilot wrote the following for me automatically:

app.get("/", (req, res) => {
    fs.readdir("./images", (err, files) => {
        if (err) {
            res.send("Error reading directory");
        } else {
            // build a list of links
            let links = "";
            for (let i = 0; i < files.length; i++) {
                links += `<a href="/${files[i]}">${files[i]}</a><br>`;
            }
            // send the list of links
            res.send(links);
        }
    });
});

Witchcraft.

I didn’t have to look up how readdir worked; it had error handling, it correctly guessed the directory ./images, and it set up links anticipating the next endpoint I needed to build. That’s so much time saved.

Then I pushed Copilot a little harder. I added another comment to create a new page:

/*
 * This endpoint should return an html page containing links to all the images in the requested folder
 */

And GitHub Copilot created the following:

app.get("/:folder", (req, res) => {
    fs.readdir(`./images/${req.params.folder}`, (err, files) => {
        if (err) {
            res.send("Error reading file");
        } else {
            // build a list of links
            let links = "";
            for (let i = 0; i < files.length; i++) {
                links += `<img src="/${req.params.folder}/${encodeURIComponent(files[i])}" /><br>`;
            }
            // send the list of links
            res.send(links);
        }
    });
});

The URL is right, the folder structure is right, and the output is right. Now, there may be a security vulnerability in using req.params.folder directly in readdir. That scares me in a different way than the cauldron-based magic that wrote this code. Security seems to be a common concern with Copilot; the programmer still needs to be on guard.
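
For example, one way to guard that handler by hand (a sketch I would add myself, not Copilot output; path is Node’s built-in module, imported at the top of the file):

// Resolve the requested folder and refuse anything that escapes ./images
const imagesRoot = path.resolve("./images");
const target = path.resolve(imagesRoot, req.params.folder);

if (!target.startsWith(imagesRoot + path.sep)) {
    res.status(404).send("Not found");
    return;
}
// target is now safe to pass to fs.readdir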

Lastly, I wrote:

/*
 * This endpoint should return the requested image 
 */

And sure enough, it created an endpoint that will load the image and serve it to the client when accessed.
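
It looked something along these lines (my sketch of a typical Express version, not Copilot’s exact output):

app.get("/:folder/:image", (req, res) => {
    // The root option keeps the lookup contained within ./images
    res.sendFile(`${req.params.folder}/${req.params.image}`, { root: "./images" });
});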

Now I can see all the images associated with this project, but they’re all shown at once. I need to cycle through them one at a time. I was sure this was going to be too much for GitHub, but I tried it anyway:

/* 
 * Make all images on the page invisible, then show the first image and cycle through the images every 5 seconds
 */

And GitHub gave me:

<script>
        document.querySelectorAll("img").forEach(img => {
            img.style.visibility = "hidden";
        });
        document.querySelector("img").style.visibility = "visible";
        setInterval(() => {
            let current = document.querySelector("img");
            let next = current.nextElementSibling;
            if (next == null) {
                next = current.parentElement.firstElementChild;
            }
            current.style.visibility = "hidden";
            next.style.visibility = "visible";
        }, 5000);
</script>

And I was proven wrong. Copilot was definitely up to the task and wrote a viable solution before I even tried to think of the right approach.

The Clean Up

With the latest code the website was flashing between images. It was very jarring, so now I fade the scene from image to image. I didn’t bother with Copilot here, and in the end it required a bunch of trial and error. Here’s my final process (sketched in code after the list):

  1. Start with one image on the page, absolutely positioned.
  2. When you want to fade in the new image, add it to the page absolutely positioned with a CSS fade in animation.
  3. When the animation is complete, update the first image to show the contents of the new image, and remove the new image.
  4. After a set amount of time, repeat the process starting at step 2.
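
In code, one cycle of that process looks roughly like this (a simplified sketch; fade-in is a CSS animation class defined elsewhere, and nextFrameSrc() stands in for whatever picks the next frame):

function showNext(baseImg, nextSrc) {
    // Step 2: overlay the new image, absolutely positioned, with a CSS fade-in animation
    const overlay = document.createElement("img");
    overlay.src = nextSrc;
    overlay.className = "fade-in";
    baseImg.parentElement.appendChild(overlay);

    overlay.addEventListener("animationend", () => {
        // Step 3: the base image takes over the new contents, and the overlay is removed
        baseImg.src = nextSrc;
        overlay.remove();
        // Step 4: after a set amount of time, repeat with the next frame
        setTimeout(() => showNext(baseImg, nextFrameSrc()), 5000);
    });
}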

I made other changes along the way to improve the project, and you can see the code here: https://github.com/Dashron/projector-art.

The Final Results

Frames from The Night of the Hunter projected onto the wall

In the end GitHub Copilot didn’t write everything for me, but it still saved me a lot of time. I think I’m going to put that extra time to good use, and watch a bunch of movies about witches.

Check out the code here

This article is better than it was thanks to Eric, my favorite casual wordsmith.

Posted on 2/8/2022

Dashron.com V4

Late at night on December 16th I launched the new version of Dashron.com. This new version is a full redesign and a rewrite. The site was in desperate need of an update.

Code Organization

The previous version of dashron.com used roads-starter to help with code organization. roads-starter was a library that offered objects to help organize code and reduce duplication. roads-starter wasn’t cutting it for my larger side projects, so I rewrote it to use code generation. I believe the new technique will be more maintainable.

CSS Framework

I am not a designer, and have not spent time improving those skills. For the redesign I wanted to build on the experience of stronger designers, so I selected the Bulma CSS framework. I’ve been happy with Bulma and will continue using it for future projects.

Authoring

The old site never had a good authoring flow. Behind the scenes I had one big text box that accepted markdown. This was limiting, not very inviting, and thus a barrier to creating content. The new site has no API or database. It consumes the WordPress API which allows me to author everything using the excellent WordPress editor.
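
The integration is roughly this simple (a minimal sketch, assuming the standard WordPress REST routes; the domain is a placeholder):

// Inside an async function: fetch published posts from the WordPress REST API
const response = await fetch('https://myblog.example.com/wp-json/wp/v2/posts');
const posts = await response.json();

for (const post of posts) {
    console.log(post.title.rendered, post.date);
}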

You’ll also notice a new top navigation, listing the topics I invest my time in. I will be writing about all of these topics, and am excited to get back into creating content.

Posted on 12/18/2021

Common Hypermedia Patterns with JSON Hyper-Schema

I did a deep dive into JSON Hyper-Schema, and wrote a guide to help others learn the specification without having to read the specification.

In this third and final part I build upon the previous articles and explain how JSON Hyper-Schema works with common hypermedia patterns.

Read part three on the APIs you won’t hate blog

Posted on 8/22/2018

Roads API at REST Fest

In this talk I gave a brief overview of my latest project, Roads API. This framework attempts to simplify many aspects of the API development process.

Or watch on Vimeo

Posted on 4/21/2018

Getting started with JSON Hyper-Schema: Part 2

I did a deep dive into JSON Hyper-Schema, and wrote a guide to help others learn the specification without having to read the specification.

In part two I build upon the foundation of part one with the addition of resource representations, arbitrary request bodies, HTTP headers, and HTTP methods.

Read part two on the APIs you won’t hate blog

Posted on 4/3/2018

Getting started with JSON Hyper-Schema

I did a deep dive into JSON Hyper-Schema, and wrote a guide to help others learn the specification without having to read the specification.

In part one I describe the basics: why it’s useful, JSON Schema, and the foundations of JSON Hyper-Schema.

Read part one on the APIs you won’t hate blog

Posted on 12/21/2017

Encouraging public API consumers to use Hypermedia

I’ve had a lot of great discussions at API Strat this year and it has inspired me to get back into writing. In the near future I’m going to convert my API World talk into a blog post and discuss some of my upcoming projects. For now, I want to lay out my thoughts on encouraging public API consumers to use hypermedia. To do that, I need to walk through three quick questions.

1. What is the goal of hypermedia?

There are many different reasons to use hypermedia, but I think the most immediate goal is to reduce an API consumer’s dependence on hard-coded API actions. By allowing your API consumer to find these actions in an API response, the server can manipulate the availability and implementation of these actions on the fly.

2. How do developers discover new features on your developer portal?

Do they come to a developer site looking for the “/me/likes/{video_id}” endpoint or are they looking for “how to like a video”?

I see a lot of reference docs falling into this pattern, using a URL path as the title of a segment of documentation:

PUT /me/likes/{video_id}

This endpoint likes a video
- video_id: int, the id of a video

I think the documentation can be quickly improved with some organization and copy improvements.

Like a video

PUT /me/likes/{video_id}
 - video_id: int, the id of a video

New users don’t know if your API is GraphQL, REST, gRPC, or so on. New users have a problem they need to solve, and the documentation explains how to solve that problem. We shouldn’t assume they have any knowledge of your URLs.

(Quick shout out to Keen.io, which does this well: https://keen.io/docs/api/?shell#get-project)

3. If an API feature is not documented, does it exist?

I would argue no. A new user finds a feature by searching through documentation; they are not randomly testing URLs. If someone does find a hidden URL, I don’t believe they should have any expectations of reliability or stability. If you want to treat these random-url-finders with a little more respect, you can consider stricter access control protections such as URL signatures.

The Theory

Let’s recap:

  • Hypermedia is about encouraging users to use in-response actions instead of hard-coded URLs
  • New developers learn about an API by researching the problem they are trying to solve and the actions they might take to solve that problem. These users do not go to your documentation and start looking for a specific URL
  • Undocumented features don’t exist

If we believe all of these ideas, let’s stop documenting any URL associated with a hypermedia control. Let’s treat those URLs like they don’t exist. Following this theory, what do our docs look like?

Like a video

1. First you need a Video representation. You can retrieve this representation from any of our video endpoints, e.g. “My uploaded videos”, “Get all of a channel’s videos”, “Video Search” and so on.

2. Once you have the video representation, look in the JSON for the key "like". This key will contain either an object or null.

    a. If the value of "like" is null, you are not able to like this video, and you should not let your users attempt it.
        {...
            "like": null
        ...}

    b. If the value of "like" is an object, that object is a hypermedia control. A like hypermedia control describes how you can make an HTTP request to like this video.
        {...
            "like": {"uri":"/me/likes/12345"}
        ...}

If you want to learn more about how to use hypermedia controls, read our guide here!
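
To make that concrete, a consumer following those docs might write something like this (a sketch; the uri and PUT method come from the examples above):

// Inside an async function
const video = await fetch("/videos/12345").then(res => res.json());

if (video.like === null) {
    // Liking is unavailable; hide or disable the like button
} else {
    // The control tells us where to send the request; nothing is hard-coded
    await fetch(video.like.uri, { method: "PUT" });
}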

Or if we want the content to be a little lighter and focused on a quick reference it might look like this…

Like a video

The hypermedia control to like a video is located in the video representation under the key "like".

Read more about video representations here.
Read more about hypermedia controls here.

Real world examples

Vimeo has one (accidental) example of success with this strategy. Video uploads are documented here.

TL;DR:

  1. POST /me/videos, grab the upload_link and complete_uri from the response
  2. Send your video data to the upload_link
  3. When the upload is complete, make a DELETE request to the complete_uri

By leaving the URLs associated with the upload_link and complete_uri completely undefined, we have been able to encourage this link-driven workflow for all API consumers.

Has anyone attempted this? I would love to hear any thoughts or experiences in the comments or on Twitter.

Posted on 11/3/2017

September Conference Recap

Hey everyone,

In September I attended REST Fest and API World. Here are a couple of presentations I enjoyed, and links to my own talks.

REST Fest

Adam Kliment showed a cool little project called “Restful JSON”. Restful JSON is a formalized naming convention for URLs in JSON. I’m a big fan of standardizing small, composable pieces of JSON. Maybe next we can standardize forms or pagination.

Joshua T. Kalis discussed LuxUI, which tackles the idea of generating a web UI from hypermedia controls. I love seeing more people explore this idea. Django and Rails have automatic administration tools for their frameworks; maybe this can bring us closer to a more generic API solution.

My talk this year discussed two specific API design strategies Vimeo uses for video privacy.

REST Fest is always full of great talks. I recommend digging into all of this year’s videos.

API World

Mike Stowe presented a project called REST API multiple-request chaining. This project aims to standardize a JSON format that combines many HTTP requests into a single request. We experimented with something similar at Vimeo to speed up web pages and mobile apps, but this project is a much more mature solution. I’m going to watch it evolve, and learn from his findings.

I was unable to attend Dan Schafer’s presentation on GraphQL, but we spoke later in the day. It was great to cut through the hype and hear architectural opinions right from the source. Our discussion reinforced my belief that GraphQL is a great solution to some very specific problems, but the hype is encouraging misuse.

API World did not record the presentations this year, but you can watch a previous recording of Dan’s presentation here.

I also spoke at API World about developer experience, and how to manage your API as your company changes. My slides are available here.

API Strat

I will also be at API Strategy and Practice this week. Track me down and say hi, I want to know how you are using and building APIs.

If you are not attending this year, reach out to me on Twitter, I’m always happy to talk.

Posted on 11/1/2017

API World

At API World I spoke about managing change in APIs. You can see part of the talk below, see my slides here, or learn more about the conference here.

Posted on 9/27/2017

How Vimeo’s API Handles Complex Video Privacy

Or watch on Vimeo

Posted on 9/16/2017

When Hypermedia Saves the Day

Or watch on Vimeo

Posted on 9/17/2016

Designing APIs that change

Your API is a long-term promise.

I promise that my API actions, and how they behave, will work as described in the documentation.

This promise is hard to uphold. Time isn’t explicitly defined in that agreement. The API creator may think they can change their API whenever it’s necessary. The API consumer probably thinks the API will remain the same forever.

These two assumptions directly oppose each other.

APIs that want to iterate and improve their product could email all of their consumers, asking them to update their integrations. But what if those companies don’t have full-time software engineers on staff? What if they have a 6-month turnaround on their engineering process? What if they don’t have anyone listening to the email address attached to their API account?

APIs that want to create a perfectly stable platform could write their API once, and never make any changes. But what if you need to deprecate a feature completely? What if your website grows and evolves, leaving your API as a confusing, inconsistent facet of your product? What if you notice a glaring oversight in your original design?

In the real world, it’s more complex. I’ve been working on a frequently changing API for many years now, and would like to share some of the techniques I’ve learned to achieve long-term stability for your API.

  1. Understand the limitations of your API consumers.
  2. Design APIs with change in mind.
  3. Design and develop APIs with your support team in mind.
  4. Be creative and retain backwards compatibility.

I’m going to expand on all of these topics in future posts. Follow me, or check back later for more updates!

Posted on 8/6/2016

Coroutines

Let’s get started

If you are working with JavaScript, there’s a good chance that you have a ton of promises or callbacks nested over and over again. Promises helped me clean up the numerous callbacks, but coroutines really took it to the next level. Coroutines allow you to remove callbacks entirely, and write asynchronous code that looks completely synchronous. In a couple of quick steps, I’ll show you how to simplify your promise-based code by converting to coroutines.

Note: This article briefly talks about generators. If you would like a more thorough description, check out my article on generators!

Let’s start with an example

Here’s an example using only promises. I’ve made it a little complex to really show off how powerful coroutines are. Throughout the rest of this post I’ll walk you through the conversion process.

Note: The request method performs an HTTP GET request on a URL, and returns a Promise.

Example 1

function GET() {
    // Make an HTTP GET request to http://www.dashron.com
    return request('http://www.dashron.com')
        .then(function (json) {
            // parse the response
            json = JSON.parse(json);
            // Request a couple more web pages in response to the first request
            return Promise.all([
                request('http://www.dashron.com/' + json.urls[0]),
                request('http://www.dashron.com/' + json.urls[1])
            ])
            .then(function (pages) {
                // Build the response object
                return {
                    main: json,
                    one: pages[0],
                    two: pages[1]
                };
            });
        })
        .catch(function (error) {
            // Handle errors
            console.log(error);
        });
}

// When the GET request is complete, log the response, which is a combination of all responses
GET().then(function (response) {
    console.log(response);
});

Create a coroutine

First you need to create a coroutine. I’ve written a library (roads-coroutine) that helps you build coroutines. This library exposes a function, which takes a generator function as its only parameter and returns a coroutine.

Example 2

var coroutine = require('roads-coroutine');

var GET = coroutine(function* GET() {
    // ... Removed for brevity
});

GET().then(function (response) {
    console.log(response);
});

Find your promises, and add yield statements

Find all your promises, and throw yield directly in front (without removing anything!). yield is a keyword that can only be used in generators. In the below example, it was added before request and Promise.all. One caveat: the inner yield sits inside a normal callback function, so this intermediate version won’t actually run yet; yield is only valid directly inside the generator, and the next step removes those callbacks. When everything is done, yield acts like an asynchronous equals sign. It will wait until the promise is resolved, and pass the result to the left. If the promise is rejected, yield will throw the appropriate error.

Example 3

var coroutine = require('roads-coroutine');

var GET = coroutine(function* GET() {
    // Make an HTTP GET request to http://www.dashron.com
    return yield request('http://www.dashron.com')
    .then(function (json) {
        // parse the response
        json = JSON.parse(json);
        // Request a couple more web pages in response to the first request
        return yield Promise.all([
            request('http://www.dashron.com/' + json.urls[0]),
            request('http://www.dashron.com/' + json.urls[1])
        ])
        .then(function (pages) {
            // Build the response object
            return {
                main: json,
                one: pages[0],
                two: pages[1]
            };
        });
    })
    .catch(function (error) {
        // Handle errors
        console.log(error);
    });
});

// When the GET request is complete, log the response, which is a combination of all responses
GET().then(function (response) {
    console.log(response);
});

Kill your promise handlers

Now that yield handles your promise functions for you (by returning the result, and throwing the rejection), you can just use normal variables and try/catch. In Example 4 I remove all then and catch statements, and replace them with variables and a try/catch.

Example 4

var GET = coroutine(function* GET() {
    try {
        // Make an HTTP GET request to http://www.dashron.com
        var json = yield request('http://www.dashron.com');

        // parse the response
        json = JSON.parse(json);
        // Request a couple more web pages in response to the first request
        var pages = yield Promise.all([
            request('http://www.dashron.com/' + json.urls[0]),
            request('http://www.dashron.com/' + json.urls[1])
        ]);

        return {
            main: json,
            one: pages[0],
            two: pages[1]
        };
    } catch (error) {
        // Handle errors
        console.log(error);
    }
});

// When the GET request is complete, log the response, which is a combination of all responses
GET().then(function (response) {
    console.log(response);
});

Notice all nesting is gone. Instead of making more requests in the then of the first promise, yield will handle the waiting for you. The above code looks synchronous, and is much easier to read.

Aesthetics aside, this solves one major headache with promises. With promises, if you ever forget a catch, your error will be ignored, and lost forever (or caught by the hard-to-manage unhandledRejection handler). With coroutines, your exceptions will be thrown as expected, and can be processed as you see fit.

Let’s make it official

An upcoming version of ECMAScript adds two new keywords to support this feature natively: async functions instead of generators, and the await keyword instead of yield.

async function GET() {
    console.log(await request("http://dashron.com"));
}

There are some minor differences which make this system better (such as order of operations), but we have a while before it’s available for use. In the meantime, keep using roads-coroutine!

Posted on 5/17/2016

Generators

Generators were introduced in ES6, and are available on these platforms. While I have not used generators in the browser yet, I use them heavily in server-side io.js.

Why should I care?

In my opinion, the number one reason to use generators is to clean up asynchronous code. Generators can also be used to create array-like objects, but their interactions with promises are incredibly powerful. This article will explain generators; a future article will explain how they apply to cleaning up asynchronous code. For now, I want to take you through the unique ways in which generators differ from normal functions.

Overview

First things first, here is a quick overview of how generators work. Some of this might not make sense yet, so take a quick glance and then read the full tutorial below.

Generator

Example 1

function* doStuff(value) {
    var foo = yield value;
    return foo;
}

Some notes about the generator:

  • Must be declared as function* (with an asterisk).
  • Should contain one or more yield statements.
  • Returns an iterator, not the function’s return value.
  • Starts in a paused state until you call the next() method of the iterator.
  • yield will also pause execution of the generator until the iterator allows it to continue via the next() method.
  • To pass data out of the generator, you must yield or return your value. This value will be part of the object returned by the iterator’s next method.

Iterator

Example 2

var iterator = doStuff("banana");
var result = iterator.next();
while (!result.done) {
    result = iterator.next();
}

Some notes about the iterator returned by a generator:

  • The most important method is next(), which will resume execution of the generator until it hits the next yield statement, or the function has completed its execution.
  • To pass data into your generator, provide it as a parameter to next() on the iterator. It will be the return value of a yield statement. This is optional.
  • Each call to next() returns an object with two properties, value and done.
  • value contains the current value of the iterator. In this case the yielded value.
  • done will be true if the function has completed execution.
  • If you want yield to throw an exception instead of returning a value, your iterator can use the throw() method.
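
Here’s a quick look at throw() in action:

function* doStuff() {
    try {
        yield "ready";
    } catch (e) {
        console.log("caught inside the generator: " + e.message);
    }
}

var it = doStuff();
it.next();                     // run to the first yield and pause
it.throw(new Error("whoops")); // the paused yield throws this error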

Ok, tell me more

Generators are different from normal functions in four ways:

  • Generators must contain an asterisk (*) next to the function keyword (e.g. function* doStuff()). This defines the function as a generator, instead of a normal function.
  • Generators can contain yield statements. (e.g. var x = yield foo();).
  • Generators do not return your return value, they return an iterator.
  • Generators are not executed at the time they are invoked.

Yield

Before we go into why or how we use a yield statement, let’s just talk about the syntax. The following example is a fairly basic line of code. We will compare that line to one with a yield statement.

Example 3

result = encodeURIComponent("http://www.dashron.com");

As you are probably aware, the above code is executed in two easy steps:

  1. The assignment operator (=) requires a value on the right, so encodeURIComponent is called with a parameter.
  2. The assignment operator then puts the return value of encodeURIComponent into the variable, result.

So, what happens if you add a yield statement?

Example 4

result = yield encodeURIComponent("http://www.dashron.com");

At this level, yield acts a bit like an assignment operator.

  1. The assignment operator requires a value on the right, so we have to process the statement yield encodeURIComponent("http://www.dashron.com").
  2. The yield statement also requires a value on the right, so encodeURIComponent("http://www.dashron.com") is executed with the string parameter.
  3. yield takes the return value of encodeURIComponent(), performs a little bit of magic (more on this later), and passes a value to the assignment operator.
  4. The assignment operator then puts the return value of the yield statement into the variable, result.

Note: Unlike the assignment operator, yield does not need a variable to its left. Like a function, you can use parentheses to interact with the return value in place. For example, the following is valid:

Example 5

result = (yield encodeURIComponent("http://www.dashron.com")).length;

So what can yield do? A lot actually. It’s a little complicated, so let’s go over it step by step.

Now things get a little weird

yield pauses your function, and allows you to resume execution at any time. I want to get that out of the way first, because it’s not something you see outside of generators. In fact, you don’t even need to use yield, generators always start out paused. To see how this works, let’s check out a generator example without any yield statements:

Example 6

function* doStuff() {
    return "Noses on dowels";
}

var result = doStuff();
var nextResult = result.next();

In Example 6, result does NOT equal "Noses on dowels". result contains an iterator. This object is the “remote control” of your generator. Its most important method is next(). Every time you call next() on your iterator, the function will execute up until: (1) it encounters a yield statement; or (2) the function has finished execution. Here, result contains your iterator, and nextResult contains information about the current iteration.

Now let’s add a couple of yield statements into the mix:

Example 7

function* doStuff() {
    var catchphrase = yield "Didja get that thing I sent you";
    var finalphrase = yield catchphrase;
    return finalphrase;
}

var result = doStuff();
var nextResult = result.next();
var secondResult = result.next("Blackwatch Plaid");
var finalResult = result.next("Happy Cake Oven");

Each time you call next(), it executes part of the doStuff() function. Let’s break down Example 7 into each call to next().

The first call to next()

Any time you call next() it behaves identically, except for the first and last time. Let’s walk through each next() call in order, starting with var nextResult = result.next();. This call will execute the code shown in example 7.1.

Example 7.1

yield "Didja get that thing I sent you";

Notice that the code to the left of the yield statement (var catchphrase =) is not shown in Example 7.1, because it is not executed at this time. That’s because the yield statement pauses execution before it can happen! You must interact with your iterator to continue to the rest of the code. So let’s review the second next() call, var secondResult = result.next("Blackwatch Plaid");. This call will execute the code shown in Example 7.2.

The standard call to next()

Example 7.2

var catchphrase = yield
yield catchphrase;

The first line of code in Example 7.2 needs to assign a value to the variable catchphrase. The assignment operator is expecting a value from the yield statement, and this value is provided by the iterator’s next() method. Example 7.2’s code is executed when you call result.next("Blackwatch Plaid"), so yield returns "Blackwatch Plaid".

Example 7.2 above is important, and worth re-reading. This is the standard behavior of an iterator’s next() method. Every time you call next(), a chunk of your generator will be executed, until there is no code left to run. Calling next() past that point won’t do anything useful, so you need to keep track of one more piece of information: the done property.

The third (and final) call to next()

Example 7.3 demonstrates the final code in this generator’s execution.

Example 7.3

var finalphrase = yield
return finalphrase;

This contains everything that is executed between the final yield and return statements. In Example 7, this code is run the third time next() is called. Calling next() a fourth time is not terribly useful; it will report that the generator is done without executing any code. To make sure you don’t call next() unnecessarily, you need to keep an eye on the return values of next(). Each time next() is called it returns an object with two properties.

  • value: This depends on the execution. If this is not the final next statement, it will contain the yielded value. If this is the final next statement, it will contain the returned value.
  • done: true if the generator has completed execution. false otherwise.

So if done is true, you should stop calling next().

Example 7 did not make use of the done property because it wasn’t necessary. done is used most commonly in more complex code, so let’s jump into our final example.

Yield with loops

Example 8

function* getTen() {
    for (var i = 0; i < 10; i++) {
        yield i;
    }
}

var gen = getTen();

Notice that the generator in Example 8 only has one visible yield statement. This does not mean that the function execution will only be paused once. Because the yield is inside a for loop, each iteration of the loop will reach the yield and pause execution. This specific function will pause execution 10 times, sending out a number each time (0 through 9).

To properly execute the generator you will need to call the next() method many times. I’m lazy, and I don’t want to copy the next() method over and over again. Instead, we can throw next() into a loop and check the return value each time. next() returns the object mentioned above (with Example 7.3), so you should watch its done property. As long as it evaluates to false, we can continue to call this iterator’s next() method.

Example 9

var progress = null;
do {
    progress = gen.next();
    console.log(progress.value);
} while(!progress.done);

And now we’re done! Your generator will be processed completely, hitting every yield statement until the function is complete. But what does this have to do with asynchronous code and callbacks? I will be writing more on that in the near future, so check back soon!

Posted on 10/3/2015

Quick notes on setting up Amazon s3 CORS headers

It took me way too long to figure out how to get S3 CORS headers working, so here are my notes.

  1. In the S3 interface, click the magnifying glass icon to the left of your bucket.
  2. Click the “Edit CORS Configuration” button. It should be right next to “Add Bucket Policy”.
  3. You should already have a CORS XML file in here; if not, mine looked like this:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
</CORSRule>
</CORSConfiguration>
  4. This CORS configuration allows all websites to perform GET requests against this resource.
  5. To reference the file, you must use the url structure [bucket].s3.amazonaws.com/[object]
  6. If using an img tag, it must contain the attribute crossorigin="anonymous". Read more here.
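
Putting notes 5 and 6 together, the tag looks like this:

<img src="https://[bucket].s3.amazonaws.com/[object]" crossorigin="anonymous">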

Check out MDN for more information about CORS headers.

Posted on 7/5/2014

API Dublin

At API Dublin I spoke about Vimeo’s upload API, and how we rebuilt it from the ground up.

Learn More

Posted on 3/31/2014

Bifocals.js

Yesterday I launched Bifocals.js, a Node.js library for handling HTTP responses.

It was my first big launch, and I learned a ton from it. Before I go into that, let’s show some numbers.

  • Peaked at #9 on Hacker News
  • 7,077 Page Views
  • 6,202 Unique Visitors
  • 56% United States
  • 70% Chrome
  • 48% Mac
  • 87% from news.ycombinator.com
  • Avg. Visit Duration: 00:00:17

That visit duration is abysmal. Clearly the docs need to be improved.

The best discussion happened on Facebook, and then Hacker News. No one initially knew what the hell my library did. So I wrote up a new description, which will be added to the docs later.


Bifocals makes it incredibly easy to split your web page up into little tiny chunks. It might not be immediately obvious why this is useful, but it becomes slightly more clear with an understanding of the javascript event queue.

Any time an HTTP request hits the server, it puts your callback into a queue. Node processes this queue in order. Every time you handle a callback for an HTTP request, a database request, or any socket or i/o in general, it uses this queue. Additionally, process.nextTick will add functions onto this queue.
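
A quick illustration of that queue:

// "first" logs before "second"; the nextTick callback waits in the queue
process.nextTick(function () {
    console.log("second");
});
console.log("first");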

If each of your callbacks operates quickly, it should improve request times across your application. Each time a callback completes, it releases control to another callback. With a mix of pages that render at different speeds, faster requests can get out of the way while longer requests are still handling i/o.

Bifocals not only allows your view functions to be smaller, it also allows them to operate out of order. All of your views can start performing i/o at the same time, and no matter which one finishes first, bifocals will render the final output accurately.

When all of these techniques are put together, theoretically it should create a faster overall page speed across the entire site. (I hope to have real stats soon)

If you don’t need these benefits, bifocals offers two additional nice features.

  • It abstracts away template rendering to a simple workflow, and allows you to use any core render system you want.
  • It provides easy access to HTTP status codes, and the strange HTTP standards associated with them.

Hopefully this is a little more clear. Thanks to everyone who had comments, I have some cool new features coming soon.

Posted on 10/12/2012