Using Vagrant and Docker to Automate Local Development Environments

A typical software project will have a team of developers, each running their own development environment. Then there will be several more boxes created for test, pre-production and production environments. Unless they’re standardized, there’s a risk these environments will fall out of sync. If an overeager ops engineer installs the latest update for MySQL, code that runs fine locally can suddenly start failing in production.

The easiest way of ensuring the same software is on every environment is to use virtual machines. On my current project we use VirtualBox to run the VM, configured with Vagrant and Docker.

This gives the developer access to a long list of services, all at exactly the same versions that run on our other environments. These include:

  • RabbitMQ
  • MariaDB
  • phpMyAdmin
  • StatsD
  • Grafana

…and many more. Although I develop on Windows, each of these services runs on the same Ubuntu Linux distro we deploy to. Each of the services is available from the host computer using port forwarding.

To achieve this we have a simple vagrantfile checked into the root directory of our source code. The vagrantfile is simply a Ruby file with information about the type of box to create and how to configure it. Getting all the software up and running is as simple as opening a command prompt, navigating to this directory and running:

vagrant up

So if a new developer joins the team, they can simply grab the latest source code and run “vagrant up”. This will spin up a new virtual machine, and upon completion each of the services will be available on its usual port. They’re ready to develop!

There are plenty of primers out there for using Vagrant, so we’ll focus on how Vagrant can use Docker to easily install and configure the software – a process called provisioning. This is made possible by a Vagrant plugin (vagrant-docker-compose) that lets us provision using Docker Compose.

The plugin could be installed manually by running the following from a command prompt:

vagrant plugin install vagrant-docker-compose

but to ensure each developer has the plugin installed, this can also be automated in the vagrantfile so it’ll be installed automatically the first time we try to “vagrant up”:

unless Vagrant.has_plugin?("vagrant-docker-compose")
  puts "Requires Vagrant plugin for docker-compose, installing..."
  system("vagrant plugin install vagrant-docker-compose")
  puts "docker-compose plugin installed, please run `vagrant up` again"
  exit
end

Then within the vagrantfile we set the location of the docker-compose configuration file – in this case “docker-compose.yml”. This file lives in the root directory alongside the vagrantfile:

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.provision :docker
  config.vm.provision :docker_compose,
    yml: "/vagrant/docker-compose.yml",
    run: "always"
end

The docker-compose.yml file contains the list of all the software to be provisioned on the VM, along with the ports to make it available on.

rabbitmq:
  image: rabbitmq:3.4.4-management
  container_name: rabbitmq
  ports:
    - "5672:5672"
    - "15672:15672"
  volumes:
    - /var/lib/rabbitmq:/var/lib/rabbitmq

Finally, the vagrantfile forwards the ports from the host machine to the VM, to make the services available:

config.vm.network "forwarded_port", guest: 5672, host: 5672      # RabbitMQ
config.vm.network "forwarded_port", guest: 15672, host: 15672    # RabbitMQ (Management Console)
config.vm.network "forwarded_port", guest: 3306, host: 3306      # MariaDB
config.vm.network "forwarded_port", guest: 13306, host: 13306    # phpMyAdmin

Branch by Abstraction

Branch by abstraction: a pattern for making large-scale changes to your application incrementally on mainline.

I was asked to do a small talk about a refactoring process called “Branch by Abstraction” at work. Not knowing anything about this technique or how it related to the work I’m currently doing, I created the following notes. Some of the description and the images are taken from the Martin Fowler explanation available here and the “remix” of that article available here

What problem are we trying to solve with this technique? The scenario is that we want to undertake a large-scale refactoring on a piece of software. However, we still want to push out regular releases during the refactoring period, so we can’t break the software whilst the refactoring is under way. At any point we want to be able to push out a new update with everything working.

One solution might be to use feature branching. In this case we’d create a new branch for the refactoring, and upon completion we’d merge the code back into the trunk. This might be a viable solution, or we might decide that we don’t want to keep changes on a long-running branch, or that the merging will be significantly painful and we’d prefer an approach that avoids it. This might well be the decision if the refactoring is so large that it takes weeks or months to complete. The arguments for and against branching are too big and painful to get into here.

An alternative is to use “Branch by Abstraction”. The name is slightly misleading as no branching is involved. Instead this technique describes an alternative to branching where all development work continues to be done on the trunk, which is kept in a stable state throughout. Changes are instead performed by introducing abstraction.

What does this mean? Imagine this is the initial state of the system. We have several “clients” making use of the flawed “supplier” code:

The first thing we’d do is to create an abstraction layer around the code we want to refactor. This might be a simple interface which calls into the flawed supplier code. The clients are then migrated over to use the new abstraction layer. For the sake of a simple example, imagine it is trivial to move all the client code behind the abstraction layer:

At the point where nothing is reliant on it, the initial flawed supplier code can be deleted, along with the abstraction layer, which has now fulfilled its purpose.

So the underlying idea is to create a single abstraction of the supplier, and then to create multiple implementations of that abstraction which can exist side-by-side. We can then migrate from the old implementation to the new implementation at our own speed, until it has been replaced completely.

Step-by-step, the process would be (copied directly from this article):

  • Create an abstraction over the part of the system that you need to change.
  • Refactor the rest of the system to use the abstraction layer.
  • Create new classes in your new implementation, and have your abstraction layer delegate to the old or the new classes as required.
  • Remove the old implementation.
  • Rinse and repeat the previous two steps, shipping your system in the meantime if desired.
  • Once the old implementation has been completely replaced, you can remove the abstraction layer if you like.
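
The steps above can be sketched in a few lines of Ruby. This is a hypothetical example (the class names are mine, not from the articles): an abstraction that clients depend on, with the old and new suppliers living side-by-side behind it.

```ruby
# Hypothetical example: the class names are illustrative, not from the article.

# The abstraction over the part of the system we need to change.
# Clients call this, never a supplier directly.
class PriceCalculator
  def initialize(implementation)
    @implementation = implementation
  end

  def price_for(quantity)
    @implementation.price_for(quantity)
  end
end

# The flawed supplier code, wrapped so it sits behind the abstraction.
class LegacyPriceSupplier
  def price_for(quantity)
    quantity * 10 # old logic we want to replace
  end
end

# The new implementation, built up incrementally behind the same abstraction.
class NewPriceSupplier
  def price_for(quantity)
    quantity * 10 # same observable behaviour, cleaner internals
  end
end

# Clients can be pointed at either implementation, so the system keeps
# working (and shipping) while the migration happens at our own speed.
calculator = PriceCalculator.new(LegacyPriceSupplier.new)
puts calculator.price_for(3) # 30

calculator = PriceCalculator.new(NewPriceSupplier.new)
puts calculator.price_for(3) # 30
```

Because both suppliers satisfy the same abstraction, swapping one for the other is a one-line change, which is what keeps the trunk stable throughout.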

Code which already follows decent SOLID principles, especially the dependency inversion and interface segregation principles, will be easier to refactor using these techniques, as the interfaces make a natural place to introduce the abstraction layer.

In practice it may be too big a step to move every client behind the abstraction layer in one go. There is nothing to stop us picking one client which only makes use of a small portion of the supplier and making the change there first. This diagram shows an example of a possible first step when refactoring a more complex case:

Although not mentioned in the Martin Fowler article, it’s possible to also use feature toggles to switch between the two implementations, or to slowly roll out the new supplier to individual accounts.

What are the benefits of this approach? As mentioned, the goal is to keep the code deployable at any stage. The code “works” at all times, so only the team involved in the refactoring is affected by the change. It avoids merging. Confidence may be higher when committing micro-changes on a regular basis, as opposed to a single larger change at the end of a branch. Adding the abstraction may also help improve the modelling of the application in its own right.

Semantic Versioning with Rake

I’ve recently headed up a new project at work and we’ve switched from using a fairly arcane system of numbering our builds based on the svn build number and the phases of the moon to Semantic Versioning, or “SemVer” for short.

Full information about Semantic Versioning can be found here. It is a well established standard for dealing with the difficulties of dependencies. The basics are as follows:

A semantic version consists of three numbers, representing Major, Minor and Patch releases. These are in the format:

Major.Minor.Patch
Each number conveys some meaning to the user about what has changed in the update.

  • The Major version should be incremented whenever a breaking change is introduced.
  • The Minor version should be incremented whenever a new feature is added but backwards compatibility is maintained.
  • The Patch version should be incremented whenever a bug fix is added but backwards compatibility is maintained.

This means that when you see the version number change for a dependency which uses SemVer, you’re given some important context about what the change means for you. If the patch number changes, then you know the change is limited to bug fixes – feel free to update to the newer version without worrying about anything changing or breaking. If the minor version changes then similarly you can feel safe in updating, but you might want to check what new features have been added. If the major version changes then you can expect breaking changes – you may need to do some work to keep your code working if you update to the latest version.
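
As a rough sketch of that reasoning in Ruby (the method is mine, purely illustrative, not part of any semver tooling), you could classify what a version bump implies like this:

```ruby
# Illustrative helper (not part of the semver gem): given an old and new
# version string, report what kind of change the bump implies for a consumer.
def change_type(old_version, new_version)
  old_major, old_minor, = old_version.split(".").map(&:to_i)
  new_major, new_minor, = new_version.split(".").map(&:to_i)

  if new_major != old_major
    :breaking        # expect to do some work before updating
  elsif new_minor != old_minor
    :new_features    # safe to update; worth checking what's new
  else
    :bug_fixes       # safe to update without worrying
  end
end

puts change_type("1.4.2", "2.0.0") # breaking
puts change_type("1.4.2", "1.5.0") # new_features
puts change_type("1.4.2", "1.4.3") # bug_fixes
```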

There are tools for working with SemVer in pretty much every language. On my home machine (running Windows) I installed semver from Ruby using:

gem install semver

(By the way, to install Ruby on your Windows machine I’d recommend using Chocolatey, as it’ll also allow you to easily install the dev kit using the following packages: ruby, ruby2.devkit. The dev kit will be needed later in this article for working with the Albacore gem in Rake.)

To create a new SemVer file use:

semver init

This will create a .semver file, which can live at the root of the solution, with the following contents:

:major: 0
:minor: 0
:patch: 0
:special: ''

To increment the patch version use:

semver inc patch

To increment the minor version use:

semver inc minor

Note that incrementing the Minor version resets the Patch version back to zero.

To increment the major version use:

semver inc major

Note that incrementing the Major version resets both the Minor and Patch versions back to zero.
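
The two reset rules above can be captured in a few lines of Ruby. This is just an illustrative sketch of the semantics of `semver inc`, not the gem’s actual implementation:

```ruby
# Illustrative sketch of the increment rules (mirroring `semver inc`):
# bumping minor resets patch; bumping major resets minor and patch.
def inc(version, part)
  v = version.dup
  case part
  when :patch then v[:patch] += 1
  when :minor then v[:minor] += 1; v[:patch] = 0
  when :major then v[:major] += 1; v[:minor] = 0; v[:patch] = 0
  end
  v
end

def format_version(v)
  "#{v[:major]}.#{v[:minor]}.#{v[:patch]}"
end

version = { major: 1, minor: 2, patch: 3 }
puts format_version(inc(version, :patch)) # 1.2.4
puts format_version(inc(version, :minor)) # 1.3.0
puts format_version(inc(version, :major)) # 2.0.0
```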

This is all well and good, but how do I use the SemVer file to actually version a .NET project?

The Semantic Version will be used within our build process by Rake. Rake is a build language which can be used to define tasks that help you build, test and deploy code. These tasks are stored within a rakefile, which is simply a Ruby file that can be stored alongside your .semver file at the root of your solution. We’d previously used MsBuild as our build language, but that often proved painful as it involves working with verbose XML files, whilst Rake uses a comparatively clean domain-specific language. If you ever need to do anything complex or unusual then you have the entire Ruby language at your disposal. However, Rake isn’t just for Ruby projects – by using a gem called Albacore, Rake can work with .NET tools like NUnit, MsBuild and (in our new project’s case) Mono’s xBuild.
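
As a hedged preview of where this is heading (the helper below is hypothetical, not from the semver gem or Albacore), a rakefile could read the .semver file and turn it into the version string the build will stamp onto assemblies:

```ruby
# Hypothetical helper: read the .semver file written by `semver init`
# and turn it into a "Major.Minor.Patch" string for the build.
def read_semver(path = ".semver")
  fields = {}
  File.read(path).scan(/^:(\w+):\s*(.*)$/) { |key, value| fields[key] = value.delete("'") }
  version = "#{fields['major']}.#{fields['minor']}.#{fields['patch']}"
  fields["special"].to_s.empty? ? version : "#{version}-#{fields['special']}"
end

# A rake task could then expose it, e.g.:
#   task :version do
#     puts read_semver
#   end

# Demo with a sample .semver file:
File.write(".semver", "---\n:major: 1\n:minor: 2\n:patch: 3\n:special: ''\n")
puts read_semver # 1.2.3
```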

(To be continued with examples of wiring Rake up to use the SemVer file to automatically version a .NET project.)

Responding to RabbitMq Messages in Integration Tests

I’ve made a GitHub repository with a working example of this technique here.

At work today we wanted to write an integration test for some code that would trigger a job by publishing a message to a RabbitMQ queue. This message would be consumed by a worker program that would perform a long-running job expected to take two or three seconds to complete, with the result written to a database. After this job completed, the worker published a second event to another queue to signal that the work was complete.

In our integration test we could publish the first trigger message, but this meant the processing would take place asynchronously. This gave us the problem of figuring out when the job had completed, so we could run our assert statements to make sure the data was in the correct state. The two obvious approaches were to poll the database at regular intervals to check for the result, or to respond to the completion event. We decided the correct approach was the second, but this led to some real difficulties as there are a few things happening asynchronously.

Here’s the code we ended up using (assume that the event message published to RabbitMQ when the job completes is called JobCompleted):

// Subscribing to JobCompleted events spawns a background thread where we cannot do any
// assertions (as assertion exceptions in the background thread will not bubble to the
// foreground thread where the unit test is running). We need a signalling mechanism so
// the background thread can indicate when a message has been received, so the
// foreground (unit test) thread can go ahead and run assertions on the data...
using (var signal = new AutoResetEvent(false))
{
    // When a message is received, send a signal to the foreground thread...
    _bus.Subscribe<JobCompleted>("WorkerIntegrationTest", _ => { signal.Set(); });

    _bus.Publish(new JobCompleted
    {
        CreatedAt = DateTime.UtcNow,
        // Other message data goes here.
    });

    // Wait for a signal from the event subscriber thread; if we don't get one within
    // 30 seconds fail the test...
    if (!signal.WaitOne(TimeSpan.FromSeconds(30)))
    {
        Assert.Fail("Did not see a JobCompleted event within 30 seconds.");
    }

    using (var connection = new MySqlConnection(_connectionString))
    {
        // Load data to check here.
        Assert.That(data, Is.Not.Null, "Data not present in the database within 30 seconds.");
        // More asserts go here.
        Assert.Pass("Data present in database and appears in correct shape.");
    }
}
The AutoResetEvent’s WaitOne method makes the code wait up to 30 seconds unless it receives a signal from the Set method. The test subscribes to the JobCompleted event, and if it receives a message it calls the Set method, which sends the signal to unblock the thread. At this point we can safely proceed with the asserts on the original thread, as we know the data has been written to the database. If the 30 seconds pass without the JobCompleted message being received, then Assert.Fail is called with an appropriate timeout message.
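
The same signalling idea can be sketched in Ruby (purely illustrative, not the C# test above): a background thread stands in for the message subscriber, and the foreground thread blocks until it is signalled or a timeout expires.

```ruby
require "thread"

# A Queue acts as the signal: the background "subscriber" thread pushes a
# token when the completion event arrives, waking the foreground thread.
signal = Queue.new

subscriber = Thread.new do
  sleep 0.1 # simulate waiting for the JobCompleted message
  signal.push(:job_completed)
end

# Foreground "test" thread: wait up to 30 seconds for the signal.
received = nil
waiter = Thread.new { received = signal.pop }
if waiter.join(30)
  puts "Signal received: #{received}" # safe to run assertions now
else
  waiter.kill
  raise "Did not see a JobCompleted event within 30 seconds."
end
```

`Thread#join(timeout)` returns nil on timeout, which plays the same role as `WaitOne` returning false.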

The end result is that the tests complete as quickly as possible (faster than waiting for the next database polling cycle), and we get a bit of performance testing thrown in for free!

Understanding Database Indexes

Imagine you had a piece of paper with a list of numbers and full names on it – first name, then surname. The names aren’t in any particular order – it’s just a random bunch of full names written out on a page with a number that uniquely identifies that name. The list might start like this:

1   Matt Durrant
2   Oscar Stapleton
3   Gemma Thomas
4   Amelia Durrant
5   Matt Swarbrick

You’re told to search the list and draw a circle around every example of a certain first name. How might you approach searching the list? You’d have to go through every single name in turn. You couldn’t stop when you found the first name you cared about – you’d still have to carry on all the way through to the very last name on the list to make sure you’d found every instance of the name on there.

If the list was short (say 10 or so names) then this process might seem trivial. Scanning through the list of names takes no time or effort at all. But the longer the list of names becomes, the longer it’s going to take you to search all the way through. Once you’re into hundreds or even thousands of names, searching the list starts to become a major headache.

How about if you put the names in alphabetical order (by first name) and tried the search again? This would make the search a lot easier. You could skip straight through the names at the start to the beginning of the block of names you cared about. Then you could read through all the examples of the first name you were searching for, and you could ignore any of the remaining names in the list, as you knew you wouldn’t have to care about any of them. Much faster.

However, sometimes you also want to search the list by surname, so scrapping the original list and re-ordering it alphabetically doesn’t always help you. Instead you’re given a separate list of first names, in alphabetical order, along with the numbers that allow you to quickly find the names in the original list. This is an index. By searching the index for the first name you can easily grab the list of names from the original list.

Amelia    4
Gemma     3
Matt      1, 5
Oscar     2

The index doesn’t contain the surnames, just the first name as that’s the thing you’ll be using to search on. You’d still have to read the surname back off the original list. If you also wanted to search by surname, you could create another index, this time with the surnames in alphabetical order.
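
The paper example maps directly onto a few lines of Ruby (illustrative only): the “table” is a set of numbered rows, and the “index” is a separate structure mapping first name to row numbers.

```ruby
# The "table": row number => [first name, surname].
rows = {
  1 => ["Matt", "Durrant"],
  2 => ["Oscar", "Stapleton"],
  3 => ["Gemma", "Thomas"],
  4 => ["Amelia", "Durrant"],
  5 => ["Matt", "Swarbrick"],
}

# Build the first-name index with one pass over the table up front...
index = Hash.new { |h, k| h[k] = [] }
rows.each { |id, (first, _last)| index[first] << id }

# ...then lookups no longer need to scan every row.
p index["Matt"]                       # [1, 5]
p index["Matt"].map { |id| rows[id] } # [["Matt", "Durrant"], ["Matt", "Swarbrick"]]

# The downside: every change to the table means updating the index too.
rows[6] = ["Matt", "Bellamy"]
index["Matt"] << 6
p index["Matt"]                       # [1, 5, 6]
```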

What would happen if you then wanted to change the original list? Maybe you wanted to add a few more names at the end, or update a few of the names in the middle somewhere? Well, you’d also have to make a corresponding update to any of the indexes that were affected.

You may be starting to see some of the upsides and downsides of using an index. An index helps you search a list quickly, but it doesn’t come for free. You need some extra paper to write your index on, and you have to do a bit of work to maintain your index every time you make a change to the original list. If your list is quite short this might be more hassle than it’s worth. And if you have a list that you update all the time, you’re also going to be constantly updating the indexes on your list.

Let’s apply this to SQL and see how it works. First, let’s create a table called “Customers” that mirrors our example above:

CREATE TABLE Customers (
Id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
FirstName NVARCHAR(100) NULL,
LastName NVARCHAR(100) NULL)

This creates a table with three columns. There is an “Id” column that starts at 1 and increases by 1 for every row we add to the table; this is the primary key on the table. There is also a “FirstName” column and a “LastName” column, both of which can hold some text.

Just by creating a table with a primary key we’ve already got our first index on the table. When you create a primary key on a table (in this case for our “Id” column) it will automatically be used to create an implicit index. It makes sense that we’d want to use our primary key to look up rows quickly.

We can see that the index has been created by using the query:

SHOW INDEX FROM Customers;

Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
customers 0 PRIMARY 1 Id A 0 (NULL) (NULL) BTREE

This tells us a bit about the index that was created. The name of the index is “PRIMARY”, and as you might expect it is a unique index. This means the index will enforce uniqueness across each of its entries (which of course makes complete sense for a primary key). The Cardinality is the number of unique rows in the index; in this case, because we have zero rows in the table, it returns 0. The BTREE value refers to the way the index is actually stored. This may be interesting to you, but the way the data in the index is stored is invisible when you’re using the table, so you can definitely get away without knowing too much about it.

Next let’s run the SQL block below to insert 100 dummy names:

INSERT INTO `Customers` (`FirstName`,`LastName`) VALUES ("Evangeline","Hurley"),("Ruth","Daugherty"),("Mira","Caldwell"),("Bell","Wolf"),("Leandra","Parks"),("Ian","Lane"),("Hannah","Zamora"),("Todd","Clemons"),("Brandon","Hoover"),("Rhonda","Bolton");
INSERT INTO `Customers` (`FirstName`,`LastName`) VALUES ("Sawyer","Wells"),("Dean","Hewitt"),("Imelda","Gonzalez"),("Tobias","Mccarty"),("Debra","Aguilar"),("Hayfa","Cabrera"),("Ori","Raymond"),("Karina","Hebert"),("Lael","Hall"),("Maggie","Mccarty");
INSERT INTO `Customers` (`FirstName`,`LastName`) VALUES ("Alfreda","Matthews"),("Ezekiel","Graham"),("Aaron","Downs"),("Odessa","Pacheco"),("Quincy","Humphrey"),("Marny","Wong"),("Sylvia","Pitts"),("Stuart","Davenport"),("Norman","Burton"),("Harriet","Cobb");
INSERT INTO `Customers` (`FirstName`,`LastName`) VALUES ("Jacqueline","Knowles"),("Isadora","Delgado"),("Britanney","Wiley"),("Raya","Rodriquez"),("Chava","Wilkerson"),("Barclay","Clay"),("Emmanuel","Stark"),("Blaze","Townsend"),("Georgia","Powell"),("Avram","Hunter");
INSERT INTO `Customers` (`FirstName`,`LastName`) VALUES ("Chava","Mcfadden"),("Jessamine","Cox"),("Bert","Sims"),("Idola","Chambers"),("Colton","Golden"),("Kyle","Boyle"),("Aaron","Carver"),("Deirdre","Parrish"),("Blaine","Blackburn"),("Carla","Olson");
INSERT INTO `Customers` (`FirstName`,`LastName`) VALUES ("Axel","Pace"),("Ramona","Reynolds"),("Cullen","Whitaker"),("Kendall","Hatfield"),("Rhoda","Davis"),("Gretchen","Mason"),("Omar","Reese"),("Wang","Rasmussen"),("Mohammad","Foster"),("Delilah","May");
INSERT INTO `Customers` (`FirstName`,`LastName`) VALUES ("Ramona","Harrell"),("Sebastian","Dennis"),("Sade","Livingston"),("Destiny","Fernandez"),("Genevieve","Lewis"),("Brennan","Kelly"),("Jesse","Frederick"),("Dane","Beard"),("Rowan","Quinn"),("Shellie","Lambert");
INSERT INTO `Customers` (`FirstName`,`LastName`) VALUES ("Julie","Norris"),("Emmanuel","Franco"),("Dalton","Tanner"),("Herman","England"),("Justina","Mcdowell"),("Gretchen","Vinson"),("Quentin","Payne"),("Demetria","Hopkins"),("Ria","Wynn"),("Bethany","Carey");
INSERT INTO `Customers` (`FirstName`,`LastName`) VALUES ("Valentine","Weeks"),("Leigh","Hutchinson"),("Dylan","Burton"),("Adrian","Garrett"),("Alec","Haley"),("Raymond","Gay"),("Theodore","Myers"),("Colton","Wiggins"),("Ivy","Bauer"),("Mariko","Paul");
INSERT INTO `Customers` (`FirstName`,`LastName`) VALUES ("Janna","Shannon"),("Ella","Yates"),("Illiana","Hays"),("Cade","Weiss"),("Elizabeth","Foster"),("Luke","Clemons"),("Serina","Cote"),("Theodore","Lawrence"),("Gray","Boyer"),("Britanni","Cook");

(By the way, I’ve created this list using a site called GenerateData which is a useful tool for easily creating dummy data like this).

Note that if you check the indexes again on the table, the cardinality for the PRIMARY index is now 100 – as there are 100 unique ID values in the table.

Now let’s recreate our first example of having to search the piece of paper (now our Customers table) for a certain first name. We can write a simple SELECT statement to do this, but that won’t tell us how the search was performed. To do that we need to add the EXPLAIN keyword to the start of the query. This will tell us exactly how much work was done in the search:

EXPLAIN SELECT *
FROM Customers
WHERE FirstName = 'Ramona'

id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE Customers ALL (NULL) (NULL) (NULL) (NULL) 100 Using where

As expected, this reveals that our SELECT query performed a search of all 100 rows in the table. We can see from the type value of ALL that a full table scan was done. Every row in the Customers table had to be searched to make sure all the Ramonas had been found.

Contrast this with the same query against our primary key:

EXPLAIN SELECT *
FROM Customers
WHERE Id = '30'

id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE Customers const PRIMARY PRIMARY 4 const 1

Only a single row was read to perform this search. The possible_keys column shows the list of indexes the search could have made use of, and the key column shows the actual index it chose (in this case there is only a single index to choose from – PRIMARY).

(If you’re interested in reading the other outputs of EXPLAIN, this article may be useful).

Now let’s add a new index for our FirstName column. I’ve called it FirstName_IDX. The syntax for this is simple enough:

CREATE INDEX FirstName_IDX
ON Customers (FirstName)

This will create a non-unique index – there is no restriction on only having one of each first name, which is exactly what we want in this case. You could try to create a unique index (using the syntax CREATE UNIQUE INDEX), but this would fail due to there being duplicate first names already in the database.

Let’s retry our search for Ramona and see what’s changed:

EXPLAIN SELECT *
FROM Customers
WHERE FirstName = 'Ramona'

id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE Customers ref FirstName_IDX FirstName_IDX 303 const 2 Using index condition

Now we only search the two rows that have Ramona as a first name. Much better. We can see that our new index FirstName_IDX is being used as expected.

What else could we use the index for? Could we use it to speed up searches for a first name that starts with a certain letter? What about first names that end with a certain letter? From what you now know about indexes you can probably guess that an index will help us with the first search, but not the second. And we know that we can use EXPLAIN to prove this:

EXPLAIN SELECT *
FROM Customers
WHERE FirstName LIKE 'R%'

id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE Customers range FirstName_IDX FirstName_IDX 303 (NULL) 9 Using index condition

EXPLAIN SELECT *
FROM Customers
WHERE FirstName LIKE '%R'

id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE Customers ALL (NULL) (NULL) (NULL) (NULL) 100 Using where

You can also create indexes that use multiple columns.

Hopefully this article has given you a decent overview of what problem an index helps solve, alongside some of the pros and cons of using them. The database is commonly a bottleneck in web development. Use EXPLAIN to look for places in slow-running queries where full table scans are being undertaken, and consider whether an index would drastically cut down the number of rows that need to be searched. On larger tables this can lead to massive gains in the speed of your queries. Adding indexes everywhere is not a great idea, however, as they do come with an overhead. You need to understand what your queries are doing and only add the appropriate indexes.

Redis on Windows Azure

We’re rolling Redis caching into our project at work, so I thought I’d give it a dummy run by first adding it into the Park Run site I made over the weekend. Is this overkill for such a simple site? Absolutely. The site works by screen scraping data from the official Park Run site. This is actually super-fast and there are no traffic considerations, so this is purely a learning exercise.

The Redis cache on Azure is brand new – right now it’s so new it’s only available via the Azure preview portal. Setup is simple; the only decisions I needed to make were the name, and to change the default location to Northern Europe (the same as the site). Once the cache is created, you can access the HostName and Keys you’ll need to connect to the cache from your code.

The client I’ve used is StackExchange.Redis (at work we need to run on Mono, so we’re using a fork of ServiceStack called NServiceStack), which can be installed into your project with NuGet.

Once that’s set up you can connect to the cache, using the HostName and Key as the password:

ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("<HOSTNAME>,ssl=true,password=<VALUE_OF_KEY>");
var cache = connection.GetDatabase();

The connection can be reused; it shouldn’t be created on a per-connection basis. The second line returns a reference to the database we can use to add and retrieve data. In the Park Run site, I’m using the cache as a simple key-value store: any classes populated by data grabbed from the Park Run website get serialised and chucked into the cache. Happily, Microsoft has made available a useful helper class to easily serialise my classes and add them to the cache using StringSet, so the code was as simple as this:

var cache = GetRedisCache();
var cachedRunnersData = (VenueData)cache.Get(venue);

if (cachedRunnersData != null)
    return cachedRunnersData;

// Screen scraping code goes here.
cache.Set(venue, venueData);
return venueData;
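
The cache-aside pattern above can be sketched in Ruby with a plain Hash standing in for Redis (the class and the scrape method are hypothetical, just to show the shape of the logic):

```ruby
# Illustrative cache-aside sketch: check the cache first, and only do the
# expensive work (here, pretend screen scraping) on a miss.
class VenueCache
  def initialize
    @cache = {} # a Hash stands in for Redis in this sketch
  end

  def fetch(venue)
    # Return the cached copy if we have one...
    cached = @cache[venue]
    return cached if cached

    # ...otherwise do the expensive work and cache the result for next time.
    data = scrape_venue_data(venue)
    @cache[venue] = data
    data
  end

  private

  def scrape_venue_data(venue)
    "results for #{venue}" # stands in for the screen-scraping code
  end
end

cache = VenueCache.new
p cache.fetch("winchester") # "results for winchester" (scraped, then cached)
p cache.fetch("winchester") # "results for winchester" (served from the cache)
```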

Park Run Graphs

I’ve been making good progress on improving my 5km running times recently. Having some tough competition from other runners at the weekly Park Runs in Winchester has helped, and I managed to knock out another sub-20-minute run for the second weekend on the trot.

However, when I was checking the results on the Park Run website, I thought it was a shame that they don’t show a graph of your weekly times so you could easily visualise the progress you’ve been making. Then I remembered I’m a coder and could just make it myself, so I spent an hour knocking a site together:

Park Run Graphs

I built it for myself, but if you’re a fellow park runner you can add your venue name to the URL to find yourself:

Or add your athlete number to the URL (if you know it) to see your own results. For example, here’s someone from Winchester I grabbed at random:

Results are scraped directly from the official Park Run UK site.

Zorb Football

Last weekend I was in Cambridge for Tim’s stag do, and we tried Zorb football for the first time. Playing on the morning after a heavy night out, on one of the hottest days of the year, was not optimal! Here’s the brutal footage:

How Google Ventures Run 5 Day Design Sprints

One of the best workshops I attended at UX London 2014 was from the Google Ventures Design Studio. It was run by two of the design partners, Jake Knapp and Daniel Burka, who talked about how they use 5 day prototyping sprints to design sites and apps for start ups. Some really interesting ideas emerged in the talk, and there were plenty of practical ideas that could be useful in my day-to-day work when thinking about new products and features.

The problem they’re trying to solve is that most software is usually created with several rounds of building, learning and iterating. This is typically how development work has gone everywhere I’ve worked, but it has some flaws in practice. The building phase always takes a lot of time, normally much longer than initially estimated, and once you’ve made something it normally generates its own overheads, slowing you down further in the future. Even if your initial design wasn’t that great, if it does something useful you’ll probably have gained some users, who will become reliant on features that you’re now stuck supporting even whilst you’re trying to build the next shiny thing. Throw it away and you annoy your existing customers. Keep supporting it and you’re burning up time and are stuck with features you might want to replace entirely.

The aim of the 5 day prototyping sprint is to try and bypass some of that by squeezing the maximum amount of design education into a very short amount of time. It’s been inspired by similar processes at Ideo and Stanford’s Institute of Design, and has been used by Google Ventures across a variety of domains, with problems of varying complexity, to provide companies with enough confidence to go out and build the products for real.

For a real life example of this process in action, have a look at this case working with Blue Bottle Coffee, to redesign their website.

The first thing I found interesting was how much of the process revolves around a series of self-enforced constraints, designed to add speed and urgency. A good example is the very first action of the sprint, which is to immediately book a series of customer interviews for the final day. This sets a firm deadline for the upcoming work.

Here’s an overview of what that 5 day sprint consists of:

Day 1: Understand
Dig into the design problem through research, competitive review, and strategy exercises.

Day 2: Diverge
Rapidly develop as many solutions as possible.

Day 3: Decide
Choose the best ideas and hammer out a user story.

Day 4: Prototype
Build something quick and dirty that can be shown to users.

Day 5: Validate
Show the prototype to real humans (in other words, people outside your company) and learn what works and what doesn’t work.

Notice how little time is spent actually building anything. It’s not until day 4, quite late in the schedule, that construction of a prototype begins.

During the afternoon workshop we covered the first couple of days in detail. In this blog post I’ll give you some details about those days; there are links at the end to Google Ventures’ own articles about the remaining days if you want to read more, and I’ll put up an article about the remaining three days soon.

Day One: Understand

The first day is all about understanding the design problem. For this day you’ll want to include as many people as possible, from the founders through to design, development and marketing. All of them will have different perspectives and information that will be useful. The goal is to gain a common understanding of the problem, and to this end an outside facilitator can help, as they’ll ask the kind of basic questions that might spark fresh ideas.

Typically this day consists of a series of exercises. These could include a talk by the CEO about the market and the opportunity. You might want to decide what metrics you’ll use to measure the success of the project (the HEART framework). Time can also be spent reviewing the current product – looking at what works well right now, and what isn’t so great – as well as watching lightning demos of other products in a similar space. In our workshop we also ran a series of user interviews to get the customer’s perspective.

Throughout these exercises all of the participants should be jotting down “how might we” questions on post-it notes. During the workshop we were prototyping a flat rental site, so examples that came up included “how might we show users the average house price of an area” and “how might we search for flats near our friends”.

Next, take all that information and figure out a single important user story that represents your product. This is the story that you will be building a prototype for. For our flat example, the group wanted to focus on the process of managing the move itself. Get this up on a whiteboard, so everyone is on the same page.

Day Two: Diverge

The goal of this day is to rapidly generate as many solutions as possible. I was surprised that so much of this was done independently, without group brainstorming. I think this is because talking isn’t actually a very effective way to come to a decision, and can lead to “design by committee”.

Each participant sketches out their solutions to the user story chosen on Day 1, drawing inspiration from the “how might we” notes that are all over the walls. These are very rough sketches on paper, done in several stages under a lot of time pressure. They’re personal, so you can put whatever you like down – no one else in the group gets to see them at this point.

First we all drew mind maps, with a deadline of about 5 minutes, to get all the basic ideas on paper at a very low fidelity. Next was a (stressful!) process called Crazy 8s, where we were given just 5 minutes in total to produce 8 designs – about 40 seconds each!

Finally there is a storyboarding stage. This does get shared with the rest of the group, so you get a bit longer to work on it – but still only about 10 minutes. You get a blank piece of paper with 3 post-it notes on it, and onto each note you draw some of the UI. Together they represent a progression through the site, and the paper can contain annotations. Each storyboard should be named (to make it easy to refer to in discussions) and should be completely standalone – all the information should be present without needing any further explanation. They’re also anonymous.

Next the storyboards get pinned up on the wall and everyone gets a chance to view and judge all the other entries. Everyone is also given a sheet of sticky coloured dots – these can be stuck on any part of a storyboard you think is a good idea. This forms a sort of “heat map” of the ideas you’ll want to discuss further.

Each design is discussed for a strict 3 minutes, and then there is a round of “super voting” using larger sticky dots. This is weighted in favour of the decision makers – maybe the CEO gets all the votes here, or the decision makers in the company get additional votes while everyone else only gets a single vote. This process is deliberately not democratic, and again helps fight design by committee.

What did I learn?

  • Use rapid prototyping to avoid the build, learn, iterate cycle. How many projects have I worked on that dive straight into building?
  • Deliberately introduce pressure: you’ll get things done fast if you have to. Even individual discussions should be time-boxed.
  • Talking is not an effective way to design. The goal is to avoid “Design by committee”.
  • Design should not be democratic. Design needs an opinion, not consensus.

Further reading:

How To Conduct Your Own Google Design Sprint

The Dead Simple Way Google Ventures Unlocks Great Ideas

The Product Design Sprint: Understand (Day 1)

The Product Design Sprint: Diverge (Day 2)

The Product Design Sprint: Decide (Day 3)

The Product Design Sprint: Prototype (Day 4)

The Product Design Sprint: Validate (Day 5)

Behind the Scenes with Blue Bottle and Google Ventures