
March 11, 2015 | Microservices

Locally Developing Micro-Services with Docker Compose

One of the pain points experienced with developing microservices is that it often proves too cumbersome to replicate an environment for local development. This usually means the first time an application talks to its “real” dependencies is when it gets deployed to a shared testing environment. A relatively laborious continuous integration process usually precedes this deployment, making our feedback cycle longer than we would like. In this post I describe a workflow that aims to improve that, using Docker and Docker Compose (formerly known as fig).

WRITTEN BY

Bart Spaans

Docker Compose Basics

Docker Compose provides a way to manage related Docker containers on a single machine. The services are described in a simple YAML file, which makes it easy to define the dependencies between them.

web:
  build: web/
  ports:
  - "8080:8080"
  links:
  - postgres

postgres:
  image: postgres
  expose:
  - "5432"

This example defines two services: one that is built from the Dockerfile in the web/ directory, and one that is pulled down from the public Docker Hub. A link is then set up between them, which allows the ‘web’ service to reach the ‘postgres’ service by connecting to the address ‘postgres:5432’.
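
To see the link in action we can start a one-off container for the ‘web’ service, which Compose will wire up to ‘postgres’ for us. This assumes, purely for illustration, that the ‘web’ image happens to have the psql client installed:

docker-compose run web psql -h postgres -p 5432 -U postgres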

Simply running ‘docker-compose up’ will build, pull down and run all the containers that are needed, and map port 8080 on the host to the ‘web’ service.
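
A few other standard Compose commands cover most of the day-to-day workflow:

docker-compose up -d      # start everything in the background
docker-compose ps         # list the running containers
docker-compose logs web   # show the output of the ‘web’ service
docker-compose stop       # stop the containers without removing them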

Replicating Micro Service Environments

The above example is useful when the number of services is small, but in a typical micro-services architecture your definition would look more like this:

web:
  build: web/
  ports:
  - "8080:8080"
  links:
  - postgres
  - payments
  - catalog

payments:
  build: payments/
  ports:
  - "8081:8080"

catalog: 
  build: catalog/ 
  ports: 
  - "8082:8080" 
  links: 
  - postgres

[…SNIP…]

postgres: 
  image: postgres 
  expose: 
  - "5432"

At this point it becomes non-trivial to update, build and maybe even run all of the dependencies on a single machine, setting you back to square one.

Continuous Integration

To solve the problem of having to update and build all the dependent services, we introduce a build system, such as Jenkins, that builds Docker images for our micro-services and pushes them to our own Docker Hub. We can then refer to the images in our Hub like this:

web:
  image: our.docker.hub.host/web:latest
  ports:
  - "8080:8080"
  links:
  - postgres

This way we will always get the latest dependencies. A nice twist on this: assuming Docker is our deployment unit, we could tag our containers with the environment they are running in. This makes it trivial to replicate locally, for example, the services running in production, which could be handy for debugging a live issue.

web:
  image: our.docker.hub.host/web:prod
  ports:
  - "8080:8080"
  links:
  - postgres
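
On the build-server side this amounts to little more than a handful of standard Docker commands per service. A rough sketch, with the registry host and tags purely illustrative:

docker build -t our.docker.hub.host/web:latest web/
docker push our.docker.hub.host/web:latest

# after a successful deployment, re-tag the image with its environment
docker tag our.docker.hub.host/web:latest our.docker.hub.host/web:prod
docker push our.docker.hub.host/web:prod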

Local Development

So far we have focused on the replication of an environment, but how do we develop against it?

Let’s assume that we have 20 projects, all stored in their own version control repository. Each of these repositories should have a Dockerfile that runs and maybe even builds the project, and a docker-compose.yml file that describes its dependencies as we have done above.
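
For a Java-based service, such a Dockerfile could be as small as the sketch below; the base image, jar name and port are assumptions for illustration rather than a prescription:

FROM java:8
# copy the pre-built service artifact into the image
COPY target/web.jar /opt/web.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/web.jar"]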

The project itself doesn’t necessarily have to be part of the docker-compose.yml file. One reason not to include it is that when you are developing a service you’ll often have to kill, rebuild and restart its container; this can be slow with Docker Compose, as it also restarts all of the service’s dependencies. To remedy this we can use the plain old docker command to run the service and leave the dependencies running in the background, although this does mean we have to link them up manually (unless they publish all of their ports on the host).
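
Concretely, that workflow might look something like the following. The container names such as ‘myshop_postgres_1’ are only an assumption: by default Docker Compose names containers after the directory holding the docker-compose.yml, followed by the service name and an index.

# start only the dependencies in the background
docker-compose up -d postgres payments catalog

# build and run the service under development with plain docker,
# linking it to the containers started by Compose
docker build -t web .
docker run --rm -p 8080:8080 \
  --link myshop_postgres_1:postgres \
  --link myshop_payments_1:payments \
  --link myshop_catalog_1:catalog \
  web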

Another benefit of not defining the project in the docker-compose.yml file is that we can share and reuse the definitions across projects and store them in a common place, making them easier to maintain.

Caveats

Being able to reliably pull down and run all of the needed dependencies with a single command has some clear benefits, but sometimes it will only get you so far.

If it’s hard to isolate subsets of your services, your local machine might not cope; and if you’re dependent on software that can’t be run in Docker, you will have to work around that. At that point, putting in mocked services can buy you some time, but in essence we are back to the original problem, where the application only talks to its “real” dependencies once it has gone through the build and deployment pipeline. A layered architecture, in which each service only has a small tree of dependencies to traverse, can help here.

Conclusion

Docker Compose is by no means a silver bullet, but when it works, it works beautifully. It is easy to set up, and if you’re already familiar with Docker it introduces very few new concepts. This also gives it an advantage over a local installation of slightly more heavyweight frameworks such as Kubernetes or Marathon, although a similar approach could be taken there.
