Containerized Builds with Docker

The Buildkite Agent ships with built-in support for running your builds in Docker containers. Running your builds with Docker allows each pipeline to define and document its testing environment, greatly simplifies your build servers, and provides build isolation when parallelizing your builds.

These instructions are for the current Buildkite Agent v2 stable series. For the v3 beta, please use the Docker Buildkite Plugin.

Creating a Docker Compose config

For Docker-based builds we recommend using Docker Compose as it allows each pipeline to define its own docker-compose.yml with dependent containers and environment variables to be passed through.

Here's an example of a docker-compose.yml file for a Ruby on Rails application that depends on Postgres, Redis and Memcache:

version: '2'
services:
  db:
    image: postgres
  redis:
    image: redis
  memcache:
    image: tutum/memcached
  app:
    build: .
    volumes:
      - .:/app
    links:
      - db
      - redis
      - memcache
    environment:
      PGHOST: db
      PGUSER: postgres
      REDIS_URL: redis://redis
      MEMCACHE_SERVERS: memcache

NOTE: The Dockerfile must include a WORKDIR directive matching the mount point of the app. So you would use WORKDIR /app in the example above.
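For reference, a minimal Dockerfile for the app service above might look something like this (the base image and setup steps are illustrative, not prescriptive):

```dockerfile
FROM ruby:2.3

# Must match the .:/app volume mount in docker-compose.yml
WORKDIR /app

# Install gem dependencies first so they can be cached as a layer
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy in the rest of the application
COPY . .
```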

Mounting . (the current build directory) as a volume in the container allows any changes made inside the container to be visible to the build agent outside it, which is required if you want the agent to be able to access and upload artifacts created by the build.

Another example of a production docker-compose.yml configuration comes from the buildkite-agent Golang source code:

version: '2'
services:
  app:
    build: .
    volumes:
      - .:/go/src/
      - /usr/bin/buildkite-agent:/usr/bin/buildkite-agent

The above example has an additional volume: the buildkite-agent binary itself, so the build scripts can use the buildkite-agent artifact and buildkite-agent meta-data commands. It also passes through environment variables needed for the artifact command, as well as two secret keys set up by an environment hook.
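With the binary mounted, build scripts inside the container can call it directly. For example (the key, value and artifact glob below are illustrative only):

```shell
# Store a value that later build steps can read back with "meta-data get"
buildkite-agent meta-data set "release-version" "1.0.0"

# Upload build artifacts created inside the container
buildkite-agent artifact upload "pkg/*.tar.gz"
```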

Testing the config

To check that everything is set up correctly, you can invoke the docker-compose command yourself on a development machine:

docker-compose run app script/tests
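You can also validate the compose file itself before running anything, and clean up afterwards (a sketch; script/tests is the example test command from above):

```shell
# Validate docker-compose.yml without starting any containers
docker-compose config --quiet

# Run the tests, then remove the containers that were created
docker-compose run app script/tests
docker-compose down
```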

Once you've got everything working, you can configure your build pipeline and run it with Buildkite.

Configuring the build step

First, we need to add the following environment variable to our build, either on an individual build step or as a pipeline-level environment variable:

BUILDKITE_DOCKER_COMPOSE_CONTAINER=app

This will tell the agent to run each build step like so:

docker-compose -f docker-compose.buildkite.yml run app <command>
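If you manage your steps with a pipeline.yml, the environment variable can be set per step. A sketch (the step command is illustrative):

```yaml
steps:
  - command: "script/tests"
    env:
      BUILDKITE_DOCKER_COMPOSE_CONTAINER: "app"
```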

If you need to, you can also specify an alternate Docker Compose config file (the filename below is just an example):

BUILDKITE_DOCKER_COMPOSE_FILE=docker-compose.tests.yml

Adding buildkite-agent to the docker group

To allow buildkite-agent to use the Docker client you’ll need to ensure its user has the necessary permissions. For most platforms this means adding the buildkite-agent user to your system’s docker group, and then restarting the Buildkite Agent to ensure it is running with the correct permissions. See your platform’s Docker installation instructions for more details.
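On most Linux systems this looks something like the following (the restart command assumes a systemd-based install; adjust for your platform):

```shell
# Add the buildkite-agent user to the docker group
sudo usermod -a -G docker buildkite-agent

# Restart the agent so it picks up the new group membership
sudo systemctl restart buildkite-agent
```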

Adding a cleanup task

Even though the Buildkite Agent cleans up images after each run, your Docker host can inevitably fill up with unused images over time.

We recommend running docker-gc or a similar tool on a daily basis.
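One way to schedule this, assuming docker-gc is installed at /usr/sbin/docker-gc (the path may differ on your system), is a daily cron entry:

```shell
# /etc/cron.d/docker-gc (illustrative location)
# Remove unused containers and images every day at 3am
0 3 * * * root /usr/sbin/docker-gc
```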

Docker without Docker Compose

The Buildkite Agent also supports running builds for pipelines that only need a single container and Dockerfile. To use plain Docker instead of Docker Compose, add the following environment variable to your build step configuration:

BUILDKITE_DOCKER=true

For each build job the agent will now build a Docker image and run the build step’s command inside a container.
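Conceptually this is similar to running the following yourself (the image tag and command are illustrative; the agent manages the actual tags and options):

```shell
# Build an image from the pipeline's Dockerfile
docker build -t my-pipeline-build .

# Run the step's command inside a container based on that image
docker run my-pipeline-build script/tests
```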

If you need to, you can also specify an alternate Dockerfile (the filename below is just an example):

BUILDKITE_DOCKER_FILE=Dockerfile.ci

Creating Your Own Docker Infrastructure

If your team has significant Docker experience, you might find it worthwhile to invoke your own runner scripts rather than using the simpler built-in Docker support.

To do this see the agent hooks and parallelizing builds documentation.