Deploying to Kubernetes

Estimated time: 30 minutes

This tutorial demonstrates deploying to Kubernetes using Buildkite best practices. The tutorial uses one pipeline for tests and another for deploys. The test pipeline runs tests and pushes a Docker image to a registry. The deploy pipeline uses the DOCKER_IMAGE environment variable to create a Kubernetes deployment via kubectl. Then, you'll see how to link the two together to automate deploys from master.

First up, you need to add a step to your existing test pipeline that pushes a Docker image. Also, check that your agents have kubectl access to your target cluster. Refer to the notes at the end of the tutorial for tips on setting this up.
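A minimal version of that push step might look like the following. The registry and image names here are placeholders; substitute your own:

```yaml
steps:
  - label: ":docker: Build and push"
    command:
      # Placeholder registry/image name; replace with your own
      - docker build -t "registry.example.com/app:${BUILDKITE_BUILD_NUMBER}" .
      - docker push "registry.example.com/app:${BUILDKITE_BUILD_NUMBER}"
```

Tagging with the build number gives each build a unique, traceable image that the deploy pipeline can reference later.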

Final Test Pipeline

Create the Deploy Pipeline

This section covers creating a new Buildkite pipeline that loads steps from .buildkite/pipeline.deploy.yml. We'll use a trigger step later on to connect the test and deploy pipelines.

The first step will be a pipeline upload using our new deploy pipeline YML file. Create a new pipeline. Enter buildkite-agent pipeline upload .buildkite/pipeline.deploy.yml in the commands to run field.

Creating a New Pipeline

Now create .buildkite/pipeline.deploy.yml with a single step. We'll write the deploy script in the next step.

.buildkite/pipeline.deploy.yml
steps:
  - label: ":rocket: Push to :kubernetes:"
    command: script/buildkite/deploy
    concurrency: 1
    concurrency_group: deploy/tutorial

Set concurrency and concurrency_group when updating mutable state. These settings ensure only one step runs at a time.

Writing the Deploy Script

The next step is writing a deploy script that generates a Kubernetes deployment manifest from the DOCKER_IMAGE environment variable.

Let's start with the manifest file. This sample file creates a Deployment with three replicas (horizontal scale, in Kubernetes lingo), each listening on port 3000. Change the containerPort to fit your application.

The official deployment documentation covers much more than what fits in this tutorial. Refer back to these docs for information on setting CPU and memory, controlling networking, deployment update strategies, and how to expose your application to the internet.

Let's call this file k8s/deployment.yml.

k8s/deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tutorial
  labels:
    app: tutorial
spec:
  # TODO: replace with a value that fits your application
  replicas: 3
  selector:
    matchLabels:
      app: tutorial
  template:
    metadata:
      labels:
        app: tutorial
    spec:
      containers:
        - name: app
          image: "${DOCKER_IMAGE}"
          ports:
            # TODO: replace with the correct port for your application
            - containerPort: 3000

Note that the manifest includes ${DOCKER_IMAGE}. There is no environment variable substitution in YAML or in kubectl itself. This is where our custom deploy script comes in. The deploy script will use envsubst ("environment substitute"; docs) as a minimal templating solution. The resulting output can be piped directly into kubectl.

The full script has three parts:

  1. Check that $DOCKER_IMAGE is set
  2. Generate a complete manifest with envsubst and apply it with kubectl
  3. Wait for Kubernetes to complete the deploy

This fits neatly into a Bash script. Here's the complete script/buildkite/deploy:

#!/usr/bin/env bash

set -euo pipefail

if [ -z "${DOCKER_IMAGE:-}" ]; then
  echo ":boom: \$DOCKER_IMAGE missing" 1>&2
  exit 1
fi

manifest="$(mktemp)"

echo '--- :kubernetes: Shipping'

envsubst < k8s/deployment.yml > "${manifest}"
kubectl apply -f "${manifest}"

echo '--- :zzz: Waiting for deployment'
kubectl wait --for condition=available --timeout=300s -f "${manifest}"

Now that everything is in place, you can test the pipeline. All you need is your Docker image.

Test the Pipeline

Open the deploy pipeline and click "New Build". Click "Options" and set the DOCKER_IMAGE environment variable.

New Manual Build

Assuming your agents have the required access: success! πŸŽ‰

Manual Build Success

Continuous Deployment

We'll use a trigger step to connect the test and deploy pipelines. This effectively creates a continuous deployment pipeline.

First, add a wait step at the end of your existing .buildkite/pipeline.yml. Otherwise, deploys will trigger at the wrong time, and even for failed builds!

pipeline.yml
  # Add a wait step to only deploy after all steps complete
  - wait

  # More steps to follow

Next add a trigger step:

pipeline.yml
  - label: 'πŸš€ Deploy'
    # TODO: replace with your deploy pipeline's name
    trigger: kubernetes-tutorial-deploy
    build:
      message: "${BUILDKITE_MESSAGE}"
      commit: "${BUILDKITE_COMMIT}"
      branch: "${BUILDKITE_BRANCH}"
      env:
        # TODO: replace with your Docker image name
        DOCKER_IMAGE: "asia.gcr.io/buildkite-kubernetes-tutorial/app:${BUILDKITE_BUILD_NUMBER}"
    # Only trigger on master builds
    branches: master

This trigger step creates a build with the same message, commit, and branch. buildkite-agent pipeline upload interpolates environment variables, so the correct values are replaced when the pipeline starts. The env setting passes along the DOCKER_IMAGE environment variable.

Lastly, the branches option indicates to only build on master. This prevents deploying unexpected topic branches.

It's magic time. Push some code. πŸŽ‰ Continuous deployment! If something goes wrong, verify that your kubectl and Kubernetes versions are compatible. You can check with kubectl version. If your agents cannot connect to the cluster, check the kubectl access section for setup advice.

Final Test Pipeline

Next Steps

Congratulations! πŸŽ‰ You've set up a continuous deployment pipeline to Kubernetes. Practically speaking, there are some things to do next.

Configuring kubectl Access

Configuring kubectl access depends on your infrastructure. Here's an overview for common scenarios.

If you're on GCP using agents on GCE and a GKE cluster:

  1. Grant GCE agents GKE access with a service account
  2. Install gcloud on agent instances
  3. Use gcloud container clusters get-credentials to get kubectl access
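Step 3 might look like this; the cluster, zone, and project names are placeholders for your own:

```shell
# Placeholder cluster/zone/project names; replace with your own.
# This writes cluster credentials into the agent's kubeconfig.
gcloud container clusters get-credentials my-cluster \
  --zone us-central1-a \
  --project my-project
```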

If you're on AWS using agents on EC2 and an EKS cluster:

  1. Grant agent access to EKS API calls with an instance profile
  2. Register the Buildkite agent IAM role with EKS
  3. Install kubectl on agents
  4. Install IAM authenticator on agents
  5. Install the AWS CLI
  6. Use aws eks update-kubeconfig to get kubectl access
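Step 6 might look like this; the cluster name and region are placeholders for your own:

```shell
# Placeholder cluster name and region; replace with your own.
# This writes an EKS entry into the agent's kubeconfig.
aws eks update-kubeconfig --name my-cluster --region us-east-1
```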