
Building dynamic CI/CD pipelines
A CI/CD pipeline is a powerful workflow for releasing software updates quickly and precisely, though sometimes you need more than a static, one-size-fits-all approach. This article covers dynamic pipelines and how to create them.
CI/CD pipeline basics
A Continuous Integration and Continuous Deployment (CI/CD) pipeline accomplishes two major goals:
- Validates that the latest changes to your application code build and execute correctly
- Ships your changes to one or more environments for acceptance testing and release
To achieve these goals, a CI/CD pipeline needs to fulfill certain requirements. First, the pipeline should be reliable and repeatable. Assuming your source code changes are syntactically and functionally correct, the pipeline should build and deploy your application without errors.
Then, the pipeline should be entirely, or almost entirely, automated. Ideally, your pipeline will kick off automatically whenever you or another team member checks code into a buildable branch in your source control repository. The pipeline should deploy not only source code but also any accompanying resources, such as cloud infrastructure, that the application requires to run.
You may have some stages of your pipeline that require manual approval before changes can proceed (e.g., code review). Outside of these approvals, your pipeline should proceed without manual intervention.
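In Buildkite, for example, a manual approval can be modeled as a block step that pauses the pipeline until someone unblocks it. Here’s a minimal sketch, where scripts/tests.sh and scripts/deploy.sh are hypothetical stand-ins for your own tooling:

steps:
  - command: "scripts/tests.sh"
    label: "Run tests"
  - block: ":rocket: Approve production deploy"
  - command: "scripts/deploy.sh"
    label: "Deploy"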
Lastly, your CI/CD pipeline should work for all environments. The pipeline should be able to generate your development environment, any intermediate test environments that your software engineering lifecycle requires, and your production environment.
Stages of CI/CD pipelines
A CI/CD pipeline is generally broken down into a series of stages. These stages are repeated for each environment to which you are deploying.
These stages include:
Build: The pipeline verifies that the changes are syntactically correct and that it can build a new version of the application. This usually involves compiling any source code (for compiled languages such as Java, C#, and Rust) and bundling the application into a deployment package (such as a Docker container).
Validate: The pipeline ensures that the changes are functionally correct. Often this involves running a series of unit tests against the changed code to ensure it runs as you expect without introducing regressions.
Your pipeline may also run a series of other static and dynamic tests to ensure the application meets various functionality and security requirements. This can include scanning binary files for known vulnerabilities, scanning for embedded credentials, and signing binaries or script files prior to deployment.
Deploy: The pipeline creates your application’s runtime stack, installs the application in its target environment, and performs any configuration (e.g., data import and data migration) required for your app to run.
Verify: The pipeline ensures that the application runs as expected in its target environment. Typically, this means executing integration tests that verify your changes work both in isolation and end-to-end. The pipeline may also monitor a set of metrics (e.g., CPU load, successful HTTP invocations, runtime errors) to verify over time that the environment continues to operate within expected parameters.
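Mapped onto a pipeline definition, these stages might look like the sketch below. The step structure is Buildkite’s; the scripts the steps call are hypothetical placeholders for your own build, test, and deploy tooling:

steps:
  - command: "scripts/build.sh"
    label: "Build"
  - command: "scripts/test.sh"
    label: "Validate"
  - wait
  - command: "scripts/deploy.sh"
    label: "Deploy"
  - command: "scripts/smoke-test.sh"
    label: "Verify"

The wait step ensures that the build and validation steps succeed before deployment begins.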
Why dynamic pipelines?
Most teams building their first pipelines create static pipelines, defining them in a YAML file or similar on-disk configuration. Static pipelines have serious limitations, though: as the complexity of your application and the number of environments you support grow, building everything from static config becomes cumbersome, if not impossible.
To build CI/CD pipelines that are automated, reliable, and resilient, and that can deploy to multiple environments, you need dynamic pipelines. Dynamic pipelines use a combination of static configuration and code to customize your process for each branch and environment of your build.
With dynamic pipelines, you can replace static configuration with code. Hasura used Buildkite’s dynamic pipelines capability to replace over 2,000 lines of YAML with a Go program. The new system can generate a rich environment with complex flow and branching, without unwieldy YAML configs.
How do you build dynamically?
Many CI/CD pipeline tools, such as CircleCI, only support defining a pipeline via declarative syntax. By contrast, Buildkite’s CI/CD pipeline orchestrator supports dynamically built pipelines out of the box. With Buildkite, you can create pipelines that execute certain steps only on specific branches and deployments (e.g., performing code signing or data migration tasks against a production deployment).
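For instance, Buildkite steps accept an if conditional, so a step can be limited to a particular branch. In this sketch, scripts/sign.sh is a hypothetical code-signing task:

steps:
  - command: "scripts/sign.sh"
    label: "Code signing"
    if: build.branch == "main"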
Steps to building CI/CD pipelines dynamically
It’s simple to build dynamic pipelines with Buildkite. Here’s an example of adding dynamic steps to a pipeline.
Create a pipeline
The first step is to create a new pipeline in Buildkite. You’ll need a Buildkite account; sign up here for our Developer plan (it’s free!).
The easiest way to get started is from one of these sample projects: our Bash shell script or our PowerShell script.
These sample pipelines don’t contain a project to build by default; instead, they provide the framework for a dynamic pipeline within Buildkite. The build defines the following files:
- A template.yml that defines the build, and
- A pipeline.yml that defines the initial steps to execute in the pipeline.
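For orientation, the template.yml in these samples boils down to a single step that uploads the pipeline definition (the sample repos may include additional details beyond this sketch):

steps:
  - command: "buildkite-agent pipeline upload"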
The pipeline steps can execute scripts in any language supported by your build container. For this example, pipeline.yml executes a simple file: a Bash shell script (script.sh) that echoes some output to the Buildkite console.
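The sample repo’s script isn’t reproduced here, but a minimal script.sh in this spirit could be as simple as:

#!/bin/bash
set -eu

# Echo some output to the Buildkite console
echo "Hello from the dynamic pipeline sample!"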
Start a build agent
Before adding any custom build steps, you should verify that you can run the base project as is. This will execute the pre-command hook, the sample pipeline’s default build step, and the post-command hook.
The easiest way to create a build agent is on AWS. Buildkite offers a working AWS CloudFormation template that generates an auto-scaling group of Elastic Compute Cloud (EC2) agents.
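If you prefer the AWS CLI to the console walkthrough below, creating the stack looks something like this, with your agent token from the Agents page in $AGENT_TOKEN. The template URL placeholder and the BuildkiteAgentToken parameter name are assumptions to verify against the instructions on the Agents page:

aws cloudformation create-stack \
  --stack-name buildkite-agents \
  --template-url "<Elastic CI Stack template URL>" \
  --parameters ParameterKey=BuildkiteAgentToken,ParameterValue="$AGENT_TOKEN" \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM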
To start an agent, navigate to the Agents page. At the top, you will see a list of instructions for creating build agents. On the right-hand side, you’ll also see a button that says Reveal Agent Token.
Click Reveal Agent Token and copy the value. Then, click AWS to launch the Auto Scaling group in your AWS account. This will take you to the AWS CloudFormation page for the Buildkite Agent Auto Scaling stack.
On this page, click Next. Then, paste the value you copied from the Reveal Agent Token field into the BuildkiteAgentToken field.
Note: It’s best practice to use the BuildkiteAgentTokenParameterStorePath and BuildkiteAgentTokenParameterStoreKMSKey properties to retrieve this value securely from the SSM Parameter Store. We use BuildkiteAgentToken here for simplicity.
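For reference, storing the token in the Parameter Store ahead of time looks something like the following; the parameter name /buildkite/agent-token is an arbitrary example you’d then pass as BuildkiteAgentTokenParameterStorePath:

aws ssm put-parameter \
  --name "/buildkite/agent-token" \
  --type SecureString \
  --value "$AGENT_TOKEN"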
Click through the rest of the AWS CloudFormation prompts until you get to the Submit button. Click this button to create your Auto Scaling group.
Once the stack completes, you’ll need to set a desired number of instances for the Auto Scaling group. By default, this number is set to zero, which means no instances are started. To change this, navigate to EC2 in the AWS Management Console and on the left-hand navigation bar under Auto Scaling, click Auto Scaling Groups. You should see a group that starts with the name of the CloudFormation stack you created.
Click this and, under Group details, click Edit. Set the Desired capacity and Minimum capacity fields to 1 to create a single build agent instance.
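You can also make the same change from the AWS CLI (substitute your Auto Scaling group’s actual name):

aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name "<your-buildkite-asg-name>" \
  --min-size 1 \
  --desired-capacity 1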
After a few minutes, your first agent should start and you should see it appear on your Agents page.
Run the build and examine the output
With that in place, you can run the first build. Go to the Builds page and click New Build. Specify a name for the build, the commit to use, and the branch to build against.
Click Create Build.
Buildkite sends your build to an available build agent. When it’s done, it displays the results in an easy-to-read format in the Buildkite web console.
The first command is run from the template.yml file: buildkite-agent pipeline upload. This configures the environment, runs pre-command and post-command hooks, and loads the pipeline definition.
The second command is a build step from the pipeline file (pipeline.yml). This executes the shell script. In this example, the script prints an ASCII thumbs up and congratulates you on a job well done.
Add custom steps
So far in this example, we don’t have a dynamic CI/CD pipeline; we have a static pipeline that runs a script. Buildkite helps make this pipeline dynamic with a few simple source code changes.
The key to generating a dynamic pipeline is to move from a static pipeline definition (contained above in pipeline.yml) to one that’s generated from code. Then, you can change the build step to accept the output of this script as input to the buildkite-agent pipeline upload command.
To do this, you would change the code in the Buildkite repo’s .buildkite directory. You would alter the existing shell script, or create a new one, so that it generates the YAML pipeline format Buildkite uses. This doesn’t have to be a shell script; you can invoke an executable for any language runtime that your build agent supports.
As an example, this script iterates over the specs directory and finds a suite of tests that should be run prior to deployment. It echoes these tests out as Buildkite commands in pipeline.yml format. It also adds a command to deploy the project, though only if we’re building off of the main branch. The script leverages the environment variables that the Buildkite agent injects into your environment to accomplish this.
#!/bin/bash
set -eu

echo "steps:"

# A step for each dir in specs/
find specs/* -type d | while read -r D; do
  echo "  - command: \"$D/test.sh\""
  echo "    label: \"$(basename "$D")\""
done

# A deploy step, only if it's the main branch
if [[ "$BUILDKITE_BRANCH" == "main" ]]; then
  echo "  - wait"
  echo "  - command: \"echo Deploy!\""
  echo "    label: \":rocket:\""
fi
With this script uploaded, you can then delete your pipeline.yml. In the template.yml, you would change the command line from:
- command: "buildkite-agent pipeline upload"
to:
- command: ".buildkite/script.sh | buildkite-agent pipeline upload"
This pipes the output of your script (a pipeline definition in YAML) into the Buildkite agent, which receives and executes the commands you generate.
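Because the generator is an ordinary script, you can preview the pipeline it produces before committing. Assuming hypothetical specs/login and specs/search test directories, running it locally as if on the main branch would print:

$ BUILDKITE_BRANCH=main .buildkite/script.sh
steps:
  - command: "specs/login/test.sh"
    label: "login"
  - command: "specs/search/test.sh"
    label: "search"
  - wait
  - command: "echo Deploy!"
    label: ":rocket:"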
Challenges in building dynamic CI/CD pipelines
Generating a CI/CD pipeline spec on the fly can be a challenge in systems that require a valid definition before the build begins. Most developers have to implement workarounds, such as creating pre-build hooks that overwrite the existing configuration.
Many development teams achieve dynamic pipelines by running their entire build process as a set of deployment scripts. Unopinionated systems like Jenkins make this easy. The downside is that you end up handling concurrency control, logging, monitoring, pipeline security, artifact storage, and more in code. Build scripts become large, unwieldy, and hard to maintain.
Teams with large, complex software ecosystems can find it difficult to manage concurrency and complexity within highly dynamic pipelines. Wix ran into this problem with its own CI/CD system, which handles 9,000 backend builds a day. Without a way to prioritize builds and offload different build types to different agent pools, it fell victim to “build storms” in which low-priority feature tests blocked high-priority hotfixes from shipping to production.
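Buildkite mitigates build storms with agent queues and step-level concurrency controls. In this sketch, a deploy step is routed to a dedicated agent pool and serialized so only one production deploy runs at a time; the queue and group names are arbitrary examples:

steps:
  - command: "scripts/deploy.sh"
    label: "Deploy hotfix"
    agents:
      queue: "deploy"
    concurrency: 1
    concurrency_group: "prod-deploy"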
How Buildkite can help
Buildkite’s first-class support for dynamic pipelines makes it the easiest, most scalable tool for building CI/CD pipelines.
Buildkite offers a number of features that make building CI/CD pipelines easier, including integrated artifact support, CloudWatch integration for monitoring, concurrency control, and near-unlimited scalability. Plus, you can monitor and control build activity across all of your projects through a single view.
To see it in action, check out our on-demand webinars and sign up for Buildkite.