Defining Your Pipeline Steps
Pipeline steps are defined in YAML and are either stored in Buildkite or in your repository using a pipeline.yml file.
Defining your pipeline steps in a pipeline.yml file gives you access to more configuration options and environment variables than the web interface, and allows you to version, audit, and review your build pipelines alongside your source code.
Create a pipeline from the Pipelines page of Buildkite using the ➕ button.
Required fields are Name and Repository.
You can set up webhooks at this point, but this step is optional. Webhook setup instructions for your specific repository provider can be found in your pipeline settings.
There are two ways to define steps in your pipeline: using the YAML step editor in Buildkite, or with a pipeline.yml file. The web steps visual editor is still available if you haven't migrated to YAML steps, but it will be deprecated in the future.
If you have not yet migrated to YAML steps, you can do so on your pipeline's settings page. See the Migrating to YAML Steps guide for more information about the changes and the migration process.
However you add steps to your pipeline, keep in mind that each step may run on a different agent. It is good practice to install your dependencies in the same step that uses them.
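For instance, a single command step can run dependency installation and the tests together on the same agent (the commands here are illustrative):

```yaml
steps:
  - label: ":test_tube: Test"
    command:
      # Install dependencies on the same agent that runs the tests
      - "bundle install"
      - "bundle exec rspec"
```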
If you're using YAML steps, you can set defaults which will be applied to every command step in a pipeline unless they are overridden by the step itself. You can set default agent properties and default environment variables:
- agents - A map of agent characteristics, such as queue, that restrict which agents the command will run on
- env - A map of environment variables to apply to all steps
For example, to set the step blahblah.sh to use the something queue and the step yada.sh to use a different queue:
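A sketch of what this could look like (the overriding queue name and the environment variable are illustrative, not from the original example):

```yaml
# Defaults applied to every command step unless overridden by the step itself
agents:
  queue: "something"
env:
  LLAMAS_ENABLED: "true"     # hypothetical default environment variable

steps:
  - command: "blahblah.sh"   # runs on the default "something" queue
  - command: "yada.sh"
    agents:
      queue: "another-queue" # hypothetical override of the default queue
```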
To add steps using the YAML editor, click the 'Edit Pipeline' button on the Pipeline Settings page.
Starting your YAML with the steps object, you can add as many steps as you require of each different type. Quick reference documentation and examples for each step type can be found in the sidebar on the right.
Before getting started with a pipeline.yml file, you'll need to tell Buildkite where it will be able to find your steps. In the YAML steps editor in your Buildkite dashboard, add the following YAML:
steps:
  - label: ":pipeline: Pipeline upload"
    command: buildkite-agent pipeline upload
When you eventually run a build from this pipeline, this step will look for a directory called .buildkite containing a file named pipeline.yml. Any steps it finds inside that file will be uploaded to Buildkite and will appear during the build.
When using WSL2 or PowerShell Core, you cannot add a buildkite-agent pipeline upload command step directly in the YAML steps editor. To work around this, there are two options:
- Use the YAML steps editor alone
- Place the buildkite-agent pipeline upload command in a script file. In the YAML steps editor, add a command to run that script file. It will upload your pipeline.
If you're using any tools that ignore hidden directories, you can store your pipeline.yml file either in the top level of your repository, or in a non-hidden directory called buildkite. The upload command will search these places if it doesn't find a pipeline.yml file in a .buildkite directory in your repo.
The following example YAML defines a pipeline with one command step that will echo 'Hello' into your build log:
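A minimal pipeline.yml matching that description (the label is illustrative):

```yaml
steps:
  - label: "Example Test"
    command: echo "Hello"
```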
With the above example code in a pipeline.yml file, commit and push the file up to your repository. If you have set up webhooks, this will automatically create a new build. You can also create a new build using the 'New Build' button on the pipeline page.
For more example steps and detailed configuration options, see the example pipeline.yml below, or the step type specific documentation:
If your pipeline has more than one step and you have multiple agents available to run them, they will automatically run at the same time. If your steps rely on running in sequence, you can separate them with wait steps. This will ensure that any steps before the 'wait' are completed before steps after the 'wait' are run.
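For example, assuming hypothetical build and deploy scripts, a wait step ensures the build completes before the deploy begins:

```yaml
steps:
  - command: "build.sh"   # hypothetical build script
  - wait
  - command: "deploy.sh"  # hypothetical deploy script; runs only after build.sh completes
```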
When a step is run by an agent, it will be run with a clean checkout of the pipeline's repository. If your commands or scripts rely on the output from previous steps, you will need to either combine them into a single script or use artifacts to pass data between steps. This enables any step to be picked up by any agent, and allows steps to run in parallel to speed up your build.
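As a sketch, one step can upload artifacts that a later step downloads (the script names and paths are illustrative):

```yaml
steps:
  - label: "Build"
    command: "scripts/build.sh"   # hypothetical; writes its output into dist/
    artifact_paths: "dist/*"      # uploaded automatically when the step finishes
  - wait
  - label: "Test"
    command:
      - buildkite-agent artifact download "dist/*" .
      - "scripts/test.sh dist/"   # hypothetical
```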
When you run a pipeline, a build is created. The following diagram shows you how builds progress from start to end.
Build state can be one of
You can query for finished builds to return builds in any of the following states:
When a triggered build fails, the step that triggered it will be stuck in the running state forever.
When all the steps in a build are skipped (either by using the skip attribute or by using an `if` condition), the build state will be marked as `not_run`.
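For example, a build whose only step has a false `if` condition ends up marked `not_run` (the condition and script are illustrative):

```yaml
steps:
  - label: "Deploy"
    command: "deploy.sh"         # hypothetical
    if: build.branch == "main"   # on any other branch this step, and so the build, is not run
```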
When you run a pipeline, a build is created. Each step in the pipeline becomes a job in the build, and the jobs are then distributed to available agents. Job states have a similar flow to build states, but with a few extra states. The following diagram shows you how jobs progress from start to end.
As well as the states shown in the diagram, the following progressions can occur:
- Jobs become broken when their configuration prevents them from running. This might be because their branch configuration doesn't match the build's branch, or because a conditional returned false.
- This is distinct from skipped jobs, which might happen if a newer build is started and build skipping is enabled. Broadly, jobs break because of something inside the build, and are skipped by something outside the build.
- Jobs can be canceled intentionally, either using the Buildkite UI or one of the APIs.
The REST API does not return finished, but returns passed or failed according to the exit status of the job. It also lists scheduled for legacy compatibility.
Job state can be one of
Each job in a build also has a footer that displays exit status information. It may include an exit signal reason, which indicates whether the Buildkite agent was stopped or the job was canceled.
Here's a more complete example based on the Buildkite agent's build pipeline. It contains script commands, wait steps, block steps, and automatic artifact uploading:
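A sketch of such a pipeline, with illustrative script names and artifact paths, could look like this:

```yaml
steps:
  - label: ":hammer: Tests"
    command: "scripts/tests.sh"           # hypothetical test script
    artifact_paths: "logs/**/*"           # uploaded automatically when the step finishes
  - wait
  - label: ":package: Build binaries"
    command: "scripts/build-binaries.sh"  # hypothetical
    artifact_paths: "pkg/*"
  - block: ":shipit: Release"             # pauses the build until unblocked in the UI
  - label: ":rocket: Release"
    command: "scripts/release.sh"         # hypothetical
```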
Buildkite pipelines are made up of the following step types:
By default the pipeline upload step reads your pipeline definition from .buildkite/pipeline.yml in your repository. You can specify a different file path by adding it as the first argument:
steps:
  - label: ":pipeline: Pipeline upload"
    command: buildkite-agent pipeline upload .buildkite/deploy.yml
A common use for custom file paths is when separating test and deployment steps into two separate pipelines. Both pipeline.yml files are stored in the same repo, and both Buildkite pipelines use the same repo URL. For example, your test pipeline's upload command could be:
buildkite-agent pipeline upload .buildkite/pipeline.yml
And your deployment pipeline's upload command could be:
buildkite-agent pipeline upload .buildkite/pipeline.deploy.yml
For a list of all command line options, see the buildkite-agent pipeline upload documentation.
Because the pipeline upload step runs on your agent machine, you can generate pipelines dynamically using scripts from your source code. This provides you with the flexibility to structure your pipelines however you require.
The following example generates a list of parallel test steps based upon the test/* directory within your repository:
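A sketch of such a generator script, assuming tests live in test/ and a hypothetical tests/run runner:

```shell
#!/usr/bin/env bash
# Hypothetical .buildkite/pipeline.sh: emit one command step per file in test/.
set -eu

pipeline="steps:"
for test_file in test/*; do
  # Skip the unexpanded glob when test/ is missing or empty
  [ -e "$test_file" ] || continue
  pipeline="$pipeline
  - command: \"tests/run ${test_file}\""   # hypothetical test runner
done

echo "$pipeline"
```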
To use this script, you'd save it to .buildkite/pipeline.sh inside your repository, ensure it is executable, and then update your pipeline upload step to use the new script:
.buildkite/pipeline.sh | buildkite-agent pipeline upload
When the build is running, it will execute the script and pipe the output to the pipeline upload command. The upload command will insert the steps from the script into the build immediately after the upload step.
In the below pipeline.yml example, when the build runs it will execute the .buildkite/pipeline.sh script, and the test steps from the script will be added to the build before the wait step and command step. After the test steps have run, the wait and command step will run.
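A sketch of such a pipeline.yml (the final command is illustrative):

```yaml
steps:
  - label: ":pipeline: Upload"
    command: .buildkite/pipeline.sh | buildkite-agent pipeline upload
  - wait
  - command: "echo 'Tests passed!'"   # hypothetical; runs after the uploaded test steps
```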
If you need the ability to use pipelines from a central catalog, or enforce certain configuration rules, you can either use dynamic pipelines and the pipeline upload command to make this happen, or write custom plugins and share them across your organization.
To use dynamic pipelines and the pipeline upload command, you'd make a pipeline that looks something like this:
steps:
  - command: enforce-rules.sh | buildkite-agent pipeline upload
    label: ":pipeline: Upload"
Each team defines their steps in team-steps.yml. Your templating logic is in enforce-rules.sh, which can be written in any language that can pass YAML to the pipeline upload.
In enforce-rules.sh you can add steps to the YAML, require certain versions of dependencies or plugins, or implement any other logic you can program. Depending on your use case, you might want to get enforce-rules.sh from an external catalog instead of committing it to the team repository.
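As a sketch, an enforce-rules.sh written in Bash might prepend a mandatory step before passing the team's steps through (the lint step and the exact file layout are assumptions, not part of Buildkite):

```shell
#!/usr/bin/env bash
# Hypothetical enforce-rules.sh: prepend a required step to the team's steps.
set -eu

out='steps:
  - label: "Mandatory lint"
    command: "lint.sh"'              # hypothetical required step

if [ -f team-steps.yml ]; then
  # Append the team's own steps, dropping their leading "steps:" line
  out="$out
$(tail -n +2 team-steps.yml)"
fi

echo "$out"
```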
When creating a new pipeline, you can take a shortcut if you want to set up the new pipeline with the same steps as an existing pipeline.
Using the ?clone URL parameter, you can prefill the new pipeline page with the steps from another pipeline. It will not copy any other fields, such as environment variables or repository information.
The below example URL will copy the steps from the 'My Llamas Pipeline' into the New Pipeline page:
To run command steps only on specific agents:
- In the agent configuration file, tag the agent
- In the pipeline, set the agents property on the command step
For example, to run commands only on agents running on macOS:
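For instance, you could tag the agent with os=macos in its buildkite-agent.cfg (tags="os=macos"), and then target that tag from a step (the command is illustrative):

```yaml
steps:
  - command: "scripts/build-mac-app.sh"  # hypothetical build script
    agents:
      os: "macos"                        # only agents tagged os=macos will run this step
```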
You can also upload pipelines from the command line using the buildkite-agent command line tool. See the buildkite-agent pipeline documentation for a full list of the available parameters.