The Buildkite Agent's start command is used to manually start an agent and register it with Buildkite.
Starting an agent
buildkite-agent start [options...]
When a job is ready to run, the agent calls the "bootstrap-script" and passes it all the environment variables required for the job to run. This script is responsible for checking out the code and running the actual build script defined in the pipeline.
The agent will run any jobs within a PTY (pseudo terminal) if available.
$ buildkite-agent start --token xxx
--config value  Path to a configuration file
--name value  The name of the agent
--priority value  The priority of the agent (higher priorities are assigned work first)
--acquire-job value  Start this agent and only run the specified job, disconnecting after it's finished
--disconnect-after-job  Disconnect the agent after running a job
--disconnect-after-idle-timeout value  If no jobs have come in for the specified number of seconds, disconnect the agent (default: 0)
--cancel-grace-period value  The number of seconds a canceled or timed out job is given to gracefully terminate and upload its artifacts (default: 10)
--shell value  The shell command used to interpret build commands, e.g. /bin/bash -e -c (default: "/bin/bash -e -c")
--tags value  A comma-separated list of tags for the agent (e.g. "linux" or "mac,xcode=8")
--tags-from-host  Include tags from the host (hostname, machine-id, os)
--tags-from-ec2-meta-data value  Include the default set of host EC2 meta-data as tags (instance-id, instance-type, ami-id, and instance-life-cycle)
--tags-from-ec2-meta-data-paths value  Include additional tags fetched from EC2 meta-data via tag & path suffix pairs, e.g. "tag_name=path/to/value"
--tags-from-ec2-tags  Include the host's EC2 tags as tags
--tags-from-gcp-meta-data value  Include the default set of host Google Cloud instance meta-data as tags (instance-id, machine-type, preemptible, project-id, region, and zone)
--tags-from-gcp-meta-data-paths value  Include additional tags fetched from Google Cloud instance meta-data via tag & path suffix pairs, e.g. "tag_name=path/to/value"
--tags-from-gcp-labels  Include the host's Google Cloud instance labels as tags
--wait-for-ec2-tags-timeout value  The amount of time to wait for tags from EC2 before proceeding (default: 10s)
--wait-for-ec2-meta-data-timeout value  The amount of time to wait for meta-data from EC2 before proceeding (default: 10s)
--wait-for-gcp-labels-timeout value  The amount of time to wait for labels from GCP before proceeding (default: 10s)
--git-clone-flags value  Flags to pass to the "git clone" command (default: "-v")
--git-clean-flags value  Flags to pass to the "git clean" command (default: "-ffxdq")
--git-fetch-flags value  Flags to pass to the "git fetch" command (default: "-v --prune")
--git-clone-mirror-flags value  Flags to pass to the "git clone" command when used for mirroring (default: "-v")
--git-mirrors-path value  Path to where mirrors of git repositories are stored
--git-mirrors-lock-timeout value  Seconds to lock a git mirror during clone; should exceed your longest checkout (default: 300)
--bootstrap-script value  The command that is executed for bootstrapping a job; defaults to the bootstrap sub-command of this binary
--build-path value  Path to where the builds will run from
--hooks-path value  Directory where the hook scripts are found
--plugins-path value  Directory where the plugins are saved to
--timestamp-lines  Prepend timestamps on each line of output
--health-check-addr value  Start an HTTP server on this addr:port that returns whether the agent is healthy; disabled by default
--no-pty  Do not run jobs within a pseudo terminal
--no-ssh-keyscan  Don't automatically run ssh-keyscan before checkout
--no-command-eval  Don't allow this agent to run arbitrary console commands, including plugins
--no-plugins  Don't allow this agent to load plugins
--no-plugin-validation  Don't validate plugin configuration and requirements
--no-local-hooks  Don't allow local hooks to be run from checked out repositories
--no-git-submodules  Don't automatically checkout git submodules
--metrics-datadog  Send metrics to DogStatsD for Datadog
--metrics-datadog-host value  The DogStatsD instance to send metrics to via UDP (default: "127.0.0.1:8125")
--metrics-datadog-distributions  Use Datadog Distributions for Timing metrics
--log-format value  The format to use for the logger output (default: "text")
--spawn value  The number of agents to spawn in parallel (default: 1)
--cancel-signal value  The signal to use for cancellation (default: "SIGTERM")
--redacted-vars value  Pattern of environment variable names containing sensitive values (default: "*_PASSWORD", "*_SECRET", "*_TOKEN", "*_ACCESS_KEY", "*_SECRET_KEY")
--tracing-backend value  The name of the tracing backend to use
--token value  Your account agent token
--endpoint value  The Agent API endpoint
--no-http2  Disable HTTP2 when communicating with the Agent API
--debug-http  Enable HTTP debug mode, which dumps all request and response bodies to the log
--no-color  Don't show colors in logging
--debug  Enable debug mode
--experiment value  Enable experimental features within the buildkite-agent
--profile value  Enable a profiling mode: cpu, memory, mutex or block
--tags-from-ec2  Include the host's EC2 meta-data as tags (instance-id, instance-type, and ami-id)
--tags-from-gcp  Include the host's Google Cloud instance meta-data as tags (instance-id, machine-type, preemptible, project-id, region, and zone)
Each agent has tags (in 2.x we called this metadata) which can be used to group and target the agents in your build pipelines. This way you're free to dynamically scale your agents and target them based on their capabilities rather than maintaining a static list.
You can set an agent's tags in its configuration file:
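For example, in the agent configuration file (typically buildkite-agent.cfg; the tag values here are illustrative):

```
tags="docker=true,ruby2=true"
```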
or with the --tags command line flag:
buildkite-agent start --tags "docker=true" --tags "ruby2=true"
or with the BUILDKITE_AGENT_TAGS environment variable:
env BUILDKITE_AGENT_TAGS="docker=true,ruby2=true" buildkite-agent start
Once you've started agents with tags you can target them in the build pipeline using agent query rules.
Here's an example of targeting agents that are running with the tag postgres and a specific value. You can also match any agent with a postgres tag by omitting the value after the = sign, or by using *.
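For instance, a step can match any agent that has a postgres tag at all (the command shown is a placeholder):

```
steps:
  - command: "./run-database-tests.sh"
    agents:
      postgres: "*"
```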
Partial wildcard matching (for example, postgres=*1.9) is not yet supported.
Setting agent defaults
Use a top-level agents block to set defaults for all steps in a pipeline.
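A minimal sketch, assuming agents tagged with docker=true (both steps inherit the default target):

```
agents:
  docker: "true"

steps:
  - command: "docker build ."
  - command: "docker run tests"
```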
The queue tag
The queue tag works differently from other tags, and can be used for isolating jobs and agents. See the agent queues documentation for more information about using queues.
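For example, to start an agent on a specific queue (the queue name is illustrative):

```
buildkite-agent start --tags "queue=deploy"
```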
Sourcing tags from Amazon Web Services
You can load an Agent's tags from the underlying Amazon EC2 instance using --tags-from-ec2-tags for the instance tags, and --tags-from-ec2 to load the EC2 meta-data (for example, instance ID and instance type).
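For example, combining both flags when starting the agent:

```
buildkite-agent start --tags-from-ec2 --tags-from-ec2-tags
```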
Sourcing tags from Google Cloud
You can load an Agent's tags from the underlying Google Cloud instance using --tags-from-gcp-labels for the instance labels, and --tags-from-gcp to load the instance meta-data (for example, instance ID and machine type).
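For example, combining both flags when starting the agent:

```
buildkite-agent start --tags-from-gcp --tags-from-gcp-labels
```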
Run a job on the agent that uploaded it (also known as node affinity)
You can configure your agent and your pipeline steps so that the steps run on the same agent that performed the pipeline upload. This is sometimes referred to as "node affinity", but note that what we describe here does not involve Kubernetes (where the term is more widely used).
Normally, we recommend against doing this. The usual practice is to allow jobs to run on whichever agent is available, or to target according to specific criteria (for example, you might want certain jobs to run on a particular operating system). Targeting a specific agent can cause reliability issues (the job can't run if the agent is offline), and can result in work being unevenly distributed between agents (which is inefficient).
First, set the agent hostname tag.
You can do this when starting the agent. This uses the system hostname:
buildkite-agent start --tags "hostname=`hostname`"
Or you can add it to the agent config file, along with any other tags:
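A sketch of the configuration file approach (the hostname value is a placeholder; as far as we know the configuration file does not perform shell substitution, so the literal hostname is written out or templated by your provisioning tooling):

```
tags="hostname=agents-computer-hostname,docker=true"
```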
Then, make sure you are using pipeline upload to upload a pipeline.yml. In Buildkite's YAML steps editor:
steps:
  - command: "buildkite-agent pipeline upload"
Finally, in your pipeline.yml, set hostname: "$BUILDKITE_AGENT_META_DATA_HOSTNAME" in the agents block of any commands that you want to stick to the agent that uploaded the pipeline.yml. For example:
- command: "I will stick!"
  agents:
    hostname: "$BUILDKITE_AGENT_META_DATA_HOSTNAME"
- command: "I might not"
When Buildkite uploads the pipeline, $BUILDKITE_AGENT_META_DATA_HOSTNAME is replaced with the agent's hostname tag value. In effect, the previous example becomes:
- command: "I will stick!"
  agents:
    hostname: "agents-computer-hostname"
- command: "I might not"
This means the first step in the example can only run on an agent with the hostname "agents-computer-hostname". This is the hostname of the agent that uploaded the job. The second step may run on the same agent, or a different one.
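The interpolation step can be sketched as a plain text substitution (an illustration only, not the agent's actual implementation; the hostname value is taken from the example above):

```shell
# Simulate the interpolation Buildkite performs at pipeline upload time.
# The hostname value is illustrative.
BUILDKITE_AGENT_META_DATA_HOSTNAME="agents-computer-hostname"

# A step fragment as it appears in pipeline.yml (single quotes keep the
# variable reference literal, as it would be in the file on disk).
step='hostname: "$BUILDKITE_AGENT_META_DATA_HOSTNAME"'

# Replace the variable reference with the agent's hostname tag value.
interpolated=$(printf '%s\n' "$step" \
  | sed "s/\$BUILDKITE_AGENT_META_DATA_HOSTNAME/$BUILDKITE_AGENT_META_DATA_HOSTNAME/")

echo "$interpolated"
```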
Run a single job
The --acquire-job value option starts an agent that runs only the specified job, then stops. Instead of waiting for work, the agent sends a request to Buildkite to check whether it can acquire (self-assign and accept) the job. Once the agent acquires the job, it runs it, then stops when the job is complete.
Getting the job ID for a single job
The value passed to --acquire-job is the job ID. There are several ways to find it:
- Using the Build API's Get a build endpoint. This returns build information, including all jobs in the build.
- Through the GraphQL API.
- In the BUILDKITE_JOB_ID build environment variable.
- In outbound job event webhooks.
- Using the GUI: select a job, and the job ID is the final value in the URL.
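As a sketch of the API approach, here is how job IDs might be pulled out of a Get a build response with standard shell tools; the JSON shape shown is a simplified assumption about the real payload:

```shell
# A simulated (and heavily simplified) "Get a build" response body.
build_json='{"number": 42, "jobs": [{"id": "job-1"}, {"id": "job-2"}]}'

# Extract each job's "id" field using grep and sed.
job_ids=$(printf '%s\n' "$build_json" \
  | grep -o '"id": *"[^"]*"' \
  | sed 's/.*"id": *"\([^"]*\)".*/\1/')

echo "$job_ids"
```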
When to use --acquire-job
Normally, you don't set up an agent to run a specific job. Instead, you'll have a pool of agents running, waiting for Buildkite to send jobs to them.
--acquire-job is useful if you want to create your own scheduler to run a specific job.
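For example (the token and job ID are placeholders):

```
buildkite-agent start --token xxx --acquire-job "<job-id>"
```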