# Buildkite Documentation > Buildkite is a platform for running fast, secure, and scalable continuous integration pipelines on your own infrastructure. --- ## Pipelines ### Pipelines URL: https://buildkite.com/docs/pipelines --- ### Overview URL: https://buildkite.com/docs/pipelines --- ### Advantages of Buildkite Pipelines URL: https://buildkite.com/docs/pipelines/advantages/buildkite-pipelines #### Advantages of Buildkite Pipelines Buildkite Pipelines is a hybrid CI/CD platform that orchestrates builds through a managed control plane while execution happens on infrastructure you control. This page describes how Buildkite Pipelines differs from other CI/CD tools and why teams choose it. ##### Why teams switch to Buildkite Pipelines Most CI/CD systems bundle managed infrastructure, features, and opinionated workflows into a single platform. Buildkite Pipelines takes a different approach and provides composable building blocks that let [platform teams](/docs/pipelines/best-practices/platform-controls) design exactly the workflows they need, while developers retain the flexibility to move fast. See [case studies](https://buildkite.com/resources/case-studies/) for how engineering organizations use Buildkite Pipelines at scale. ###### Core differentiators - **Hybrid architecture.** Mix self-hosted and Buildkite hosted agents in the same pipeline — run security-sensitive jobs on your own infrastructure and offload everything else to fully managed runners. - **Unlimited concurrency.** Scale from a handful of agents to 100,000+ with no concurrency restrictions. - **Dynamic pipelines.** Generate and modify pipeline steps at runtime using YAML, the [Buildkite SDK](/docs/pipelines/configure/dynamic-pipelines/sdk), or any language. - **Extensibility.** Customize behavior through integrations, [plugins](/docs/pipelines/integrations/plugins), and agent [hooks](/docs/agent/hooks). 
- **Security by design.** Agents are [open source](https://github.com/buildkite/agent), poll for work over HTTPS, and support [pipeline signing](/docs/agent/self-hosted/security/signed-pipelines). - **Predictable pricing.** Concurrency- or time-based billing with no surprise charges or credit limits. Whether you're comparing Buildkite Pipelines to [GitHub Actions](/docs/pipelines/advantages/buildkite-vs-gha), [CircleCI](/docs/pipelines/advantages/buildkite-vs-circleci), [Jenkins](/docs/pipelines/advantages/buildkite-vs-jenkins), [GitLab](/docs/pipelines/advantages/buildkite-vs-gitlab), or others, these differentiators hold true. ##### Best-in-class agents for your use case Buildkite Pipelines is compute-agnostic — the platform handles orchestration, but execution happens wherever you need it. Buildkite agents can run on [Buildkite hosted infrastructure](/docs/agent/buildkite-hosted), your [Amazon](/docs/agent/self-hosted/aws) or [Google](/docs/agent/self-hosted/gcp) cloud, your [Kubernetes](/docs/agent/self-hosted/agent-stack-k8s) cluster, or your [own servers](/docs/agent/self-hosted/install). ###### Buildkite hosted agents [Buildkite hosted agents](/docs/agent/buildkite-hosted) provide fully managed build infrastructure for teams that want fast, ephemeral runners without maintaining their own agent fleet: - Latest generation Mac and AMD Zen-based hardware with a proprietary low-latency virtualization layer. - Agents provision on demand and are destroyed after each job, providing clean builds with hypervisor-level [isolation](/docs/pipelines/security). - Per-second billing with no minimum charges and no rounding. - [Caching](/docs/agent/buildkite-hosted/cache-volumes#container-cache-volumes), [git mirroring](/docs/agent/buildkite-hosted/cache-volumes#git-mirror-volumes), and [remote Docker builders](/docs/agent/buildkite-hosted/linux/remote-docker-builders) included at no additional cost. - Jobs dispatch within seconds, with consistently low queue times. 
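As an illustration of the hosted-agent features above, a step can target hosted infrastructure and opt into a cache volume directly from its YAML definition. This is a sketch — the queue name is hypothetical, and the `cache` attribute shown is the hosted-agent cache-volume syntax:

```yaml
steps:
  - label: ":node: Test"
    command: npm ci && npm test
    agents:
      queue: hosted-linux    # hypothetical hosted-agent queue name
    cache:                   # hosted-agent cache volume, persisted between jobs
      paths:
        - node_modules
```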
###### Buildkite self-hosted agents [Buildkite self-hosted agents](/docs/agent/self-hosted) give teams full control over their build infrastructure: - Run agents on [Linux](/docs/agent/self-hosted/install/linux), [macOS](/docs/agent/self-hosted/install/macos), [Windows](/docs/agent/self-hosted/install/windows), [Docker](/docs/agent/self-hosted/install/docker), or any platform that fits your workload, including GPUs and custom hardware. - Autoscale with the [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) or the [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s), or manage capacity yourself. - Customize every stage of the [job lifecycle](/docs/agent/hooks#job-lifecycle-hooks) with agent and repository [hooks](/docs/agent/hooks), enforce security policies, and manage [secrets](/docs/pipelines/security/secrets/managing) within your own network. - Source code and secrets never leave your infrastructure. Agents clone repositories directly within your network and poll for work over HTTPS with no inbound ports required. ##### Performance and scale As engineering organizations grow, CI often becomes the point of friction — builds queue, feedback slows, and developers context-switch while waiting. Buildkite Pipelines treats performance as a first-class concern so that CI keeps pace with the teams it serves. ###### Speed and parallelization Fast feedback loops come from deep [parallelization](/docs/pipelines/best-practices/parallel-builds) and [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) that skip unnecessary work. Small per-build time savings compound across thousands of daily builds. - Handle large [monorepo](/docs/pipelines/best-practices/working-with-monorepos) structures efficiently with dynamic pipeline generation that selectively builds only what changed. 
- Match compute to workload using agent [queues](/docs/agent/queues) and [tags](/docs/agent/cli/reference/start#setting-tags), dedicating fast agents to critical pipelines and smaller agents to less demanding tasks. - Identify where time is spent — job overhead, startup latency, unnecessary steps — and use that insight to drive improvements like [caching](/docs/pipelines/best-practices/caching) and faster bootstrapping. ###### AI workflows AI-assisted development puts more pressure on CI/CD systems. When developers ship more code faster, CI must be able to absorb spikes in build volume from AI-generated code without becoming the bottleneck. Buildkite Pipelines provides [predictable behavior](/docs/pipelines/architecture) and a structured environment that scales with AI-driven workloads. - Add more agents as build volume grows — there are no [concurrency](/docs/pipelines/configure/workflows/controlling-concurrency) caps or queue delays as Buildkite Pipelines scales from small teams to hundreds of thousands of concurrent agents. - Run AI/ML workloads on GPUs, TPUs, and custom hardware that don't fit a traditional CI shape. - Connect AI coding agents to pipelines through the [Buildkite MCP server](/docs/apis/mcp-server) with precise, cached context that stays accurate and token-efficient. ###### Dynamic pipelines [Dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) generate and modify pipeline steps at runtime based on code changes, test results, or any custom logic. Start with [YAML pipelines](/docs/pipelines/configure/defining-steps), and when you need more, write pipelines in actual code with the [Buildkite SDK](/docs/pipelines/configure/dynamic-pipelines/sdk), which supports Go, Python, TypeScript, Ruby, and C#. - Fan out tests only after builds succeed, skip unnecessary steps based on file changes, or generate deployment steps based on what actually changed. 
- Upload new steps mid-execution, retry a specific failed step without restarting the entire pipeline, or adjust the remaining execution path based on what happened earlier in the build. - Build reusable abstractions and dynamic workflows that adapt at runtime. - Because pipeline generation is code, you can test your workflow logic the same way you test any other software — with unit tests, code review, and version control. ##### Developer experience With fast feedback, clear failure messages, and transparent logs, Buildkite Pipelines keeps developers focused on code instead of spending time on debugging CI. The Buildkite Pipelines interface provides immediate visibility into pipeline behavior and system health through rich build [annotations](/docs/pipelines/configure/annotations), integrated [test results](/docs/test-engine), and transparent failure information. - [Log output](/docs/pipelines/configure/managing-log-output) renders as real terminal output with full ANSI color support, preserving your test framework's formatting, color-coded diffs, and structured output. - Configurable [log grouping](/docs/pipelines/configure/managing-log-output#grouping-log-output) (`---`, `+++`, `~~~`) organizes output into [collapsible sections](/docs/pipelines/configure/managing-log-output#grouping-log-output-collapsed-groups). - Build steps can write rich Markdown content directly into the [build page](/docs/pipelines/build-page) using [annotations](/docs/agent/cli/reference/annotate), surfacing test failure summaries, coverage reports, or deploy links. - Builds running on your own infrastructure let you SSH into the machine, inspect the environment, and reproduce failures locally. - [Buildkite Test Engine](/docs/test-engine) detects [flaky tests](/docs/test-engine/glossary#flaky-test), automatically [mutes](/docs/test-engine/test-suites/test-state-and-quarantine#automatic-quarantine) unreliable ones, and assigns follow-up, so teams get a clean signal from their test suites. 
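The log grouping and annotation features described above can be combined in a single step. A minimal sketch, with placeholder commands — `---` opens a collapsed log group, `+++` opens an expanded one, and `buildkite-agent annotate` writes Markdown to the build page:

```yaml
steps:
  - label: ":test_tube: Tests"
    command: |
      echo "--- :package: Installing dependencies"  # collapsed log group
      npm ci
      echo "+++ :test_tube: Running tests"          # expanded log group
      npm test
      buildkite-agent annotate "Tests passed :white_check_mark:" --style success --context tests
```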
##### Extensibility and integrations Buildkite Pipelines fits into your existing toolchain rather than replacing it, and gives you multiple ways to customize pipeline behavior without forking or patching the platform. ###### Integrate with your existing tools Buildkite Pipelines specializes in CI/CD rather than bundling source code management, project planning, security scanning, and deployment monitoring into a single product. Your integration options include: - Source control: [GitHub](/docs/pipelines/source-control/github), [GitLab](/docs/pipelines/source-control/gitlab), [Bitbucket](/docs/pipelines/source-control/bitbucket). - Observability: [Datadog](/docs/pipelines/integrations/observability/datadog), [Honeycomb](/docs/pipelines/integrations/observability/honeycomb), [Amazon EventBridge](/docs/pipelines/integrations/observability/amazon-eventbridge), [OpenTelemetry](/docs/pipelines/integrations/observability/opentelemetry). - Notifications: [Slack](/docs/pipelines/integrations/notifications/slack), [PagerDuty](/docs/pipelines/integrations/notifications/pagerduty), [CCMenu and CCTray](/docs/pipelines/integrations/notifications/cc-menu), and [notification plugins](/docs/pipelines/integrations/notifications/plugins). - Secrets management: [HashiCorp Vault or AWS Secrets Manager](/docs/pipelines/security/secrets/managing). ###### Buildkite plugins [Buildkite plugins](/docs/pipelines/integrations/plugins) add reusable functionality to pipeline steps. Plugins are version-pinned and run on your agents, so you control exactly what executes in your environment. - Browse the [plugins directory](https://buildkite.com/resources/plugins/) to find open source plugins maintained by Buildkite and the community. - [Write your own](/docs/pipelines/integrations/plugins/writing) plugins to encapsulate common patterns and share them across teams. 
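Plugins are referenced per step and pinned to a version. A sketch using the open source Docker plugin to run a step's command inside a container — the version tag and image are illustrative, so pin to a current release in practice:

```yaml
steps:
  - label: ":docker: Test in container"
    command: npm test
    plugins:
      - docker#v5.0.0:     # version pin is illustrative — use a current release
          image: node:20   # container image the command runs in
```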
###### Hooks [Hooks](/docs/agent/hooks) let you customize agent behavior and enforce standards at every stage of the [job lifecycle](/docs/agent/hooks#job-lifecycle-hooks): - Manage [secrets](/docs/pipelines/security/secrets/managing) - Enforce security policies - Modify checkout behavior - Standardize environments across all pipelines ###### Pipeline templates [Pipeline templates](/docs/pipelines/governance/templates) let administrators of Enterprise-plan Buildkite organizations define standard step configurations that can be applied across all pipelines in an organization. Use pipeline templates to enforce consistent build patterns, reduce duplication, and give teams a starting point that follows established best practices. ##### Security and compliance Buildkite Pipelines separates the control plane from the execution environment. The control plane handles orchestration and receives only build metadata — job status, logs, and timing data. Source code, secrets, and build artifacts remain on infrastructure you control. For a full security overview, see [Pipelines security](/docs/pipelines/security) and [Security best practices](/docs/pipelines/best-practices/security-controls). ###### Clusters [Clusters](/docs/pipelines/security/clusters) create isolated boundaries between agents, queues, and pipelines within a single Buildkite organization. Use clusters to let teams self-manage their own agent pools, restrict which pipelines can run on which agents, and manage [secrets](/docs/pipelines/security/secrets/buildkite-secrets) within a defined scope. ###### Data privacy and residency Organizations with data residency requirements can control where agents run and where build data is stored. Agents clone repositories directly within your network, so code never transits through Buildkite infrastructure. 
For stricter security postures, agents can be locked down further with network controls and [signed pipelines](/docs/pipelines/advantages/buildkite-pipelines#security-and-compliance-pipeline-signing). ###### Pipeline signing In Buildkite Pipelines, the agent itself can reject tampered instructions rather than relying solely on access controls. [Pipeline signing](/docs/agent/self-hosted/security/signed-pipelines) lets agents cryptographically verify that the steps they run haven't been tampered with, protecting against scenarios where the control plane or an intermediary is compromised. ##### Predictable costs Buildkite Pipelines pricing is based on agent [concurrency](/docs/pipelines/configure/workflows/controlling-concurrency), typically using the 95th percentile, so short bursts don't inflate costs. Learn more in [Pricing](https://buildkite.com/pricing/). - **No surprise bills.** No per-minute charges, runner-minute overages, or credit allocations to exhaust. - **Bring your own compute.** Use [Buildkite hosted agents](/docs/agent/buildkite-hosted) with per-second billing for managed infrastructure, or run on your own infrastructure — including spot instances or spare capacity — to optimize costs. - **Developer time matters more than CI minutes.** CI that looks free on paper can be expensive when slow pipelines keep engineers waiting. Buildkite Pipelines is designed to reduce cycle time and eliminate queuing, so the real cost of CI is measured in throughput gained across the engineering organization. ##### Support All Buildkite plans include access to support from engineers who can advise on implementation and troubleshoot complex configurations. 
Enterprise Premium Support adds: - 24/7 emergency pager and live chat support - Guaranteed SLAs with priority response times - A dedicated technical account manager - 99.95% uptime SLA ##### Migrating to Buildkite Pipelines Buildkite provides [migration guides](/docs/pipelines/migration) to help teams move from their existing CI/CD system. The following pages explore the advantages of migrating from specific systems with side-by-side comparisons: - **[GitHub Actions](/docs/pipelines/advantages/buildkite-vs-gha):** Move beyond static workflows, concurrency caps, and multi-tenant reliability issues. Workflow files translate step-for-step, and self-hosted Buildkite agents replace GitHub-hosted runners. - **[CircleCI](/docs/pipelines/advantages/buildkite-vs-circleci):** Replace credit-based billing, concurrency caps, and static config with dynamic pipelines, predictable pricing, and full infrastructure control. CircleCI orbs map to Buildkite plugins, and workflows translate to Buildkite steps. - **[Jenkins](/docs/pipelines/advantages/buildkite-vs-jenkins):** Eliminate controller maintenance, plugin conflicts, and painful upgrades while keeping infrastructure control. Jenkinsfiles map directly to Buildkite pipeline YAML, and the agent model replaces the controller/node topology. - **[GitLab](/docs/pipelines/advantages/buildkite-vs-gitlab):** Replace rigid stage-based pipelines and runner-minute limits with flexible, dynamic workflows. GitLab's `.gitlab-ci.yml` stages map to Buildkite steps, with the added ability to modify those steps at runtime. ##### Get started [Sign up](https://buildkite.com/signup) to try Buildkite Pipelines — hosted agents are available immediately, with no infrastructure setup required. Or follow the [getting started guide](/docs/pipelines/getting-started) to connect your own agents. 
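As a concrete starting point, a first pipeline is typically a `.buildkite/pipeline.yml` along these lines — the `make` commands are placeholders for your own build and deploy scripts:

```yaml
steps:
  - label: ":hammer: Build"
    command: make build    # placeholder build command
  - wait                   # wait for earlier steps to pass before continuing
  - label: ":rocket: Deploy"
    command: make deploy   # placeholder deploy command
    branches: main         # only run this step on the main branch
```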
--- ### Advantages over GitHub Actions URL: https://buildkite.com/docs/pipelines/advantages/buildkite-vs-gha #### Advantages of migrating from GitHub Actions GitHub Actions is a workflow automation tool built into GitHub. Buildkite Pipelines takes a different approach: instead of bundling CI/CD as a platform feature, it focuses on doing CI/CD exceptionally well. GitHub Actions is easy to start with as it is natively integrated into GitHub, making it a good choice for small teams. As organizations scale, however, its limitations become apparent: hard concurrency caps, static workflows, unpredictable costs, and reliability issues. Buildkite Pipelines is designed from the ground up for scale, speed, and reliability. ##### Scaling and limits GitHub Actions imposes a 256-job matrix cap per workflow run and self-hosted runners require manual provisioning with slow startup times. Buildkite Pipelines supports 100,000+ concurrent agents with no artificial limits. Agents are lightweight software requiring only an outbound HTTPS connection, and turnkey autoscaling is available through the [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws), [Elastic CI Stack for GCP](/docs/agent/self-hosted/gcp/elastic-ci-stack), and [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s). ##### Dynamic pipelines and static workflows GitHub Actions workflows are static once triggered. To add jobs based on what changed, you must dispatch new workflows or pre-declare everything up front, leading to wasted compute. Also, you can only nest workflow calls up to 10 levels of depth, and secret passing must be explicit instead of allowing each pipeline to define what it needs. In Buildkite Pipelines, with the help of [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines), you can generate or modify steps at runtime based on changed files, repository state, or any custom logic. 
This way, you can fan out tests only after builds succeed, skip unnecessary steps, or generate deployment steps based on what actually changed. ##### Better reliability GitHub Actions experiences frequent reliability issues that can block entire organizations. Buildkite Pipelines maintains strong uptime, and since builds run on your infrastructure, you're not affected by multi-tenant cloud environment problems or resource contention from other tenants. ##### High-performance hosted machines [Buildkite hosted agents](/docs/pipelines/hosted-agents) offer the newest Apple silicon available for CI, so mobile teams can test on the same hardware their users run. Persistent cache volumes on NVMe (Linux) and disk images (macOS) retain dependencies, Git mirrors, and Docker layers for up to 14 days. GitHub caches are limited to 7 days and 10 GB per repository, restored from object storage for each job. ##### Centralized visibility GitHub Actions is distributed across repositories with no central view for governance, guardrails, or standardization at scale. Buildkite Pipelines provides a unified dashboard to monitor build health across your entire organization. ##### Monorepo performance GitHub has no native path-based filtering for dynamic step injection. Buildkite handles large [monorepos](/docs/pipelines/best-practices/working-with-monorepos) efficiently through [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) that analyze dependencies and build only what changed, with the [`if_changed` attribute](/docs/pipelines/configure/step-types/command-step#agent-applied-attributes) for declarative path filtering. ##### Test optimization GitHub has no native test intelligence—teams must rely on custom scripts or marketplace actions. [Buildkite Test Engine](/docs/test-engine) provides intelligent test splitting that balances suites dynamically using historical runtime data, automatic flaky test retries, flaky test quarantine, and rich analytics. 
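The dynamic-pipeline pattern discussed on this page typically starts from one small static step that generates the rest of the pipeline at runtime. The generator script path below is the conventional one, but any executable that emits pipeline YAML works:

```yaml
steps:
  - label: ":pipeline: Generate pipeline"
    # The script inspects changed files, then prints the steps to run;
    # piping its output to `pipeline upload` injects them into this build.
    command: .buildkite/pipeline.sh | buildkite-agent pipeline upload
```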
##### Predictable pricing GitHub Actions uses per-minute billing that can lead to unexpected costs as teams grow. In Buildkite Pipelines, [pricing](https://buildkite.com/pricing/) is based on agent concurrency using the 95th percentile, so short bursts don't inflate costs. You can also use your own compute, including spot instances, to reduce costs further. ##### Job routing and priorities Buildkite Pipelines provides sophisticated job routing with queues, priorities, and concurrency controls. Urgent hotfixes can move ahead of long test suites, and risky deploys don't collide. GitHub Actions lacks this level of control. ##### Security and compliance With Buildkite Pipelines, your code and secrets never leave your environment. The least-privilege GitHub App integration means Buildkite never sees your source code. You maintain full control over your build infrastructure and security posture, which is critical for organizations with strict compliance requirements. GitHub's hosted runners require code and secrets to pass through their infrastructure. ##### Developer experience Buildkite provides rich logging with colors, links, and emojis that make build output easier to parse. The JUnit Annotate plugin surfaces failed tests inline for faster triage. Cross-repository triggers enable automatic build choreography across multiple repositories. ##### Migration path You can try out the [Buildkite pipeline converter](/docs/pipelines/migration/pipeline-converter) to see how your existing GitHub Actions pipelines might look converted to Buildkite Pipelines. To start converting your GitHub Actions pipelines to Buildkite Pipelines, follow the instructions in [Migrate from GitHub Actions](/docs/pipelines/migration/from-githubactions), then migrate pipeline by pipeline. 
The key changes you'll need to be mindful of: - `jobs` become `steps` with `key` attributes - `needs` becomes `depends_on` - `runs-on` maps to `agents` queues - `actions/checkout` is removed since Buildkite Pipelines checks out code automatically. If you would like to receive assistance in migrating from GitHub Actions to Buildkite Pipelines, please reach out to the Buildkite Support Team at [support@buildkite.com](mailto:support@buildkite.com). --- ### Advantages over CircleCI URL: https://buildkite.com/docs/pipelines/advantages/buildkite-vs-circleci #### Advantages of migrating from CircleCI CircleCI is a hosted CI/CD platform built around a fixed hierarchy of organizations, VCS connections, and projects, where each project maps one-to-one to a repository. CircleCI works well for small teams getting started quickly, but its credit-based pricing, plan-based concurrency caps, and static configuration model can become obstacles as teams and repositories grow. [Buildkite Pipelines](/docs/pipelines) takes a different approach as the pipelines are decoupled from repositories, there are no concurrency limits, and the usage costs are predictable. ##### Pipeline structure and flexibility CircleCI projects using the GitHub App integration can define multiple pipelines per project, each with its own configuration file and trigger. However, pipelines still live within the org → project → repository structure, and cross-repository triggering requires additional setup through separate trigger sources. Buildkite Pipelines treats pipelines as decoupled, runtime-programmable units that are not tied to a specific repository. You can create multiple pipelines per repository, trigger pipelines across repositories, or run pipelines independently of any repository, letting teams model CI/CD around how they actually build and ship software. 
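Cross-repository and multi-pipeline setups like those described above are built from trigger steps. A sketch — the downstream pipeline slug is hypothetical:

```yaml
steps:
  - trigger: deploy-service   # hypothetical slug of the downstream pipeline
    label: ":rocket: Trigger deploy"
    build:
      branch: main
      message: "Deploy triggered by ${BUILDKITE_PIPELINE_SLUG}"
```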
##### Scaling and limits CircleCI's performance and throughput are constrained by plan-based concurrency, queued capacity, and shared-platform limits. The free plan caps concurrency at 30 jobs (one for macOS), and higher plans raise that cap but still impose fixed ceilings. Even self-hosted runners are limited by plan tier. As organizations scale, these limits show up as longer queue times and slower developer feedback loops. Buildkite Pipelines scales by adding [agents](/docs/agent), without platform-imposed concurrency caps. The agent architecture is lightweight, supports 100,000+ concurrent agents, and offers turnkey autoscaling through the [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws), [Elastic CI Stack for GCP](/docs/agent/self-hosted/gcp/elastic-ci-stack), and [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s). ##### Dynamic pipelines vs. static configuration CircleCI configuration is largely static and declarative: workflows are defined up front, and once a pipeline starts, you are mostly choosing among predeclared paths. Commands, jobs, and workflows can be parameterized, which functions as a templating system, but adds cognitive load as configurations grow. While CircleCI now supports multiple pipelines per project, each individual pipeline configuration is still static. CircleCI offers two approaches to manage complexity within a configuration, both with significant tradeoffs: - **Config packing:** Split configuration across multiple files (`commands/`, `executors/`, `jobs/`, `workflows/`, `root.yml`) and run `circleci config pack` to generate a single merged config. This trades readability for modularity: tracing a single workflow may require following references across many files in the folder structure. - **Continuations:** An orb-based mechanism for selecting which YAML to continue with at runtime. Continuations are limited to a single continuation config per repository and are generally hard to reason about. 
Both approaches still require heavy use of parameters to customize different execution paths, and attempt to work around the fact that CircleCI configuration is fundamentally static. Teams end up over-specifying "just in case" jobs and conditionals, which wastes compute and increases maintenance overhead. With Buildkite Pipelines, [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) let you generate and modify steps at runtime using real code in whatever language suits your execution environment and your team's expertise. You can decide what to run based on changed files, dependency graphs, repository state, or external signals. Instead of producing a monolithic config, you can isolate concerns across multiple pipelines and generate only the steps you need. The pipeline adapts during execution rather than forcing you to predeclare every possible path. ##### Reliability and infrastructure control Both CircleCI and Buildkite Pipelines rely on a managed control plane for orchestration, and both support self-hosted runners that handle checkout and execution locally during a job. With Buildkite Pipelines, you have additional control over where supporting infrastructure lives. For example, you can direct [artifact storage](/docs/pipelines/configure/artifacts) to your own S3 bucket, and use your own persistent volumes or shared network storage for caching. This means you can reduce dependencies on vendor-managed infrastructure for concerns beyond orchestration. ##### High-performance hosted machines CircleCI provides hosted compute, but performance and cost can vary depending on the resource classes and what you need to add on top. Docker layer caching, for example, carries both a per-job credit cost and a storage cost, and all cached layers live entirely on CircleCI infrastructure with no option to redirect them elsewhere. 
Buildkite Pipelines offers flexible compute options: run on your own infrastructure using [self-hosted agents](/docs/agent/self-hosted) when that is the best option for your use case, or use [Buildkite hosted agents](/docs/agent/buildkite-hosted) when you want fully managed Linux or macOS compute. Hosted agents are designed for fast startup and isolated environments, with higher-performance options for workloads like mobile CI that benefit from modern Apple silicon. Persistent cache volumes on NVMe (Linux) and disk images (macOS) retain dependencies, Git mirrors, and Docker layers for up to 14 days. ##### Data sharing and caching CircleCI provides three built-in mechanisms for sharing data between jobs: - Caches (save and restore specific paths keyed by configurable cache keys) - Workspaces (persist and attach working directory state across jobs) - Artifacts (meant for outputs consumed outside CI). These primitives are well-integrated but live entirely on CircleCI infrastructure with no option to use your own storage. Storage for caches, workspaces, and Docker layer caching all consume paid credits. CircleCI has no built-in mechanism for sharing lightweight state between steps, such as key-value pairs generated at runtime. With Buildkite Pipelines, you control where data lives. For lightweight state sharing, [meta-data](/docs/pipelines/configure/build-meta-data) lets steps exchange key-value pairs at runtime without file-based sharing. [Artifacts](/docs/pipelines/configure/artifacts) can be stored in your own S3 bucket by setting environment variables on your agents. Caching strategies are flexible because agents run on your infrastructure, so you can use persistent volumes, shared network storage, or cache volumes on [hosted agents](/docs/agent/buildkite-hosted). You are not locked into a single vendor-managed storage model. 
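The meta-data mechanism described above can be sketched in two steps — one writes a key-value pair at runtime, a later step reads it back (the key and commands are illustrative):

```yaml
steps:
  - label: "Set version"
    command: buildkite-agent meta-data set "release-version" "1.2.$BUILDKITE_BUILD_NUMBER"
  - wait   # ensure the value is set before any step reads it
  - label: "Read version"
    command: echo "Releasing $(buildkite-agent meta-data get release-version)"
```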
##### Centralized visibility and governance CircleCI provides org-level Insights and dashboards for build performance, but has no built-in mechanism for enforcing pipeline standards or isolating groups of runners and pipelines for different teams. Buildkite Pipelines provides a unified dashboard that shows build health, queue metrics, and agent status across the entire organization. [Clusters](/docs/pipelines/security/clusters) let platform teams define isolated boundaries for agents and pipelines, and [pipeline templates](/docs/pipelines/governance/templates) enforce consistent build patterns across teams. ##### Monorepo performance CircleCI supports path filtering, but sophisticated monorepo strategies require additional scripting and configuration to correctly model cross-directory dependencies and to avoid rebuilding unaffected services. Buildkite Pipelines handles large [monorepos](/docs/pipelines/best-practices/working-with-monorepos) efficiently by making the pipeline itself programmable. [Dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) can implement dependency-aware builds and selective rebuilds, with the [`if_changed` attribute](/docs/pipelines/configure/step-types/command-step#agent-applied-attributes) for declarative path filtering. For teams that want a turnkey solution, the [Monorepo Diff plugin](https://buildkite.com/resources/plugins/monorepo-diff) watches for changes across directories and triggers the appropriate pipelines automatically. This reduces wasted work and helps keep feedback fast as repository complexity grows. ##### Orbs vs. plugins Both CircleCI orbs and Buildkite [plugins](/docs/pipelines/integrations/plugins) are versioned, open source, and can be forked and pinned. The key differences are when they run and what they can use. CircleCI resolves and expands orbs at config compilation time, before the job starts, and orbs are written almost exclusively in Bash. 
Buildkite plugins run directly on your agents as hooks during job execution, can be written in any language available on the agent, and their runtime behavior is directly auditable in the environment where they run. ##### Test optimization CircleCI has strong built-in test integration. The `store_test_results` step accepts JUnit output and provides a **Tests** tab in the UI with failed and flaky test visibility, along with tooling for test splitting when results are stored. These features are available even on basic plans. [Buildkite Test Engine](/docs/test-engine) goes further with intelligent test splitting that balances suites dynamically using historical runtime data, automatic flaky test retries, flaky test quarantine, and rich analytics across your entire organization. ##### Predictable pricing CircleCI's credit-based billing can become difficult to predict as build volume grows. Credits are consumed by compute (job minutes), storage (Docker layer caching, caches, workspaces), and users. User costs can become the main pain point: CircleCI counts anyone who commits to a connected repository as a user, not just users who log in to the UI. Each additional user beyond the plan's included count adds a significant credit cost that increases at higher plan tiers. This can make CircleCI feel reasonable for small teams but increasingly hard to justify as the team grows. Buildkite Pipelines [pricing](https://buildkite.com/pricing/) is based on agent concurrency using the 95th percentile, so occasional spikes don't inflate costs. You can also use your own compute including spot instances to reduce costs further. ##### Job routing and priorities Both platforms support job routing: CircleCI uses resource classes and self-hosted runner labels, while Buildkite Pipelines uses agent [queues](/docs/agent/queues) and tag-based matching. The difference is prioritization. 
CircleCI has no native priority system, so when an urgent fix and a long-running test suite compete for the same runner pool, there is no built-in way to let the urgent job run first. Buildkite Pipelines [priority settings](/docs/pipelines/configure/step-types/command-step#priority) let urgent jobs move ahead of lower-priority work without manual intervention. ##### Secret management CircleCI secrets are managed through contexts configured in the UI, and each job must explicitly opt into a context. There is no alternative mechanism. Buildkite Pipelines supports multiple approaches: [agent hooks](/docs/agent/hooks), Kubernetes secrets, S3, a [secrets manager](/docs/pipelines/security/secrets/managing), or the [HashiCorp Vault plugin](https://buildkite.com/resources/plugins/vault-secrets). Teams can choose the model that fits their security posture without being forced into a single UI-driven pattern. ##### Migration path Migrations from CircleCI are rarely a one-to-one YAML translation. CircleCI limitations often shape a team's CI architecture, and moving to Buildkite Pipelines is an opportunity to remove those constraints. The most effective approach is to rethink workflows around what Buildkite Pipelines makes possible: breaking apart monolithic configs into multiple pipelines, using [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) to reduce complexity, and taking advantage of flexible compute and storage. To start converting your CircleCI pipelines to Buildkite Pipelines, use the following principles: 1. Audit your current setup: document orbs, resource classes, contexts, and workflows. 1. Convert CircleCI workflows to Buildkite Pipelines steps with explicit dependencies using `depends_on` and `wait`. 1. Replace orbs with equivalent Buildkite [plugins](https://buildkite.com/resources/plugins/) or inline commands. 1. 
Map CircleCI contexts to Buildkite Pipelines [environment variables](/docs/pipelines/configure/environment-variables) or a [secrets manager](/docs/pipelines/security/secrets/managing). 1. Remove explicit `checkout` steps, since Buildkite Pipelines checks out code automatically. 1. Start with non-production pipelines and run both systems in parallel to validate results. You can try out the [Buildkite pipeline converter](/docs/pipelines/migration/pipeline-converter) to see how your existing CircleCI pipelines might look when converted to Buildkite Pipelines. If you would like assistance in migrating from CircleCI to Buildkite Pipelines, please reach out to the Buildkite Support Team at [support@buildkite.com](mailto:support@buildkite.com). --- ### Advantages over Jenkins URL: https://buildkite.com/docs/pipelines/advantages/buildkite-vs-jenkins #### Advantages of migrating from Jenkins Jenkins is the original open-source automation server that pioneered CI/CD. Buildkite Pipelines takes a different approach: instead of self-managing everything, it pairs a managed control plane with your own infrastructure to deliver the speed and reliability that modern engineering teams need. Jenkins has served the CI/CD community for over 20 years, but the architecture that enabled its flexibility creates operational challenges at scale. ##### Managed control plane Jenkins is self-hosted by default: you need to deploy, scale, secure, and upgrade your controllers yourself. When a controller is slow or down, developers are blocked. Buildkite Pipelines separates orchestration from execution: a managed SaaS control plane with agents running on your infrastructure. In Buildkite Pipelines, you can choose between self-hosted and [hosted agents](/docs/agent/buildkite-hosted). ##### Buildkite agents Jenkins upgrades are notoriously difficult, often delayed for years due to plugin compatibility risks. With Buildkite Pipelines, the control plane updates continuously.
Agent updates are also straightforward and incremental. In contrast to Jenkins, Buildkite agents are ephemeral by design: spin up, run a job, tear down. This ensures clean, reproducible builds. ##### Scaling without a central bottleneck Adding Jenkins capacity means tuning controllers and executors. Buildkite agents poll for work. Adding capacity means adding agents, with no central bottleneck. ##### Simpler pipelines Jenkins Groovy pipelines are powerful but complex, with pitfalls that can affect controller stability. Buildkite Pipelines uses YAML, which is easier to read and version-control. See more in [Pipeline design and structure](/docs/pipelines/best-practices/pipeline-design-and-structure). ##### Dynamic pipelines With the help of Buildkite [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines), you can generate or modify steps at runtime based on changed files, repository state, or any custom logic. Fan out tests only after builds succeed, skip unnecessary steps, or generate deployment steps based on what actually changed. ##### Fewer plugin dependencies Jenkins has more than 1,800 plugins of varying quality. Plugin issues can destabilize entire controllers. Buildkite Pipelines' core features are built in, and the [Buildkite plugins](/docs/pipelines/integrations/plugins) run on agents, isolating failures to individual builds. ##### Lower total cost Jenkins is free to download but requires dedicated admin teams to manage the infrastructure. Buildkite Pipelines reduces operational overhead, letting your team focus on building and delivering software. ##### Migration path You can try out the [Buildkite pipeline converter](/docs/pipelines/migration/pipeline-converter) to see how your converted Jenkins pipelines look in Buildkite Pipelines. To start converting your Jenkins pipelines to Buildkite Pipelines, follow the instructions in [Migrate from Jenkins](/docs/pipelines/migration/from-jenkins), then migrate pipeline by pipeline. 
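As a rough sketch of the shape of such a conversion (the step labels and script paths here are illustrative, not output of the converter), a pair of sequential Jenkinsfile stages becomes explicit Buildkite steps with a declared dependency:

```yaml
steps:
  - label: ":hammer: Build"    # was stage('Build') in the Jenkinsfile
    key: build
    command: "scripts/build.sh"

  - label: ":test_tube: Test"  # was stage('Test'); the ordering is now explicit
    command: "scripts/test.sh"
    depends_on: build
```

Because the dependency is declared per step rather than implied by stage order, any steps you add without a `depends_on` will run in parallel by default.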
The main challenge you might face in the migration is cultural: shifting from sequential execution and shared workspaces to Buildkite's parallel-by-default, fresh-workspace model. For help migrating from Jenkins to Buildkite Pipelines, please reach out to the Buildkite Support Team at [support@buildkite.com](mailto:support@buildkite.com). --- ### Advantages over GitLab URL: https://buildkite.com/docs/pipelines/advantages/buildkite-vs-gitlab #### Advantages of migrating from GitLab GitLab is a DevSecOps platform covering the entire software development lifecycle. Buildkite Pipelines takes a different approach: instead of doing a little bit of everything, it focuses on doing CI/CD exceptionally well. ##### Lightweight Buildkite agents vs. heavyweight runners GitLab runners are full compute units requiring specific executors (shell, Docker, Kubernetes) and complex setup with firewall rules and connectivity requirements. Most GitLab customers use hosted runners because self-hosting is complicated. Also, many GitLab users have to resort to using self-hosted GitLab runners to work around the 400-minute limit on the free plan for small organizations. Buildkite agents are lightweight software that can run anywhere with a simple outbound HTTPS connection. Multiple agents can run per CPU, and setup in Kubernetes is straightforward. Your code and builds stay in your environment by default. ##### Flexible pipelines vs. rigid stages GitLab pipelines use predefined stages (build, test, deploy) that enforce serial execution order, with additional configuration needed to enable parallel jobs. Jobs are grouped into stages and execute sequentially. Dynamic capabilities are limited to "child pipelines" that require project-level configuration. Buildkite Pipelines has no predefined stages. You can use `depends_on` and `wait` steps to build custom DAGs with full flexibility.
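A custom DAG of this kind might look like the following sketch (labels and script paths are illustrative): steps without dependencies run in parallel, `depends_on` expresses explicit ordering, and `wait` gates everything after it:

```yaml
steps:
  - label: "Build"
    key: build
    command: "scripts/build.sh"

  - label: "Integration tests"
    command: "scripts/integration-tests.sh"
    depends_on: build            # runs only after the build step completes

  - label: "Lint"
    command: "scripts/lint.sh"   # no dependency, runs in parallel with the build

  - wait                         # everything below waits for all steps above

  - label: "Deploy"
    command: "scripts/deploy.sh"
```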
[Dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) generate steps on the fly during execution based on runtime conditions, repository state, or any custom logic. ##### Better monorepo performance GitLab struggles with large monorepo structures at scale. Buildkite Pipelines handles monorepos efficiently through dynamic pipeline generation that can analyze dependencies and selectively build only what changed. ##### Simpler job routing GitLab tags require exact matches. If a job has tags `[Linux, GPU, Docker]`, a runner must have all three tags. Buildkite queues and tags offer more flexibility, allowing agents to match jobs based on various criteria without requiring exact tag matching. ##### Explicit artifact control GitLab automatically passes artifacts between stages, which can obscure state management. Buildkite Pipelines uses explicit `buildkite-agent artifact upload` and `buildkite-agent artifact download` commands, giving you clear control over what moves between steps. ##### Predictable pricing vs. runner minutes GitLab charges for runner minutes on top of user fees. This poses a risk of exceeding the monthly allocation and facing unexpected bills. You also cannot mix pricing tiers within an organization: if you want Ultimate features, every user must be on Ultimate. Buildkite Pipelines pricing is based on agent concurrency, typically using the 95th percentile. No surprise bills from exceeding allocations, and short bursts don't inflate costs. ##### Integration with GitLab SCM Some organizations use GitLab for source code management while using Buildkite Pipelines for CI/CD. Buildkite Pipelines integrates with GitLab via webhooks, triggering pipelines from Git events. This way, you get GitLab's SCM features with Buildkite's superior CI/CD performance. ##### Migration path To start converting your existing GitLab pipelines to Buildkite Pipelines, use the following principles: 1. Audit current setup: document variables, tags, routing logic, and performance benchmarks. 1.
Convert pipeline structure from serial stages to parallel steps with explicit dependencies. 1. Map GitLab predefined variables to Buildkite Pipelines equivalents. 1. Replace automatic artifact passing with explicit upload/download commands. 1. Start with non-production pipelines and run both systems in parallel to validate results. Teams typically see faster execution through better parallelization, reduced infrastructure complexity, more predictable costs, and simplified agent management after migration. If you would like to receive assistance in migrating from GitLab to Buildkite Pipelines, please reach out to the Buildkite Support Team at [support@buildkite.com](mailto:support@buildkite.com). --- ### Frequently asked questions URL: https://buildkite.com/docs/pipelines/advantages/faq #### Frequently asked questions about Buildkite Pipelines Common questions about how Buildkite Pipelines works, how it compares to other CI/CD tools, and what types of workloads it supports. ##### Why is Buildkite Pipelines faster than other CI/CD tools? Speed comes from three factors: unlimited concurrency so builds never queue behind shared runners, [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) that can skip unnecessary work at runtime, and the ability to match compute to workload using agent [queues](/docs/agent/queues) and [tags](/docs/agent/cli/reference/start#setting-tags). Small per-build time savings compound across thousands of daily builds. Unlike platforms with shared runner pools, Buildkite agents are dedicated to your workloads and scale independently. ##### How does Buildkite Pipelines handle security and data privacy? Buildkite Pipelines uses a hybrid architecture: a managed control plane orchestrates builds, but execution happens on your own infrastructure. Source code, secrets, and build artifacts never transit through Buildkite's systems — the control plane only receives job status, logs, and timing metadata. 
Agents are [open source](https://github.com/buildkite/agent), poll for work over HTTPS (no inbound ports required), and support [pipeline signing](/docs/agent/self-hosted/security/signed-pipelines) so agents can cryptographically verify that steps haven't been tampered with. ##### What are dynamic pipelines in Buildkite? [Dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) generate and modify pipeline steps at runtime using any language, including the [Buildkite SDK](/docs/pipelines/configure/dynamic-pipelines/sdk) (Go, Python, TypeScript, Ruby, C#). Unlike static YAML workflows, dynamic pipelines can upload new steps mid-execution, skip work based on file changes, fan out test jobs after a build succeeds, and adjust the execution path based on earlier results. Because pipeline generation is code, you can test workflow logic with unit tests and code review — the same way you'd test any other software. ##### How does Buildkite Pipelines compare to GitHub Actions? GitHub Actions is convenient for small teams, but organizations at scale run into concurrency caps on shared runners, static workflow limitations that require third-party workarounds, and multi-tenant reliability issues. Buildkite Pipelines supports 100,000+ concurrent agents with no caps, provides dynamic pipelines that adapt at runtime, and keeps source code on your infrastructure. See the full [GitHub Actions comparison](/docs/pipelines/advantages/buildkite-vs-gha). ##### How does Buildkite Pipelines compare to Jenkins? Jenkins gives teams full infrastructure control, but requires managing controllers, plugins, and upgrades. Buildkite Pipelines provides a managed control plane that updates continuously — no Jenkins controller to patch and no plugin compatibility matrix to manage — while agents still run on your infrastructure. Teams get self-hosted control without the operational burden. See the full [Jenkins comparison](/docs/pipelines/advantages/buildkite-vs-jenkins). 
##### How does Buildkite Pipelines compare to GitLab CI/CD? GitLab CI/CD bundles CI into a broader DevSecOps platform, but its stage-based pipelines enforce serial execution order and runner setup can be complex. Buildkite Pipelines has no predefined stages, supports flexible job routing through [queues](/docs/agent/queues) and tags, and handles large monorepos efficiently through dynamic pipeline generation. See the full [GitLab comparison](/docs/pipelines/advantages/buildkite-vs-gitlab). ##### Can Buildkite Pipelines handle monorepos? Yes. Buildkite Pipelines handles [monorepos](/docs/pipelines/best-practices/working-with-monorepos) efficiently through dynamic pipeline generation that analyzes dependencies and selectively builds only what changed. Combined with [parallelization](/docs/pipelines/best-practices/parallel-builds) and agent [queues](/docs/agent/queues), teams can run large monorepo workflows — across hundreds of services or packages — without wasting compute on unchanged components. ##### Does Buildkite Pipelines support AI and ML workloads? Yes. Buildkite Pipelines is compute-agnostic and supports GPUs, TPUs, and custom hardware for AI/ML workloads. Agents can run on any infrastructure, so teams can provision specialized compute where their models need it. AI coding agents can connect directly to pipelines through the [Buildkite MCP server](/docs/apis/mcp-server), and the platform absorbs spikes in build volume from AI-generated code without hitting concurrency caps. --- ### Getting started URL: https://buildkite.com/docs/pipelines/getting-started #### Getting started with Pipelines 👋 Welcome to Buildkite Pipelines! You can use Pipelines to build your dream CI/CD workflows on a secure, scalable, and flexible platform. This getting started page is a tutorial that helps you understand the fundamentals of Buildkite Pipelines. 
This tutorial guides you through creating a pipeline to automate builds of an example project. You can use this example as a starting point for your own project, or, if you have some familiarity with Buildkite, work with your own project from the outset. ##### Before you start ##### Create a new pipeline A [_pipeline_](/docs/pipelines/glossary#pipeline) represents a CI/CD workflow in Buildkite Pipelines. You define each pipeline with a series of [_steps_](/docs/pipelines/glossary#step) to run. When you trigger a pipeline, you create a [_build_](/docs/pipelines/glossary#build), and steps are dispatched as [_jobs_](/docs/pipelines/glossary#job), which are run on [agents](/docs/pipelines/glossary#agent). Jobs are independent of each other and can run on different agents. If you signed up: - With GitHub, the **New Pipeline** page's **Git scope** is set to your GitHub account, and its most recently updated repository is automatically selected in the **Repository** field. **Note:** * If you're new to Buildkite Pipelines and want to learn more about creating some pipelines, select **Or try an example** to examine the list of existing example pipelines you can build. * If your GitHub account is new and contains no repositories, the **Starter pipeline** of the **Buildkite Examples** is automatically selected. - By email, the **New Pipeline** page presents the **Starter pipeline** of the **Buildkite Examples**. Ensure you familiarize yourself with the **New Pipeline** page's functionality in [Understanding the New Pipeline page](#create-a-new-pipeline-understanding-the-new-pipeline-page) before proceeding to build some [example pipelines](#create-a-new-pipeline-example-pipelines). ###### Understanding the New Pipeline page The **New Pipeline** page has the following fields: - **Git scope**: Allows you to select from the following list of options: * Your GitHub account or organization.
* A selection of **Buildkite Examples** to start with, which allows you to learn more about how Buildkite Pipelines builds projects for a variety of different use cases. * The **Use remote URL** options allow you to select a **GitLab**, **Bitbucket**, or **Any account**, for any other remotely accessible Git repository. The **Manage accounts** option further down this list also allows you to configure connections to these repository providers. See the [Source control](/docs/pipelines/source-control) section for more information. * The **Connect GitHub account** option allows you to do just that. This option is useful if you signed up by email and need to connect your GitHub account to the Buildkite platform; it generates the [same **Install Buildkite** step as part of the GitHub sign-up process](#before-you-start). - **Repository**: Select a Git repository available to your selected **Git scope**. Upon selecting a repository: * The **Checkout using** option appears, where you can select between **SSH** or **HTTPS**. * If you selected a repository that is not one of the **Buildkite Examples**, then the **Build Triggers** section may appear, which shows the actions that trigger a build of this pipeline. You can disable this triggering by clearing the **Trigger builds when** checkbox. - **Pipeline name**: Buildkite Pipelines automatically generates a name for your pipeline, which is based on your repository's name. However, you can change this default name using this field. - **Description** ( _optional_ ): Enter a description for your pipeline, which will appear under the pipeline name on the main **Pipelines** page. - **Default Branch**: The repository branch that your pipeline will build, unless instructed otherwise. Leave this unchanged for this tutorial. - **Teams**: The Buildkite teams that have permission to build your pipeline.
**Note:** If you just [signed up to Pipelines](#before-you-start), then this field won't be visible, as it's only shown once [teams](/docs/platform/team-management) have been configured in your Buildkite account/organization. If this field is shown, leave it unchanged for this tutorial. - **Cluster**: The Buildkite cluster whose configured agents will build your pipeline. Leave this unchanged for this tutorial. - **YAML Steps editor**: This field allows you to define steps within your main Buildkite pipeline. To make things easier though, you can start with an initial pipeline from the **Template** dropdown. Using this dropdown, you can select from the following options: * **Helper templates**: - **Hello world**: For a simple example of how to structure commands in Buildkite pipeline YAML syntax. - **Pipeline upload**: To upload a Buildkite pipeline stored in your repository. * **Example templates**: This section lists pipelines which are used to build example projects available from the **Repository** field, when the **Git scope** has been set to **Buildkite Examples**. > 📘 > If you're already familiar with creating Buildkite pipelines and have created one at `.buildkite/pipeline.yml` from the root of your selected **Repository**, then ensure the **Pipeline upload** option has been selected from the **Template** dropdown of the **YAML Steps editor**. This option generates a pipeline step within your main Buildkite pipeline, which uploads the rest of your pipeline (defined in the `.buildkite/pipeline.yml` file from your repository), and uses the steps in that file to build your project. Learn more about this in [Create your own pipeline](/docs/pipelines/create-your-own). > If you already have a Buildkite account/organization and user account, you can access the **New Pipeline** page by selecting **Pipelines** from the global navigation > **New pipeline**. 
###### Example pipelines Ensure you're already familiar with the **New Pipeline** page's functionality (described in [Understanding the New Pipeline page](#create-a-new-pipeline-understanding-the-new-pipeline-page)) before proceeding. 1. Ensure **Buildkite Examples** is selected in **Git scope** and select **Starter pipeline**. 1. In the **YAML Steps editor**, note the three steps that constitute this pipeline: `build`, `test`, and `deploy`, and the dependency order in which these steps' jobs will be run. **Note:** Without analyzing the pipeline syntax in too much detail, take note of the annotation-related command that's part of the `deploy` step. 1. Select **Create and run** to create your **Starter pipeline** and run its first build. 1. Once your build has completed, check its **Annotations** tab, which displays the content of the repository's `.buildkite/annotation.md` file. Once you've seen how Buildkite Pipelines builds a simple pipeline like **Starter pipeline**, try creating and building other pipelines from the **Buildkite Examples** provided that suit the technologies you've been working with. > 📘 > For each repository of the **Buildkite Examples** selected in the **Repository** field, the pipeline shown in the **YAML Steps editor** field is retrieved from that repository's `.buildkite/pipeline.yml` file. > Also be aware that a Buildkite pipeline commits nothing to your repository, unless you explicitly instruct your pipeline to do so. More Buildkite example repositories are available from the [Buildkite Resources Examples](https://buildkite.com/resources/examples/) page. ##### Next steps That's it! You've got yourself up and running with Buildkite Pipelines and have already created and built some new pipelines! As part of this sign-up process, Pipelines set you up with a few default configurations behind the scenes.
These include the following: - A _Buildkite cluster_: Buildkite Pipelines requires that all of its pipelines are managed through a [Buildkite cluster](/docs/pipelines/glossary#cluster), which is a security feature that's used to organize queues. When a new Buildkite account/organization is created, a single cluster is created, called **Default cluster**. Learn more about Buildkite clusters from the [Clusters overview](/docs/pipelines/security/clusters). - A _queue_: When the **Default cluster** is created, a default [queue](/docs/pipelines/glossary#queue), simply called **queue**, is also created. When creating a personal Buildkite account, this queue is a _Buildkite hosted queue_, which runs _Buildkite hosted agents_. Learn more about queues from [Queues overview](/docs/agent/queues) and about Buildkite hosted agents from their [overview](/docs/agent/buildkite-hosted) page. While creating a new personal Buildkite account automatically sets you up to run Buildkite hosted agents, Buildkite also supports self-hosted agents, which you can manage in your own infrastructure. Learn more about the differences between these agent architectures in [Buildkite Pipelines architecture](/docs/pipelines/architecture). Once you're familiar with building some Buildkite examples, next try [creating your own pipeline](/docs/pipelines/create-your-own). --- ### Create your own pipeline URL: https://buildkite.com/docs/pipelines/create-your-own #### Create your own pipeline So you've created pipelines based on pre-filled examples and are ready to make your own? This is the tutorial for you. You'll continue playing with Buildkite by writing a pipeline definition for your own code. While the specifics may vary based on your code and goal, this tutorial provides a general flow you can adapt to your needs. ##### Before you start This tutorial assumes you've created a starter pipeline, completed the [Getting started](/docs/pipelines/getting-started) guide, or both.
You'll also need the following: - The code you plan to create a pipeline for. This could be an example you put together to test different functionality or your real repository. - A task you want to perform with the code. For example, run some tests or a script. ##### Define the steps Next, define the steps you want in your pipeline. These steps could be anything from building and testing your code, to deploying it. Buildkite recommends you start simple and iterate to add complexity, running the pipeline to verify it works as you go. To define the steps: 1. Decide the goal of the pipeline. 1. Look for an [example pipeline](https://buildkite.com/resources/examples/) closest to that goal or a [pipeline template](https://buildkite.com/pipelines/templates) relevant to your technology stack and use case. (You can copy parts of the pipeline definition as a starting point.) **Note:** If you have a pipeline or workflow defined in another CI/CD platform, such as GitHub Actions, Jenkins, CircleCI, or Bitbucket Pipelines, you can use the [Pipeline converter](/docs/pipelines/converter) to help you convert your pipeline or workflow syntax into Buildkite pipeline syntax. 1. In the root of your repository, create a file named `pipeline.yml` in a `.buildkite` directory. 1. In `pipeline.yml`, define your pipeline steps. Here's an example: ```yaml steps: - label: "\:hammer\: Build" command: "scripts/build.sh" key: build - label: "\:test_tube\: Test" command: "scripts/test.sh" key: test depends_on: build - label: "\:rocket\: Deploy" command: "scripts/deploy.sh" key: deploy depends_on: test ``` Follow [Defining steps](/docs/pipelines/configure/defining-steps) and surrounding documentation to learn how to customize the pipeline definition to meet your needs. 1. Commit and push this file to your repository. ##### Create a pipeline You'll create a new pipeline that uploads the pipeline definition from your repository. To create a new pipeline: 1. 
Select **Pipelines** to navigate to the [Buildkite dashboard](https://buildkite.com/). 1. Select **New pipeline**. **Note:** On this page, you can connect any remotely accessible Git repository through one of the **Git scope** > **Use remote URL** options (for example, from a Bitbucket, GitLab, or GitHub account, or, if you'd already [signed up with GitHub](/docs/pipelines/getting-started#before-you-start), a different GitHub account). After connecting your account, you can select its repositories from the **Repository** dropdown during pipeline creation. 1. If you connected your account (in the **Git scope** field), select the appropriate **Repository** from the list of existing ones in your account. 1. Enter your pipeline's details in the respective **Pipeline name** and **Description** fields. You can always change these details later from your pipeline's settings. 1. In the **YAML Steps editor** field, ensure there's a step to upload the definition from your repository, which you can generate automatically using the **Pipeline upload** option from the **Template** dropdown: ```yaml steps: - label: "\:pipeline\:" command: buildkite-agent pipeline upload ``` 1. Select **Create pipeline**. 1. On the next page showing your pipeline name, select **New Build**. In the resulting dialog, create a build using the pre-filled details. 1. In the **Message** field, enter a short description for the build. For example, **My first build**. 1. Select **Create Build**. The build's page then opens, and the build begins running. Run the pipeline whenever you make changes you want to verify. If you want to add more functionality, go back to editing your steps and repeat. If you've configured webhooks, your pipeline will trigger when you push updates to the repository. Otherwise, select **New Build** in the Buildkite dashboard to trigger the pipeline.
If you have trouble getting your pipeline to work, don't hesitate to reach out to support at support@buildkite.com for help. > 📘 Pipeline slugs and names > A pipeline's _slug_, which forms part of the pipeline's URL, is [derived from the pipeline's **Name**](#create-a-pipeline-deriving-a-pipeline-slug-from-the-pipelines-name). If a pipeline's **Name** is changed, this action also changes the pipeline's slug accordingly. Be aware, however, that any previous pipeline slug that a pipeline had (prior to its name being changed) will automatically redirect to the pipeline's current slug. ###### Using private repositories When you create a new pipeline with a private repository URL, you'll see instructions for configuring your source control's webhooks. Once you've followed those instructions, ensure your agent's SSH keys are configured for code access (see relevant instructions for [self-hosted](/docs/agent/self-hosted/code-access) or [Buildkite hosted](/docs/agent/buildkite-hosted/code-access) agents) so your agent can check out the repository. For more advanced pipelines, using your development machine as the agent for your first few builds can be a good idea. That way, all the dependencies are ready, and you'll soon be able to share a link to a green build with the rest of your team. ###### Deriving a pipeline slug from the pipeline's name Pipeline slugs are derived from the pipeline name you provide when the pipeline is created (unless you use the optional `slug` parameter to specify a custom slug). This derivation process converts each run of space characters (one or more consecutive spaces) in the pipeline's name to a single hyphen (`-`), and all uppercase characters to their lowercase counterparts. Therefore, pipeline names of either `Hello there friend` or `Hello    There Friend` are converted to the slug `hello-there-friend`. The maximum permitted length for a pipeline slug is 100 characters.
> 📘 > The following regular expression defines the valid format for a pipeline's slug: > `/\A[a-zA-Z0-9]+[a-zA-Z0-9\-]*\z/` Any attempt to create a new pipeline with a name that matches an existing pipeline's name results in an error. ##### Next steps That's it! You've successfully created your own pipeline! 🎉 We recommend you continue by: - Inviting your team to see your build and try Buildkite themselves. Invite users from your [organization's user settings](https://buildkite.com/organizations/-/users/new) by pasting their email addresses into the form. Each invited user receives an email invitation that expires after 7 days. Users who haven't accepted an invitation within this period will need to be sent a new one, which in turn needs to be accepted within 7 days. **Note:** To start inviting other users to your Buildkite organization, your email address first needs to be verified. To verify your email address, go to your [personal email settings](https://buildkite.com/user/emails) and select **Resend Verification**. - Learning to [create more complex pipelines](/docs/pipelines/configure/defining-steps) with dynamic definitions, conditionals, and concurrency. - Browsing the [pipeline templates](https://buildkite.com/pipelines/templates) to see how Buildkite is used across different technology stacks and use cases. - If you have configured self-hosted queues with agents, customizing your [agent configuration](/docs/agent/self-hosted/configure). - Learning to use [lifecycle hooks](/docs/agent/hooks). - Understanding how to tailor Buildkite to fit your bespoke workflows with [plugins](/docs/pipelines/integrations/plugins) and the [API](/docs/apis). Remember, this is just the start of your journey with Buildkite. Take time to explore, learn, and experiment to make the most out of your pipelines. Happy building!
--- ### Buildkite Pipelines architecture URL: https://buildkite.com/docs/pipelines/architecture #### Buildkite Pipelines architecture Buildkite Pipelines provides both a [_self-hosted_](#self-hosted-hybrid-architecture) and [_hosted_](#buildkite-hosted-architecture) architecture for its build environments. ##### Self-hosted (hybrid) architecture A self-hosted architecture (also known as a _hybrid_ architecture) separates the following aspects of Buildkite Pipelines' core functionality: - **Buildkite Pipelines:** A software-as-a-service (SaaS) _control plane_, consisting of the [Buildkite Platform](/docs/platform), as well as its Pipelines product component and interface for visualizing and managing CI/CD pipelines. Buildkite Pipelines coordinates work and displays results. - **Agents:** Small, reliable, and cross-platform build runners that constitute the _build environment_. In a self-hosted architecture, agents are hosted by you, either on-premises or in the cloud. Agents execute the work they receive from Pipelines. In this type of hybrid architecture, Buildkite Pipelines runs the control plane (accessible through the main product interface) as a SaaS product, and you run the build environment on your own infrastructure. In other words, Pipelines handles the _orchestration_, and you bring the _compute_. That means you can fine-tune and secure the build environment to suit your particular use case and workflow. The following diagram shows the split in Pipelines between its SaaS platform and the agents running on your infrastructure. The diagram shows that Buildkite Pipelines provides a web interface, handles integrations with third-party tools, and offers APIs and webhooks. By design, sensitive data, such as source code and secrets, remain within your environment and are not seen by the Buildkite Platform. 
This decoupling provides flexibility and security as you maintain control over the build environment and agent scaling while Buildkite manages the coordination, scheduling, and web interface. Compared to _fully self-hosted_ solutions, where you run both the control plane and build environment on your own infrastructure, a hybrid architecture reduces the maintenance burden on your team. Unlike managed solutions, a hybrid architecture gives you full control over security within your build environment. Learn more about how to set up this architecture in the [Custom install](/docs/agent/self-hosted/install) section of the Self-hosted agent documentation. ##### Buildkite hosted architecture Buildkite also provides a _managed_ solution, offered through its _Buildkite hosted agents_ feature, where both the control plane of Buildkite Pipelines and its build environment are provided and handled by Buildkite. This solution is useful when you need to get a build environment up and running quickly or you have limited resources to implement a hybrid architecture, or both. Learn more about this feature in [Buildkite hosted agents](/docs/agent/buildkite-hosted), and how to set up this architecture in [Create a Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue). --- ### Dashboard walkthrough URL: https://buildkite.com/docs/pipelines/dashboard-walkthrough #### Dashboard walkthrough Once you've set up a few pipelines and have run some builds, you can see an overview of them on the dashboard. Each pipeline has a set of metrics to give you an overview of its health and performance. ##### Pipeline status A visual indication of your pipeline's current status. This icon is based on the latest build on your default branch. ##### Build history The build history visualizes the last 30 builds that have been run on your default branch. 
The height of each bar reflects the build's running time, and its status is represented by its color and in the tooltip on hover. ##### Speed The speed of your pipeline is calculated from the average running time of your 30 most recent builds. This helps you keep an eye on your pipeline's speed, and compare performance between pipelines. ##### Reliability The reliability of your pipeline is a calculation based on passing versus failing builds over the last 30 days. This metric helps you to understand the overall stability of your pipelines. ##### Builds per week The builds per week measurement is calculated based on the average number of builds created over the past 4 weeks on the pipeline's default branch. This metric helps you to understand how frequently a pipeline is run. Note that if the pipeline's default branch setting is left blank (that is, `None` for no default branch), then this metric is calculated on all branches of this repository. ##### Bookmarking pipelines You can keep your most used pipelines at the top of the page by hovering over a pipeline, and selecting the bookmark icon on the right. ##### Filtering pipelines You can filter pipelines using the search bar at the top of the page. This will search the titles of pipelines, and return all those matching your search terms. You can add tags to your pipelines and use them to quickly filter pipelines using the search bar. You can manage a pipeline's tags in the pipeline's **Settings** section. If your organization has Teams enabled, you can also filter this page by the teams that you're in. When you have more than one team attached to your Buildkite account, you'll see a dropdown list of teams at the top of the dashboard. This defaults to **All Teams**. Selecting a specific team will filter the list of pipelines to display only those accessible by the selected team.
##### Customizing the page You're able to edit a pipeline's: - name - description - emoji - color - repository - default branch After you've selected a pipeline, the settings button is in the top right corner. The display settings can be found in the pipeline's **Settings** section. Adding a description, emoji, and color for your pipeline is optional, but name, repository, and default branch are all required. The emoji and color will replace the icon on the dashboard. Descriptions also have full emoji support. 🙌 ##### Pipeline page Select a pipeline to view its page, which shows the [build history](#build-history) for that pipeline, your starred branches, and the ten most recently built branches for that pipeline. You can filter a pipeline's builds by branch, build state, or your own builds using the **Filter** menu. To see the steps for a build, select the **Show steps** button on the right of any build. ##### Build page Select a build to view its page, which shows the full list of jobs and other steps in that build, the information about who triggered the build, and the controls for rebuilding or canceling the build while it's in progress. To retry all failed jobs for a build, select the dropdown menu next to the **Rebuild** button, and then select **Retry failed jobs**. This option will only appear in the dropdown menu when the build is finished, and there are eligible jobs to retry. Eligible jobs include command jobs in the failures tab, with the exception of those already waiting for automatic retries. If a pipeline build contains trigger steps, failed jobs in any of its triggered pipelines' builds are also included in the retry. Note that this does not apply to builds triggered by steps where the `async` attribute has been set to `true`. Each job in a build has a footer that displays the job exit status, which provides more visibility into the outcome of each job. It helps you to diagnose failed builds by finding issues with agents and pipelines.
Job exit status may include the exit signal reason, which indicates whether the Buildkite agent stopped or the job was canceled. If you want to access the exit status through an API, it's only available in the [GraphQL API](/docs/apis/graphql-api). ##### Supported browsers Buildkite Pipelines is designed with the latest web browsers in mind. For the sake of security and providing the best experience to most customers, we do not support browsers that are no longer receiving security updates and represent a small minority of traffic. We support the latest two stable versions of the following desktop browsers: - [Google Chrome](https://www.google.com/chrome/) - [Mozilla Firefox](https://mozilla.org/firefox) - [Apple Safari](https://www.apple.com/safari/) - [Microsoft Edge](https://www.microsoft.com/en-us/edge) Browsers that aren't listed as supported, as well as beta or developer builds, may not work as you expect, or at all. For the best experience, we recommend using the latest version of a supported browser. No versions of Internet Explorer are supported, and we recommend you migrate to a modern browser. If you encounter any issues with Buildkite Pipelines on a supported browser, please [contact us](https://buildkite.com/about/contact/) so we can improve its support. --- ### Build page URL: https://buildkite.com/docs/pipelines/build-page #### Build page Buildkite's new build page has been completely reimagined to support modern software delivery at any scale. The redesigned interface brings powerful navigation through a new sidebar and a detailed table view, making it easier than ever to understand and navigate to any specific aspect of a large build. ##### Overview of the new build page with sidebar The new build page consists of three main components: - A collapsible _sidebar_ to allow for quick navigation between steps in your build. - The main _content area_ showing your selected view (**Summary**, **Steps**, or **Annotations**).
- A configurable _step panel_ for viewing logs and step information. ##### Core actions ###### Navigating your build The _sidebar_ provides a hierarchical view of all steps in your build. Here's how to use it: - Expand/collapse groups by selecting their arrow icons. - Group steps by state to see important steps (such as blocked or failed) at the top. - Select any step to view its details. - Use the action button (with the curved arrow) or press the `f` key to cycle through failures. - Use keyword search to quickly open or focus a step. - Optionally collapse the sidebar to make more room for the content area. ###### Searching for steps Use the search input to find specific steps in your build. Type the name of the step or any relevant keywords, and the sidebar will filter the list to show only steps that match what you've typed. ###### Viewing step details When you select a step, its details appear in a resizable step panel. You can: - Open the step panel on any tab of the build page. - View **Logs**, **Artifacts**, and **Environment** variables in their respective tabs. - Drag the panel edge to resize. - Dock the panel on the right, bottom, or center using the layout toggle. ###### Managing retries The sidebar now shows an indicator for steps with retries. 1. Look for the retry indicator in the sidebar. 1. Select the step to view the latest attempt. 1. Use the retry selector to switch between attempts. You can also access the retried jobs when you open the step details. ###### Using the table view The **Table** view provides a detailed list of all jobs in your build. Unlike the sidebar, which collapses parallel jobs into single steps, the table view displays every individual job in your build, making it ideal for viewing detailed job information. Here's how to use it: - Sort steps by selecting the column header (select three times to remove sorting).
- Filter jobs using the state filter. ###### Browsing your build on mobile The new build page works on all devices. You can use the sidebar to navigate to any step and view its details. On mobile devices, only the **Canvas**, **Table**, and **Waterfall** views are hidden. ###### Viewing builds in real time The build page updates in real time when you follow a build. When you follow a build, the page focuses on active steps as they complete. Turn on follow mode by pressing `j` when the build is in progress in the **Canvas** view. > 📘 > Turn on the elevator music for some calming build vibes. Hear your build finish as the music stops. ##### Keyboard shortcuts For a broader overview of keyboard navigation and other accessibility features across Buildkite, see [accessibility](/docs/platform/accessibility). The following keyboard shortcuts are currently available: - `f`: Go to the next failure. - `j`: Follow the build (for in-progress builds, and only available in the **Canvas** view). - `esc`: Clear the active step selection. - `g`: Toggle between collapsing and expanding groups (experimental). - `s`: Access step search. ##### Tips for large builds For builds with many steps: - Use status filtering to focus on specific states. - Avoid the **Canvas** view on large builds unless you're debugging dependencies between steps. - Collapse passed and waiting groups to reduce clutter. - Use step search (`s`) to quickly find specific steps. - Group by state to organize large numbers of steps. ##### Best practices - Keep the sidebar grouped by states and collapse lower priority states such as **Waiting** and **Passed**. - If the build is in progress, use the `j` key to follow the build. Follow mode will automatically focus you on active steps. You can also enable the music mode. - Use appropriate views for different tasks: * **Canvas**: Understanding build structure and dependencies of specific steps.
Be aware that this view is not as useful when zoomed out on a large number of steps. * **Table**: Detailed step information when you need to sort steps by duration or alphabetically. * **Waterfall**: Timing and performance analysis. --- ### Overview URL: https://buildkite.com/docs/pipelines/converter #### Buildkite pipeline converter overview The Buildkite pipeline converter serves as a compatibility layer, letting you try converting your existing CI configurations into a format compatible with Buildkite's pipeline definitions. Rather than serving as a complete automated migration solution, the Buildkite pipeline converter demonstrates how configurations from other CI/CD platforms could be structured in a Buildkite pipeline configuration format. An AI Large Language Model (LLM) is used to achieve the best results in the translation process. The AI model _does not_ use any submitted data for its own training. ##### CLI Buildkite pipeline converter The [Buildkite CLI](/docs/platform/cli) provides the `bk pipeline convert` command, which lets you convert CI configurations from supported providers directly from your terminal. This is the recommended way to use the pipeline converter as part of a migration workflow.
###### Compatibility The Buildkite pipeline converter supports the following CI providers: - [GitHub Actions](/docs/pipelines/migration/tool/github-actions) - [CircleCI](/docs/pipelines/migration/tool/circleci) - [Bitbucket Pipelines](/docs/pipelines/migration/tool/bitbucket-pipelines) - [Jenkins](/docs/pipelines/migration/tool/jenkins) - Bitrise (beta) - GitLab CI (beta) - Harness (beta) ###### Example conversion The following GitHub Actions workflow: ```yaml name: Node.js CI on: push: branches: [ main, develop ] pull_request: branches: [ main ] jobs: build: runs-on: ubuntu-latest strategy: matrix: node-version: [18.x, 20.x] steps: - uses: actions/checkout@v4 - name: Use Node.js ${{ matrix.node-version }} uses: actions/setup-node@v4 with: node-version: ${{ matrix.node-version }} cache: 'npm' - name: Install dependencies run: npm ci - name: Run linter run: npm run lint - name: Run tests run: npm test - name: Build application run: npm run build ``` will be converted by the Buildkite pipeline converter into the following Buildkite pipeline: ```yaml #### ============================================================================ #### Translated from: Node.js CI #### ============================================================================ #### #### TRIGGERS: Configure in Buildkite UI → Pipeline Settings → GitHub #### - Push to branches: main, develop #### - Pull requests to: main #### #### AGENT CONFIGURATION REQUIRED #### ============================================================================ #### The original workflow used the following GitHub Actions runners: #### #### Job | runs-on #### ---------------------|---------------- #### build | ubuntu-latest #### #### You must configure Buildkite agents to handle these workloads. Add an #### `agents` block to each step once your queues are set up. 
Example: #### #### agents: #### queue: "linux" #### #### Required tools on agents: Node.js 18.x, 20.x, npm #### Alternatively, use the Docker plugin with appropriate images. #### ============================================================================ steps: - label: "\:nodejs\: Build & Test node-{{matrix.node}}" key: "build-node-{{matrix.node}}" # Assumes Node.js is installed on the agent. # If not available, use the Docker plugin: # plugins: # - docker#5.13.0: # image: "node:{{matrix.node}}" # propagate-environment: true plugins: - cache#1.8.1: manifest: package-lock.json path: node_modules restore: pipeline save: pipeline command: | npm ci npm run lint npm test npm run build matrix: setup: node: - "18.x" - "20.x" ``` ###### How to use the CLI Buildkite pipeline converter To convert an existing CI configuration, use the [`bk pipeline convert` command](/docs/platform/cli/reference/pipeline#convert-pipeline) from the [Buildkite CLI](/docs/platform/cli). 1. [Install the Buildkite CLI](/docs/platform/cli/installation) if you haven't already: ```bash brew install buildkite/buildkite/bk ``` 1. Run the `bk pipeline convert` command, specifying the path to your CI configuration file with `--file` and the originating CI provider with `--vendor`: ```bash # For a GitHub Actions workflow bk pipeline convert -F .github/workflows/ci.yml # Alternatively bk pipeline convert --file .github/workflows/ci.yml --vendor github # If you want to specify a custom output path and filename bk pipeline convert --file .github/workflows/ci.yml --vendor github -o .buildkite/custom-converted-pipeline-name.yml ``` You can also pipe a configuration file from stdin instead of using the `--file` flag. 
When piping from stdin, you must specify the `--vendor` flag and the converted output is printed to stdout by default: ```bash cat .github/workflows/ci.yml | bk pipeline convert --vendor github ``` To save piped output to a file, use the `--output` (`-o`) flag: ```bash cat .github/workflows/ci.yml | bk pipeline convert --vendor github -o .buildkite/pipeline.yml ``` Supported vendors: `github`, `bitbucket`, `circleci`, `jenkins`, `gitlab`, `harness`, `bitrise`. If the converter can detect the vendor from the file path or name, you can omit the `--vendor` flag. If you see the following error: `Error: could not detect vendor from file path. Please specify vendor explicitly with --vendor`, you need to specify the vendor. 1. On a successful conversion, by default, the output is saved to `.buildkite/`. When reading from stdin, the output is printed to stdout instead: ```bash Submitting conversion job... Job submitted. Processing conversion... ✅ conversion completed successfully! Output saved to: .buildkite/pipeline.github.yml ``` In addition to `--vendor` and `--output` (`-o`), the Buildkite pipeline converter also supports the `--timeout` and `--debug` flags. For more information and flag usage examples, see the CLI reference for the [`bk pipeline convert` command](/docs/platform/cli/reference/pipeline#convert-pipeline). ##### Interactive web version To quickly try the Buildkite pipeline converter, you can also use the [interactive web version](https://buildkite.com/resources/convert/). ###### How to use the web Buildkite pipeline converter To start translating your existing pipeline or workflow configuration into a Buildkite pipeline using the web version: 1. Open the [Buildkite pipeline converter](https://buildkite.com/resources/convert/) in a new browser tab. 1. Select your CI/CD platform from the dropdown list. 1. In the left panel, enter the pipeline definition to translate into a Buildkite pipeline definition. 1. 
Select the **Convert** button to reveal the translated pipeline definition in the right panel. 1. Copy the resulting Buildkite pipeline YAML configuration on the right and [create](/docs/pipelines/configure) a [new Buildkite pipeline](https://www.buildkite.com/new) with it. > 🚧 Conversion errors > If the pipeline configuration you are trying to convert to a Buildkite pipeline contains syntax or other errors, or is not a valid pipeline configuration, you will see the error message _"This doesn't look like valid YAML. Please paste your pipeline configuration."_ In this case, ensure that the original pipeline configuration you are translating to a Buildkite pipeline is a valid pipeline definition for the CI/CD platform you are migrating from. ##### Next steps The Buildkite pipeline converter can be used as a standalone tool or potentially integrated into your [Buildkite Migration Services](https://buildkite.com/resources/migrations/) process, offering a way to leverage existing CI configurations within the Buildkite ecosystem. For more tools and recommendations regarding migrating from your existing CI/CD platform to Buildkite, see: - [Migrate to Buildkite Pipelines](/docs/pipelines/migration) - [Migration from GitHub Actions - a step-by-step guide](/docs/pipelines/migration/from-githubactions) - [Migration from Jenkins - a step-by-step guide](/docs/pipelines/migration/from-jenkins) - [Migration from Bamboo - a step-by-step guide](/docs/pipelines/migration/from-bamboo) --- ### GitHub Actions URL: https://buildkite.com/docs/pipelines/converter/github-actions #### GitHub Actions The [Buildkite pipeline converter](/docs/pipelines/converter) helps you convert your GitHub Actions workflows into Buildkite pipelines. The Buildkite pipeline converter analyzes the GitHub Actions workflow to understand its structure and intent, and then generates a functionally equivalent Buildkite pipeline.
Because GitHub Actions workflows can include complex combinations of jobs, steps, matrix strategies, and reusable actions, an AI Large Language Model (LLM) is used to get the best results in the translation process. The AI model _does not_ use any submitted data for its own training. The goal of the Buildkite pipeline converter is to give you a starting point, so you can see how patterns you're used to in GitHub Actions would function in Buildkite Pipelines. In cases where GitHub Actions features don't have a direct Buildkite Pipelines equivalent, the pipeline converter includes comments with suggestions about possible solutions. ##### Using the Buildkite pipeline converter with GitHub Actions You can immediately start experimenting with the Buildkite pipeline converter through the [CLI version](/docs/pipelines/converter#cli-buildkite-pipeline-converter-how-to-use-the-cli-buildkite-pipeline-converter) or via an [interactive web-based interface](/docs/pipelines/converter#interactive-web-version-how-to-use-the-web-buildkite-pipeline-converter). ##### How the translation works Here are some examples of translations that the pipeline converter will perform: - **Jobs** become Buildkite Pipelines [command steps](/docs/pipelines/configure/step-types/command-step) with `key` attributes. Multiple `run` steps within a job are combined into a single `command` array. Job dependencies (`needs`) become `depends_on` attributes. - **Checkout** steps (`actions/checkout`) are removed since Buildkite agents automatically check out the repository. Non-default checkout options are translated to equivalent Git commands. - **Triggers** (`on:` block) are removed and documented in a header comment, since Buildkite Pipelines configures triggers through the web interface rather than YAML. - **Runners** (`runs-on` values) are listed in a header comment with guidance on configuring your `agents` blocks to target your Buildkite agent [queues](/docs/pipelines/clusters/manage-queues). 
- **Matrix strategies** are translated to the native [build matrix](/docs/pipelines/configure/workflows/build-matrix) feature of Buildkite Pipelines, including `include`/`exclude` configurations and per-combination `soft_fail` settings. - **Environment variables** at the workflow level become a top-level `env` block. GitHub context variables (such as `${{ github.sha }}`) are translated to Buildkite Pipelines equivalents (such as `${BUILDKITE_COMMIT}`). - **Secrets** (such as `${{ secrets.API_KEY }}`) become environment variable references (such as `${API_KEY}`) with comments indicating they must be configured on your agents. See [managing secrets](/docs/pipelines/security/secrets/managing) for configuration options. - **Actions** require case-by-case handling. Setup actions assume tools are pre-installed on agents. Cache and artifact actions are translated to Buildkite Pipelines [plugins](/docs/pipelines/integrations/plugins) and commands. GitHub-specific actions (such as `github-script` or `codeql`) may require custom solutions in Buildkite Pipelines - [contact](mailto:support@buildkite.com) the Buildkite Support team for assistance. - **Path filtering** (`paths`, `paths-ignore`, or `dorny/paths-filter`) is translated to the `if_changed` attribute in Buildkite Pipelines. - **Job outputs** (`$GITHUB_OUTPUT`, `jobs.<job_id>.outputs`) are translated to `buildkite-agent meta-data set/get` commands. Step summaries (`$GITHUB_STEP_SUMMARY`) become `buildkite-agent annotate` commands. --- ### Jenkins URL: https://buildkite.com/docs/pipelines/converter/jenkins #### Jenkins The [Buildkite pipeline converter](/docs/pipelines/converter) helps you convert your Jenkins pipeline jobs into Buildkite pipelines. Both the Scripted and Declarative forms of Jenkins pipelines are supported. The converter first analyzes the Jenkins pipeline to understand its structure and intent, and then generates a functionally equivalent Buildkite pipeline.
Since Jenkins pipelines can be written using the Groovy scripting language, their potential for complexity is much greater than that of other YAML-based CI configuration formats. Therefore, an AI Large Language Model (LLM) is used to get the best results in the translation process. The AI model _does not_ use any submitted data for its own training. The goal of the Buildkite pipeline converter is to give you a starting point, so you can see how patterns you're used to in Jenkins would function in Buildkite Pipelines. In cases where Jenkins' features don't have a direct Buildkite Pipelines equivalent, the pipeline converter includes comments with suggestions about possible solutions. ##### Using the Buildkite pipeline converter with Jenkins pipelines You can immediately start experimenting with the Buildkite pipeline converter through the [CLI version](/docs/pipelines/converter#cli-buildkite-pipeline-converter-how-to-use-the-cli-buildkite-pipeline-converter) or via an [interactive web-based interface](/docs/pipelines/converter#interactive-web-version-how-to-use-the-web-buildkite-pipeline-converter). > 📘 > Remember that not all the features of Jenkins pipelines can be fully converted to the Buildkite Pipelines format. See the following sections to learn more about the compatibility, workarounds, and limitations of converting Jenkins pipelines to Buildkite pipelines. ##### Stages The pipeline converter will start by examining the `stage {}` blocks in your Jenkins pipeline. If the stage only contains one step, that step will be translated on its own. If the stage includes multiple steps, those will be captured in a `group` block (see [Group step](/docs/pipelines/configure/step-types/group-step) for more details) in the Buildkite pipeline. ##### Step concurrency By default, Jenkins pipeline steps run serially, whereas Buildkite pipeline steps are executed in [parallel](/docs/pipelines/tutorials/parallel-builds).
For consistency, your translated Jenkins pipeline will have `wait` steps (see [Wait step](/docs/pipelines/configure/step-types/wait-step) for more details) added, to maintain the existing serial execution. You can then remove any generated wait steps that are unnecessary, for example, if you have several different test suites that can safely run in parallel. ##### Build parameters Jenkins supports a variety of different build parameter types natively (`string`, `text`, `boolean`, `choice`, and `password`), with additional types possible through the use of plugins. Buildkite only supports `string` and `select` (see [Input step](/docs/pipelines/configure/step-types/input-step) for more details), so Jenkins parameters will be translated as follows: | Parameter type | Conversion | | --- | ---------- | | String, text | String | | Choice | Select | | Boolean | Select with true and false options | | Password | Not supported; using the [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) store is recommended instead | | Others (plugins) | Not supported | Also, note that a Buildkite pipeline's input parameter values are stored as [build meta-data](/docs/pipelines/configure/build-meta-data), not as variables that can be used in the pipeline definition itself. The pipeline converter will provide guidance about best practices for using input values in your pipeline. --- ### CircleCI URL: https://buildkite.com/docs/pipelines/converter/circleci #### CircleCI The [Buildkite pipeline converter](/docs/pipelines/converter) helps you convert your CircleCI pipelines into Buildkite pipelines. The converter analyzes the CircleCI configuration to understand its structure and intent, and then generates a functionally equivalent Buildkite pipeline.
Because CircleCI configurations can include complex combinations of jobs, workflows, executors, orbs, and reusable commands, an AI Large Language Model (LLM) is used to achieve the best results in the translation process. The AI model _does not_ use any submitted data for its own training. The goal of the Buildkite pipeline converter is to give you a starting point, so you can see how patterns you're used to in CircleCI would function in Buildkite Pipelines. In cases where CircleCI features don't have a direct Buildkite Pipelines equivalent, the pipeline converter includes comments with suggestions about possible solutions and alternatives. ##### Using the Buildkite pipeline converter with CircleCI You can immediately start experimenting with the Buildkite pipeline converter through the [CLI version](/docs/pipelines/converter#cli-buildkite-pipeline-converter-how-to-use-the-cli-buildkite-pipeline-converter) or via an [interactive web-based interface](/docs/pipelines/converter#interactive-web-version-how-to-use-the-web-buildkite-pipeline-converter). ##### How the translation works Here are some examples of translations that the Buildkite pipeline converter will perform: - **Jobs** become Buildkite Pipelines [command steps](/docs/pipelines/configure/step-types/command-step) with `key` attributes. The `key` enables dependency references between steps. Multiple `run` steps within a job are combined into a single `command` array. - **Workflows** are flattened into Buildkite Pipelines [step dependencies](/docs/pipelines/configure/depends-on). Job dependencies specified with `requires` become `depends_on` attributes. When multiple workflows exist, they may be organized using [group steps](/docs/pipelines/configure/step-types/group-step). - **Checkout** steps are removed since Buildkite agents automatically check out the repository. 
- **Executors** are translated to the [Docker plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-buildkite-plugin/) configuration. The `docker[].image` becomes the plugin's `image` parameter, `resource_class` is documented for agent queue configuration, and `working_directory` becomes the plugin's `workdir` parameter.
- **Orbs** require case-by-case handling. Common orb commands are translated to equivalent Buildkite Pipelines [plugins](/docs/pipelines/integrations/plugins) or native commands. For example, AWS and GCP orb commands may translate to their respective Buildkite plugins. Orbs without direct equivalents include comments indicating manual configuration is required.
- **Matrix strategies** are translated to the native [build matrix](/docs/pipelines/configure/workflows/build-matrix) feature of Buildkite Pipelines. CircleCI's `matrix.parameters` becomes `matrix.setup`, and `matrix.exclude` becomes `matrix.adjustments` with `skip: true`.
- **Environment variables** at the job level become step-level `env` blocks. CircleCI pipeline values (such as `<< pipeline.git.revision >>`) are translated to Buildkite Pipelines equivalents (such as `${BUILDKITE_COMMIT}`).
- **Contexts** (CircleCI's secrets management mechanism) become environment variable references with comments indicating they must be configured on your agents or through a secrets manager. See [managing secrets](/docs/pipelines/security/secrets/managing) for configuration options.
- **Workspace persistence** (`persist_to_workspace` and `attach_workspace`) is translated to `buildkite-agent artifact upload` and `buildkite-agent artifact download` commands.
- **Caching** (`save_cache` and `restore_cache`) is translated to the [cache plugin](https://buildkite.com/resources/plugins/buildkite-plugins/cache-buildkite-plugin/).
- **Artifacts** (`store_artifacts`) are translated to `artifact_paths` on the step.
- **Test results** (`store_test_results`) are documented with guidance on configuring [Buildkite Test Engine](/docs/test-engine) for test analytics and insights.
- **Branch and tag filters** (`filters.branches` and `filters.tags`) are translated to step [conditionals](/docs/pipelines/configure/conditionals) using `if:` expressions.
- **Approval jobs** (jobs with `type: approval`) are translated to [block steps](/docs/pipelines/configure/step-types/block-step).
- **Parallelism** is translated using Buildkite Pipelines' native `parallelism` attribute. Test splitting with `circleci tests split` requires [Buildkite Test Engine](/docs/test-engine) for equivalent functionality.
- **Scheduled workflows** are documented with guidance on configuring [scheduled builds](/docs/pipelines/configure/workflows/scheduled-builds) through the Buildkite Pipelines web interface.
- **Reusable commands** are translated to inline scripts or YAML anchors for simple cases. Complex parameterized commands may require [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) for full flexibility.
- **Dynamic configuration** (`setup: true`) patterns are translated using [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) with `buildkite-agent pipeline upload`.

---

### Bitbucket Pipelines

URL: https://buildkite.com/docs/pipelines/converter/bitbucket-pipelines

#### Bitbucket Pipelines

The [Buildkite pipeline converter](/docs/pipelines/converter) helps you convert your Bitbucket pipelines into Buildkite pipelines. The converter analyzes the Bitbucket Pipelines configuration to understand its structure and intent, and then generates a functionally equivalent Buildkite pipeline.

Because Bitbucket configurations can include complex combinations of steps, parallel execution, caching, artifacts, and deployment targets, an AI Large Language Model (LLM) is used to achieve the best results in the translation process. The AI model _does not_ use any submitted data for its own training.
The goal of the Buildkite pipeline converter is to give you a starting point, so you can see how patterns you're used to in Bitbucket Pipelines would function in Buildkite Pipelines. In cases where Bitbucket features don't have a direct Buildkite Pipelines equivalent, the pipeline converter includes comments with suggestions about possible solutions and alternatives.

##### Using the Buildkite pipeline converter with Bitbucket Pipelines

You can immediately start experimenting with the Buildkite pipeline converter through the [CLI version](/docs/pipelines/converter#cli-buildkite-pipeline-converter-how-to-use-the-cli-buildkite-pipeline-converter) or via an [interactive web-based interface](/docs/pipelines/converter#interactive-web-version-how-to-use-the-web-buildkite-pipeline-converter).

##### How the translation works

Here are some examples of translations that the Buildkite pipeline converter will perform:

- **Steps** become Buildkite Pipelines [command steps](/docs/pipelines/configure/step-types/command-step). The `name` attribute becomes `label`, and `script` arrays become `command` arrays. Steps that need to be referenced by other steps are assigned a `key` attribute.
- **Global images** (`image` at the top level) are translated to the [Docker plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-buildkite-plugin/) on each step. Buildkite Pipelines has no global image setting, so the plugin is applied per-step or through a YAML anchor to avoid repetition.
- **Branch pipelines** (`pipelines.branches`) are translated using the `branches` attribute on individual steps. Branch patterns support wildcards (such as `release-*` or `feature/*`) and exclusions using the `!` prefix.
- **Pull request pipelines** (`pipelines.pull-requests`) require configuration in Buildkite's pipeline settings rather than YAML. PR-specific steps can use `if: build.pull_request.id != null` conditionals.
- **Parallel execution** (`parallel` blocks) is handled automatically in Buildkite Pipelines since steps without `depends_on` run in parallel by default. No special syntax is needed. Sequential dependencies are created using `depends_on` attributes.
- **Reusable step definitions** (`definitions.steps`) are translated to YAML anchors in a `common` section. Anchor syntax (`&name` and `*name`) works identically in both systems.
- **Caching** (`caches` and `definitions.caches`) is translated with TODO comments since Buildkite Pipelines caching requires additional setup. Hosted agents can enable container caching at the cluster level. Self-hosted agents can use the [cache plugin](https://buildkite.com/resources/plugins/buildkite-plugins/cache-buildkite-plugin/).
- **Artifacts** (`artifacts` with `download: true`) are translated to `artifact_paths` for uploads and `buildkite-agent artifact download` commands for downloads. Unlike Bitbucket Pipelines, Buildkite Pipelines requires explicit artifact downloads in subsequent steps.
- **Path-based conditions** (`condition.changesets.includePaths`) are translated to the native `if_changed` attribute in Buildkite Pipelines. This attribute is processed by the agent during `buildkite-agent pipeline upload`, so the converted YAML must be stored in the repository.
- **Step resource sizing** (`size: 2x`) is documented with TODO comments since resource allocation in Buildkite Pipelines depends on your agent infrastructure. Configure appropriately sized agent queues and target them using the `agents` attribute.
- **Timeouts** (`max-time` at the step level or `options.max-time` globally) are translated to `timeout_in_minutes` on each step. For global timeouts, configure a default timeout in the pipeline settings or use a YAML anchor.
- **Custom pipelines** (`pipelines.custom`) for manual triggers are translated with `if:` conditionals that check `build.source`.
Steps can be configured to run only when triggered through the UI, API, or trigger step.
- **Variables** defined in Bitbucket's repository settings are documented with guidance on configuring environment variables in Buildkite's pipeline settings. User-prompted variables (`variables` with `allowed-values`) are translated to [input steps](/docs/pipelines/configure/step-types/input-step) with fields.
- **After-script** commands are translated using shell `trap` for step-specific cleanup or documented with guidance on using repository hooks (`post-command`) for consistent cleanup across all steps.
- **Services** (`services` and `definitions.services`) for running sidecar containers are translated to the [Docker Compose plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) or documented with guidance on configuring service containers.
- **Fast-fail behavior** (`fail-fast` in parallel blocks) is translated to `cancel_on_build_failing: true` on steps that should be cancelled when the build enters a failing state.
- **Pipes** (Bitbucket's reusable integration components) require case-by-case handling. Common pipes are translated to equivalent Buildkite Pipelines [plugins](/docs/pipelines/integrations/plugins) or native commands. Pipes without direct equivalents include comments indicating manual configuration is required.

---

### Glossary

URL: https://buildkite.com/docs/pipelines/glossary

#### Pipelines glossary

The following terms describe key concepts to help you use Pipelines.

##### Agent

An agent is a small, reliable, and cross-platform build runner that connects your infrastructure to Buildkite. It polls Buildkite for work, runs jobs, and reports results. You can install agents on local machines, cloud servers, or other remote machines. You need at least one agent to run builds. To learn more, see the [Agent overview](/docs/agent).

##### Artifact

An artifact is a file generated during a build.
You can keep artifacts in a Buildkite-managed storage service or a third-party cloud storage service like Amazon S3, Google Cloud Storage, or Artifactory. Common uses include storing assets like logs and reports, or passing files between steps. To learn more, see [Build artifacts](/docs/pipelines/configure/artifacts).

##### Build

A build is a single run of a pipeline. You can trigger a build in various ways, including through the dashboard, API, as the result of a webhook, on a schedule, or even from another pipeline using a trigger step.

##### Buildkite organization administrator

A Buildkite organization administrator is a user with full administrative control over a Buildkite organization. Organization administrators can manage teams, configure organization-level settings, control pipeline and security permissions, and access usage reports and [audit logs](/docs/platform/audit-log). To learn more, see [User and team permissions](/docs/platform/team-management/permissions).

##### Cluster

A cluster groups [queues](#queue) of agents along with pipelines. Clusters allow teams to self-manage their agent pools, let admins create isolated sets of agents and pipelines within the one Buildkite organization, and help to make agents and queues more discoverable across your organization. To learn more, see the [Clusters overview](/docs/pipelines/security/clusters).

##### Dynamic pipeline

Dynamic pipelines define their steps at runtime using scripts, giving you the flexibility to only run the steps relevant to particular code changes and workflows. Dynamic pipelines are helpful when you have a complex build process that requires different steps to execute based on runtime conditions, such as the branch, the environment, or the results of previous steps. To learn more, see [Dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines).

##### Ephemeral agent

An ephemeral agent is a Buildkite agent that only operates for the duration in which it runs a [job](#job).
Such an agent disconnects either once its job is completed, or once the agent's idle time period has been reached. An ephemeral agent is created when one of the following options has been used to [start the Buildkite agent](/docs/agent/cli/reference/start):

- `--acquire-job`
- `--disconnect-after-job`
- `--disconnect-after-idle-timeout`

Learn more about ephemeral agents in [Pause and resume an agent](/docs/agent/self-hosted/pausing-and-resuming).

##### Hook

A hook is a method of customizing the behavior of Buildkite through lifecycle events. Hooks let you run scripts at different points of the agent or job lifecycle. Using hooks, you can extend the functionality of Buildkite and automate tasks specific to your workflow and requirements. To learn more, see [Hooks](/docs/agent/hooks).

##### Job

A job is the execution of a command step during a build. Jobs run the commands, scripts, or plugins defined in the step. A job can be in various states during its lifecycle, such as `pending`, `scheduled`, `running`, `finished`, `failed`, `canceled`, and others. These states represent the execution state of the job as it progresses through the build system. To learn more, see [Job states](/docs/pipelines/configure/defining-steps#job-states).

##### Pipeline

A pipeline is a container for modeling and defining workflows. Pipelines contain a series of steps to achieve goals like building, testing, and deploying software. To learn more, see the [Pipeline overview](/docs/pipelines).

##### Plugin

Plugins are small, self-contained pieces of extra functionality that help you customize Buildkite to your specific workflow. They modify command steps using hooks to perform actions like checking code quality, deploying to cloud services, or sending notifications. Plugins can be open source and available for anyone to use, or private for just your organization. To learn more, see [Plugins](/docs/pipelines/integrations/plugins).
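As a sketch of how this looks in practice, a command step references plugins under a `plugins` attribute; the label, command, plugin version tag, and image below are all illustrative:

```yaml
steps:
  - label: "Run tests in a container"
    command: "npm test"
    plugins:
      # The docker plugin runs the step's command inside the given image.
      # The version tag here is illustrative; pin to a real release.
      - docker#v5.0.0:
          image: "node:20"
```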
##### Queue

A queue defines agents on which pipeline builds can run their jobs. Queues are configured within a [cluster](#cluster), where each queue defines a particular group of agents, isolating a set of your pipeline's jobs and the agents they run on. Typical uses for queues include separating deployment agents and pools of agents for specific pipelines or teams. To learn more, see the [Queues overview](/docs/agent/queues) and [Manage queues](/docs/agent/queues/managing) pages.

##### Step

A step describes a single, self-contained task as part of a pipeline. You define a step in the pipeline configuration using one of the following [step types](/docs/pipelines/configure/step-types):

- Command step: Runs one or more shell commands on one or more agents.
- Wait step: Pauses a build until all previous jobs have completed.
- Block step: Pauses a build until it's manually unblocked.
- Input step: Pauses a build until information has been collected from a user.
- Trigger step: Creates a build on another pipeline.
- Group step: Displays a group of sub-steps as one parent step.

A step can be in one of the following internal _states_, which the [Buildkite agent can retrieve](/docs/agent/cli/reference/step#getting-a-step) when the step is ready to run or is currently running:

- `ignored`: The step is ignored due to a conditional evaluation.
- `waiting_for_dependencies`: The step is waiting for its dependencies to complete.
- `ready`: The step is ready to run but hasn't started yet.
- `running`: The step is currently running.
- `failing`: The step is in the process of failing.
- `finished`: The step has completed execution—usually follows either the `running` or `failing` state.
- `canceled`: The step has been canceled—follows the `waiting_for_dependencies`, `ready`, `running`, or `failing` state.
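The internal states above can be queried with the agent CLI linked above. As a hedged sketch (the step key `tests` is illustrative, and the command must run inside a job on an agent):

```shell
# Query the current state of the step whose key is "tests"
buildkite-agent step get "state" --step "tests"
```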
Once a step's run has completed with a state of `finished`, the [step's outcome](/docs/agent/cli/reference/step#getting-the-outcome-of-a-step) can be one of the following states:

- `neutral`: The passing or failure of the step's outcome is not relevant (for example, the outcome of a wait step).
- `passed`: The step's outcome is considered successful.
- `soft_failed`: The step's outcome is considered successful, but with a warning.
- `hard_failed`: The step's outcome is considered failed.
- `errored`: The step's outcome is considered failed because something happened to abort the step early.

A block or input step tracks the state of the build and its steps that ran before it, which can be `failed`, `passed`, or `running`. To learn more, see [Defining steps](/docs/pipelines/configure/defining-steps).

---

### Overview

URL: https://buildkite.com/docs/pipelines/source-control

#### Source control

Buildkite integrates with several popular code repository management systems (RMS), also known as version control systems (VCS). These integrations allow you to trigger builds from version control events.

* [GitHub](/docs/pipelines/source-control/github)
* [GitHub Enterprise](/docs/pipelines/source-control/github-enterprise)
* [GitLab](/docs/pipelines/source-control/gitlab)
* [Bitbucket](/docs/pipelines/source-control/bitbucket)
* [Bitbucket Server](/docs/pipelines/source-control/bitbucket-server)
* [Phabricator](/docs/pipelines/source-control/phabricator)
* [Other Git servers](/docs/pipelines/source-control/git)

---

### GitHub

URL: https://buildkite.com/docs/pipelines/source-control/github

#### GitHub

Buildkite can connect to a GitHub repository in your GitHub account or GitHub organization and use GitHub's [REST API endpoints for commit statuses](https://docs.github.com/en/rest/commits/statuses) to update the status of commits in pull requests. To complete this integration, you need admin privileges for your GitHub repository.
##### Connecting Buildkite and GitHub

You can use the [Buildkite app for GitHub](#connect-your-buildkite-account-to-github-using-the-github-app) to connect a Buildkite organization to a GitHub organization.

> 📘 Benefits of using the GitHub App
> Using the GitHub App removes the reliance on individual user connections to report build statuses. See the [changelog announcement](https://buildkite.com/changelog/102-github-app-integration).

If you want to [connect using OAuth](#connect-your-buildkite-account-to-github-using-oauth), you can still do so from your **Personal Settings**.

##### GitHub repository provider options

When you connect Buildkite to GitHub through a GitHub App, the **Repository Providers** page in your Buildkite organization settings presents two options:

- **GitHub** — a Buildkite GitHub App with full access permissions. This app has read access to your repository code and metadata, plus read and write access to checks, commit statuses, deployments, pull requests, and repository hooks. Use this option if you run builds on [Buildkite-hosted agents](/docs/agent/buildkite-hosted), because Buildkite needs code access to clone your repository.
- **GitHub (Limited Access)** — a limited-permissions Buildkite GitHub App. This app does not have code access, but has read access to metadata, plus read and write access to checks, commit statuses, deployments, pull requests, and repository hooks. Use this option if you run builds exclusively on [self-hosted agents](/docs/pipelines/architecture#self-hosted-hybrid-architecture).
###### Permissions comparison

| Permission | **GitHub** | **GitHub (Limited Access)** |
| --- | --- | --- |
| Code | Read | No access |
| Metadata | Read | Read |
| Checks | Read and write | Read and write |
| Commit statuses | Read and write | Read and write |
| Deployments | Read and write | Read and write |
| Pull requests | Read and write | Read and write |
| Repository hooks | Read and write | Read and write |

###### Choosing the right option

Select the full-access **GitHub** app if you use [Buildkite-hosted agents](/docs/agent/buildkite-hosted) to run builds. Select the **GitHub (Limited Access)** app if you run builds exclusively on [self-hosted agents](/docs/pipelines/architecture#self-hosted-hybrid-architecture).

> 📘 Using both GitHub Apps
> If you use both Buildkite-hosted and self-hosted agents, you can install both apps and scope each to the relevant repositories. Alternatively, you can install only the full access **GitHub** app, which works with both agent types.

##### Connect your Buildkite account to GitHub using the GitHub App

Connecting Buildkite and GitHub using the GitHub App lets your GitHub organization admins see permissions and manage access on a per-repository basis.

> 📘 Required permissions for adding a provider
> The user adding the provider needs to be a Buildkite user connected to a GitHub user who has administrative privileges in both the Buildkite and GitHub organizations.

1. Open your Buildkite organization's **Settings**.
1. Select [**Repository Providers**](https://buildkite.com/organizations/~/repository-providers).
1. Select **GitHub** or **GitHub (Limited Access)** depending on your requirements. See [GitHub repository provider options](#github-repository-provider-options) to determine which option is right for you.
1. Select **Connect to a new GitHub Account**. If you have never connected your Buildkite and GitHub accounts before, you will first need to select **Connect** and authorize Buildkite.
1. Select the GitHub organization you want to connect to your Buildkite organization.
1. Choose which repositories Buildkite should have access to, then select **Install**.

You can now [set up a pipeline](#set-up-a-new-pipeline-for-a-github-repository).

##### Buildkite GitHub permissions

The permissions Buildkite requests depend on which [GitHub repository provider option](#github-repository-provider-options) you select. Both options require the following permissions:

- Read access to metadata. Learn more about this from [GitHub's documentation](https://docs.github.com/en/rest/authentication/permissions-required-for-github-apps#repository-permissions-for-metadata).
- Read and write access to checks, commit statuses, deployments, pull requests, and repository hooks: this is needed for Buildkite to perform tasks such as running a build on pull requests and reporting that build status directly on the PR on GitHub.

The **GitHub** (full access) option additionally requests read access to code, which allows Buildkite-hosted agents to clone your repository. The **GitHub (Limited Access)** option does not request code access.

##### Set up a new pipeline for a GitHub repository

1. Select **Pipelines** > **New pipeline**.
1. Enter your pipeline details, including your GitHub repository URL in the form `git@github.com:your/repo`.
1. If you are still using the web steps visual editor, add at least one step to your pipeline. Refer to [Defining Steps - Adding steps](/docs/pipelines/configure/defining-steps#adding-steps) for more information.
1. Select **Create Pipeline**.
1. Follow the onscreen instructions to set up a webhook:
    1. Add a new webhook in GitHub.
    1. Paste in the provided webhook URL.
    1. Select `application/json` as the content type of the webhook.
    1. Select **Deployments**, **Merge groups**, **Pull requests**, and **Pushes** as events to trigger the webhook.
The repository webhook is required so that the Buildkite GitHub app does not need read access to your repository.
1. If using the YAML steps editor, add at least one step to your pipeline, then select **Save and Build**. Refer to [Defining Steps - Adding steps](/docs/pipelines/configure/defining-steps#adding-steps) for more information.

If you need to set up the webhook again, you can find the instructions linked at the bottom of the pipeline's GitHub settings page. You can edit your pipeline configuration at any time in your pipeline's **Settings**.

##### Branch configuration and settings

You can edit the version control provider settings for each pipeline from the pipeline's settings page. Go to **Pipelines** > your specific pipeline > **Settings** > your Git service provider.

If you need more control over your pipeline configuration, add a [pipeline.yml](/docs/pipelines/configure/defining-steps#adding-steps) to your repository. Then you can use [conditionals](/docs/pipelines/configure/conditionals) and [branch filtering](/docs/pipelines/configure/workflows/branch-configuration) to configure your pipeline.

> 📘 Build branches vs build pull requests
> If **Build branches** is enabled, Buildkite Pipelines runs builds on branch pushes, and those builds don't include pull request details. That's why pull request variables like `BUILDKITE_PULL_REQUEST_BASE_BRANCH` can be empty, even when the branch has an open pull request. If your pipeline needs pull request information, make sure **Build Pull Requests** is enabled. Consider turning off **Build branches** or limiting it to just your default branch (like `main`) so you don't end up with branch builds when you expect pull request builds.

##### Running builds on pull requests

To run builds for GitHub pull requests, edit the GitHub settings for your Buildkite pipeline and select **Build when pull request is opened or updated**. This triggers builds for the `opened` and `synchronize` pull request actions.
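The note above about missing pull request details can be handled defensively with a step conditional, so that PR-specific work only runs when pull request information is present; the label and script path here are illustrative:

```yaml
steps:
  # Only runs when the build has pull request details attached.
  # The label and script path are illustrative.
  - label: "PR-only checks"
    command: "scripts/pr-checks.sh"
    if: build.pull_request.id != null
```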
You can enable additional pull request actions to trigger builds:

- **Build when pull request becomes ready for review**: build when a draft pull request is marked ready for review.
- **Build when pull request is edited**: build when the title, description, or base branch of a pull request is changed. Choose between **Any edit** (triggers on all edits) and **Base branch changed only** (triggers only when the base branch is changed).
- **Build when pull request labels are changed**: build when labels are added to or removed from a pull request. Use the `build.pull_request.labels` conditional variable to filter by individual label names.
- **Build when pull request is reopened**: build when a closed pull request is reopened.
- **Build when pull request is converted to draft**: build when a pull request is converted to a draft.
- **Build when a review is requested**: build when a review is requested on a pull request.
- **Build when pull request is removed from merge queue**: build when a pull request is dequeued from a GitHub merge queue.
- **Build when pull request is from third-party forked repository**: build pull requests opened from third-party forks. Make sure to check the [managing secrets](/docs/pipelines/security/secrets/managing) guide if you choose to do this.

You can also configure these options:

- **Limit pull request branches**: filter which branches trigger pull request builds.
- **Skip when pull request has existing build for commit and branch**: skip creating a duplicate build if one already exists for the same commit and branch.
- **Skip when pull request source is default branch**: skip pull request builds when the source branch is the default branch.
- **Cancel deleted branch builds**: cancel running builds for a branch when the branch is deleted from GitHub.

If you want to control which third-party forks can trigger builds in GitHub, you can prefix the branches from third-party forks with the contributor's username.
For example, the `main` branch from `some-user` becomes `some-user:main`. You can then detect these using a pre-command hook or something similar before running a build. To enable prefixing the branch names, go to the GitHub settings for the pipeline and select **Prefix third-party fork branch names**.

If you want to run builds only on pull requests, set the **Branch Filter Pattern** in the pipeline to a branch name that will never occur (such as "this-branch-will-never-occur"). Pull request builds ignore the **Branch Filter Pattern**, and all pushes to other branches that don't match the pattern are ignored.

When you create a pull request, two builds are triggered: one for the pull request and one for the most recent commit. However, any commit made after the pull request is created only triggers one build.

##### Running builds on merge queues

To enable merge queue builds, edit the GitHub settings for the pipeline and select **Build merge queues**.

> 🚧 Ensure GitHub webhook has _Merge groups_ events enabled
> Buildkite relies on receiving `merge_group` webhook events from GitHub to create builds for merge groups in the merge queue. Ensure your pipeline's [webhook](/docs/pipelines/source-control/github#set-up-a-new-pipeline-for-a-github-repository) has the _Merge groups_ event enabled before enabling merge queue builds.

Enabling this setting prevents ordinary code pushes to `gh-readonly-queue/*` branches from creating builds; instead, builds are created in response to `merge_group` webhook events from GitHub. Merge queue builds ignore any pipeline-level branch filter settings and do not support [skipping via a commit message](/docs/pipelines/configure/skipping#ignore-a-commit).

To cancel running builds when the corresponding GitHub merge queue entry is destroyed, select the **Cancel builds for destroyed merge groups** option.
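If some steps should run only for merge queue builds, one hedged approach is to match GitHub's `gh-readonly-queue/` branch prefix mentioned above in a step conditional; the label and script path are illustrative:

```yaml
steps:
  # Runs only on builds for GitHub merge queue branches.
  # The label and script path are illustrative.
  - label: "Merge queue full test suite"
    command: "scripts/full-suite.sh"
    if: build.branch =~ /^gh-readonly-queue\//
```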
The way the agent handles the [`if_changed` attribute](/docs/agent/cli/reference/pipeline#apply-if-changed) during pipeline uploads can also be influenced via the **Use base commit when making `if_changed` comparisons** setting.

For more information about the interaction between GitHub merge queues and Buildkite, see our [merge queue tutorial](/docs/pipelines/tutorials/github-merge-queue).

##### Running builds on git tags

Builds are only run for tags when a [`push` event is triggered](https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads#push). To enable builds for `push` events for git tags, edit the **GitHub settings** for your Buildkite pipeline, and select the **Build Tags** checkbox.

Before triggering builds for git tags from the [API](/docs/apis/rest-api/builds#create-a-build) or a [scheduled build](/docs/pipelines/configure/workflows/scheduled-builds), make sure your agent is configured to fetch git tags: `BUILDKITE_GIT_FETCH_FLAGS="-v --prune --tags"`.

> 📘 Build tags and `BUILDKITE_BRANCH`
> When a build is triggered from a GitHub tag `push` event webhook, both the `BUILDKITE_TAG` and `BUILDKITE_BRANCH` environment variables are set to the name of the git tag being built.

##### Disabling GitHub webhooks

To stop all GitHub webhook-triggered builds for a pipeline, use the **Disable GitHub Webhooks** button in the **Disable Webhooks** section of your pipeline's GitHub settings. This blocks all webhook processing — no new builds will be created from any GitHub event. Your existing trigger settings are preserved. To resume webhook-triggered builds, select **Enable GitHub Webhooks** and your previous configuration will be restored.

##### Running builds on additional GitHub events

Beyond pushes, pull requests, and tags, Buildkite Pipelines can trigger builds from a broader set of GitHub webhook events.
These are configured in the **Additional Webhooks** section of your pipeline's GitHub settings and require the **Code** trigger mode (except where noted).

- **Pull request reviews**: trigger builds when a review is submitted or dismissed.
- **Check runs**: trigger builds when a check run from another GitHub App completes. Check runs from Buildkite Pipelines are automatically skipped to prevent feedback loops.
- **Releases**: trigger builds when a GitHub release is published, created, or released.
- **Issue comments**: trigger builds from comments on pull requests. Comments must match a configurable command word (default: `/bk`) and come from a trusted author (owner, member, or collaborator). Supports `exact` (default) and `contains` match modes.
- **Pull request review comments**: trigger builds from inline diff comments on pull requests. Like issue comments, these require a command word match and a trusted author (owner, member, or collaborator). Supports `exact` and `contains` match modes (useful for AI assistant triggers like `@claude`).
- **Deployment statuses**: trigger builds when a deployment status changes. Requires the **Deployment** trigger mode.
- **Branch and tag creation**: trigger builds when a new branch or tag is created.

##### Environment variables

GitHub webhook-triggered builds expose environment variables that you can use at runtime and in [conditionals](/docs/pipelines/configure/conditionals).
Some variables are available at runtime (in your build scripts and hooks), in conditionals, and in pipeline interpolation via `build.env()`, while others are only available in conditionals and pipeline interpolation.

**Available at runtime, in conditionals, and in pipeline interpolation:**

- `BUILDKITE_GITHUB_COMMENT_ID`: the comment that triggered the build (issue comments and review comments)
- `BUILDKITE_GITHUB_REVIEW_ID`: the review that triggered the build (pull request reviews)
- `BUILDKITE_GITHUB_EVENT`: the GitHub webhook event name (for example, `pull_request`, `check_run`, `release`)
- `BUILDKITE_GITHUB_ACTION`: the GitHub webhook action (for example, `opened`, `completed`, `published`)
- `BUILDKITE_GITHUB_DEPLOYMENT_ID`: the deployment ID (deployment status events)

**Available in conditionals and pipeline interpolation only:**

- `BUILDKITE_GITHUB_CHECK_RUN_NAME`, `BUILDKITE_GITHUB_CHECK_RUN_CONCLUSION`: check run details
- `BUILDKITE_GITHUB_RELEASE_TAG`, `BUILDKITE_GITHUB_RELEASE_DRAFT`, `BUILDKITE_GITHUB_RELEASE_PRERELEASE`: release details
- `BUILDKITE_GITHUB_REVIEW_STATE`: the review state (`approved`, `changes_requested`, and so on)
- `BUILDKITE_GITHUB_DEPLOYMENT_STATUS_STATE`, `BUILDKITE_GITHUB_DEPLOYMENT_STATUS_ENVIRONMENT`: deployment status details

##### Noreply email handling

When you [connect your GitHub account to Buildkite](#connecting-buildkite-and-github), the email address associated with the GitHub account is added to your Buildkite account. If you've configured GitHub not to display your email address, `[username]@users.noreply.github.com` or the more recent `[username+id]@users.noreply.github.com` is added instead. The email address of a commit is one of the ways Buildkite matches webhook builds to users.

##### Customizing commit statuses

The commit status is the label used to identify the Buildkite checks on your commits and pull requests on GitHub. Normally, Buildkite autogenerates these statuses.
For example, if you select **Update commit statuses** in your **Pipeline Settings**, your checks appear on your pull request as **buildkite/your-pipeline-name**.

You can customize the commit statuses (for example, to reuse the same pipeline for multiple components in a monorepo) at both the build and step level, using the [`notify`](/docs/pipelines/configure/notify) attribute in your `pipeline.yml`.

###### Build level

1. Add the following to your `pipeline.yml`, at the top level:

   ```yaml
   notify:
     - github_commit_status:
         context: "my-custom-status"
   ```

1. In **Pipeline** > your specific pipeline > **Settings** > **GitHub**, make sure **Update commit statuses** is not selected. Note that this prevents Buildkite from automatically creating and sending statuses for this pipeline, meaning you will have to handle all commit statuses through the `pipeline.yml`.
1. When you make a new commit or pull request, you should see **my-custom-status** as the commit status.

In a setup for a repository containing one codebase and one `pipeline.yml`, this customizes the commit status for the pipeline. However, if you have multiple `pipeline.yml` files in one repo, feeding into the same Buildkite pipeline, this allows you to have different statuses when building different sections of the repo. For example, if you have a monorepo containing three applications, you could use the same pipeline, with a different `pipeline.yml` file for each application. Each `pipeline.yml` can contain a different GitHub status.
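For instance, each application's pipeline file in a monorepo could set its own status context. The path, context name, and commands below are hypothetical, a sketch of the pattern rather than a prescribed layout:

```yaml
# app-one/pipeline.yml (hypothetical path), uploaded when app-one changes
notify:
  - github_commit_status:
      context: "app-one"

steps:
  - label: "Test app-one"
    command: "cd app-one && make test"
```

A sibling `app-two/pipeline.yml` would be identical apart from its `context`, so each application reports its own status against the same commit.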
A _build level_ GitHub commit status, set as part of an [uploaded pipeline YAML file](/docs/agent/cli/reference/pipeline#uploading-pipelines), is only reported _after_ the build has completed, because the `notify` block is evaluated after the build has started. A _pipeline level_ GitHub commit status, where the `notify` block is defined within the [YAML step editor of the Buildkite Pipelines interface](/docs/pipelines/configure/defining-steps#adding-steps), is evaluated when the build starts, which sends the commit status to GitHub immediately. To report the status as early as possible, move the GitHub status notification block to the pipeline level in the YAML step editor.

###### Step level

1. Add `notify` to a command in your `pipeline.yml`:

   ```yaml
   steps:
     - label: "Example Script"
       command: "script.sh"
       notify:
         - github_commit_status:
             context: "my-custom-status"
   ```

1. In **Pipeline** > your specific pipeline > **Settings** > **GitHub**, you can choose to either:
   + Make sure **Update commit statuses** is not selected. Note that this prevents Buildkite from automatically creating and sending statuses for this pipeline, meaning you will have to handle all commit statuses through the `pipeline.yml`.
   + Enable both **Update commit statuses** and **Create a status for each job**. Buildkite sends its default statuses as well as your custom status.
1. When you make a new commit or pull request, you should see **my-custom-status** as the commit status.

You can also define the commit status in a group step:

```yaml
steps:
  - group: "\:lock_with_ink_pen\: Security Audits"
    key: "audits"
    notify:
      - github_commit_status:
          context: "group status"
    steps:
      - label: "\:brakeman\: Brakeman"
        command: ".buildkite/steps/brakeman"
      - label: "\:bundleaudit\: Bundle Audit"
        command: ".buildkite/steps/bundleaudit"
      - label: "\:yarn\: Yarn Audit"
        command: ".buildkite/steps/yarn"
      - label: "\:yarn\: Outdated Check"
        command: ".buildkite/steps/outdated"
```

When you set a custom commit status on a group step, GitHub only displays one status for the group. A passing result only shows when all jobs in the group pass. If you want to show custom commit statuses for each job, set them on the individual steps.

##### Using one repository in multiple pipelines and organizations

If you want to use the same repository in multiple pipelines (including pipelines in different Buildkite organizations), you need to configure a separate webhook for each pipeline. Follow the webhook setup instructions in the Buildkite UI. Buildkite shows you these instructions when you create the pipeline, but you can also find them in **Pipeline** > your specific pipeline > **Settings** > your Git service provider > your Git service provider's **Setup Instructions**.

If you want to integrate the same repository into multiple Buildkite organizations, you need to link each organization to GitHub using different Buildkite user accounts. You must use different user accounts because there's a one-to-one relationship between a Buildkite user and a GitHub user. The user needs admin permissions on the GitHub organization to link it to Buildkite. You can only install the Buildkite app for GitHub once per GitHub organization.

##### Build skipping

You may not always want to rebuild on every commit or branch.
You can configure Buildkite to ignore [individual commits](/docs/pipelines/configure/skipping#ignore-a-commit) or [branches](/docs/pipelines/configure/workflows/branch-configuration), or to [skip builds](/docs/pipelines/configure/skipping) under certain conditions.

##### Connect your Buildkite account to GitHub using OAuth

To connect your GitHub account:

1. Open your [Buildkite **Personal Settings**](https://buildkite.com/user/settings).
1. Select [**Connected Apps**](https://buildkite.com/user/connected-apps).
1. Select the GitHub **Connect** button.
1. Select **Authorize Buildkite**. GitHub redirects you back to your **Connected Apps** page.

You can now [set up a pipeline](#set-up-a-new-pipeline-for-a-github-repository).

##### Using GitHub App installation access tokens

> 📘 The difference between repository authentication and account connection
> Configuring a GitHub App for repository authentication is different from using the [Buildkite GitHub App](#connect-your-buildkite-account-to-github-using-the-github-app) to connect your Buildkite account to GitHub.

An alternative to using SSH keys for accessing your private repositories is to use a GitHub App and GitHub's [installation access tokens](https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/generating-an-installation-access-token-for-a-github-app). This approach requires a private key for generating a JSON Web Token (JWT), which is exchanged for an installation access token. The repository permissions of the GitHub App can be scoped to read-only, and every generated installation access token expires after 1 hour.

###### Configuring a GitHub App for repository authentication

> 📘 GitHub Organization access prerequisites
> You need to be an admin of the GitHub organization to create and install the GitHub App and follow the steps in these instructions.
###### Create the GitHub App

To register a GitHub App, follow the GitHub [documentation](https://docs.github.com/en/enterprise-cloud@latest/apps/creating-github-apps/registering-a-github-app/registering-a-github-app#registering-a-github-app). Configure the new GitHub App as follows:

- GitHub App name: choose a unique name (for example, `buildkite-agent-ro-access`)
- Homepage URL: your company's homepage
- Webhook:
  + Uncheck **Active** (webhooks are not required)
  + Webhook URL: leave blank
  + Secret: leave blank
- Permissions:
  + Repository permissions:
    - Contents: choose either `Read-only` or `Read and write`, depending on whether write access will be required to push files
    - Metadata: select `Read-only` (required for basic repository info)
    - Pull requests: choose either `Read-only` or `Read and write`, depending on whether read or write access will be required for pull requests
- Where can this GitHub App be installed?
  + Choose **Only on this account**

After the GitHub App has been configured with the settings outlined above, click the **Create GitHub App** button. You will see the **General settings** of the new GitHub App.

> 📘 GitHub App's Client ID
> The GitHub App's Client ID, displayed on the General settings page, will be required for generating installation access tokens. Make sure you have this value available.

###### Generate authentication keys

To create a JWT that can be exchanged for an installation access token, a private key must be generated for the GitHub App. This private key can then be stored in [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) and securely accessed by a Buildkite agent.
To generate a private key:

- In your GitHub App's **General settings**, scroll to **Private keys**.
- Click the **Generate a private key** button. This downloads a `.pem` file containing the newly generated private key.
- Create a new [Buildkite secret](/docs/pipelines/security/secrets/buildkite-secrets) in the [cluster(s)](/docs/pipelines/security/clusters) containing the Buildkite agents that require access to your private repositories, and add the contents of the `.pem` file as the secret's value.

> 📘 Private key Buildkite secret
> The Buildkite secret's value contains the private key, and the secret's name will be required when generating installation access tokens, as it is referenced by the `buildkite-agent secret get` command in the agent hook. Make sure you have this name available.

###### Install the newly created GitHub App

After creating the GitHub App, you can install it into your account. To install the GitHub App, go to the app's settings and select **Install App** from the left-hand menu. Choose the account into which you want to install the GitHub App, then choose the repositories that the GitHub App will have access to, based on the repository permissions selected during the GitHub App's creation. After selecting the GitHub App's repository access, click the **Install** button.

> 📘 GitHub App's Installation ID
> The GitHub App's Installation ID will be required for generating installation access tokens. This value can be found at the end of the URL after installation is complete: `.../settings/installations/`. Make sure you have this value available.

###### Generating tokens

The GitHub documentation describes the [process](https://docs.github.com/en/enterprise-cloud@latest/apps/creating-github-apps/authenticating-with-a-github-app/generating-an-installation-access-token-for-a-github-app#generating-an-installation-access-token) of generating a JWT and then exchanging it for an installation access token.
There are a few examples available that show how you can [generate a JWT](https://docs.github.com/en/enterprise-cloud@latest/apps/creating-github-apps/authenticating-with-a-github-app/generating-a-json-web-token-jwt-for-a-github-app#generating-a-json-web-token-jwt) using some common programming languages. The example that follows uses Bash to configure a `pre-checkout` [agent hook](/docs/agent/hooks#hook-locations-agent-hooks).

###### Configure agent hook

> 📘 Package requirements
> The `pre-checkout` hook example below requires the `openssl` and `jq` packages to be installed and available to the Buildkite agent performing the checkout.

To have the agent generate a GitHub App installation token, add the following code to your [agent hooks directory](/docs/agent/hooks#hook-locations) as a `pre-checkout` hook, configuring the variables at the beginning of the hook with the GitHub App's Client ID (`client_id`), Installation ID (`installation_id`), and Buildkite secret name (`private_key_secret_name`). The token exchange follows GitHub's documented installation access token API:

```bash
#!/usr/bin/env bash
set -euo pipefail

echo "~~~ \:lock_with_ink_pen\: Generating JWT for GitHub App access token exchange"

client_id= # Client ID of GitHub App
private_key_secret_name= # Buildkite Secret containing private key
installation_id= # Installation ID of GitHub App

pem=$( buildkite-agent secret get "${private_key_secret_name}" )

now=$(date +%s)
iat=$((now - 60))  # Issued 60 seconds in the past
exp=$((now + 600)) # Expires 10 minutes in the future

b64enc() { openssl base64 | tr -d '=' | tr '/+' '_-' | tr -d '\n'; }

header_json='{
    "typ":"JWT",
    "alg":"RS256"
}'
# Header encode
header=$( echo -n "${header_json}" | b64enc )

payload_json="{
    \"iat\":${iat},
    \"exp\":${exp},
    \"iss\":\"${client_id}\"
}"
# Payload encode
payload=$( echo -n "${payload_json}" | b64enc )

# Signature
header_payload="${header}"."${payload}"
signature=$(
    openssl dgst -sha256 -sign <(echo -n "${pem}") \
        <(echo -n "${header_payload}") | b64enc
)

jwt="${header_payload}"."${signature}"

echo "~~~ Exchanging JWT for installation access token"
token=$(
    curl --silent --fail --request POST \
        --url "https://api.github.com/app/installations/${installation_id}/access_tokens" \
        --header "Accept: application/vnd.github+json" \
        --header "Authorization: Bearer ${jwt}" \
        | jq -r '.token' # jq parses the token from the API response
)

# Store the token so git uses HTTPS with the installation token
# instead of SSH for github.com checkouts
echo "https://x-access-token:${token}@github.com" > ~/.git-credentials
git config --global url."https://github.com/".insteadOf git@github.com:
git config --global credential.helper store
```

---

### GitHub Enterprise

URL: https://buildkite.com/docs/pipelines/source-control/github-enterprise

#### GitHub Enterprise Server

Buildkite can connect to your GitHub Enterprise Server and use the [GitHub Status API](https://docs.github.com/en/rest/commits/statuses) to update the status of commits in pull requests. This guide describes the setup for self-hosted GitHub Enterprise Server. GitHub Enterprise Cloud users should refer to [GitHub](/docs/pipelines/source-control/github).

> 📘 Buildkite plan availability and GitHub Enterprise version
> GitHub Enterprise is only available to Buildkite customers on [Pro or Enterprise](https://buildkite.com/pricing) plans.
> This guide is based on GitHub Enterprise version 2.16.3. Earlier or later versions may have different menus and headings for the OAuth app registration. All of the Buildkite settings will remain the same.

##### Step 1: Register Buildkite as an OAuth app

In your GitHub Enterprise organization settings, select **OAuth Apps** under **Developer Settings**, then select **Register an application**. Fill out the form with the following values:

* Name: `Buildkite`
* URL: `https://buildkite.com`
* Callback URL: `https://buildkite.com/user/authorize/github_enterprise/callback`

Select **Register application** at the bottom of the form. After successfully registering your application, you can optionally add a logo to your app.

Make a note of your Client ID and Client Secret; you will need them to connect your GitHub Enterprise Server with Buildkite in the next step.

##### Step 2: Update your Buildkite organization settings

1. Open your Buildkite organization's Settings and choose [**Repository Providers**](https://buildkite.com/organizations/~/repository-providers).
1. Select **GitHub Enterprise Server**.
1. 
Enter your settings:
   - The URL and public proxy URL of your GitHub Enterprise Server
   - The Client ID and Client Secret from the GitHub OAuth App you created in Step 1
   - If you're using self-signed certificates, make sure the **Verify TLS Certificate** checkbox is not selected.
1. Select **Save GitHub Enterprise Settings** to save your settings.

After saving, the **Secret** field appears blank. Buildkite has saved it, and will not display it. You can optionally supply a TLS certificate pair to be used by Buildkite as a client certificate when contacting your GitHub Enterprise endpooints.

##### Step 3: Connect your GitHub Enterprise account to Buildkite

For Buildkite to mark commits and pull requests as pass or fail, you need to authorize your GitHub Enterprise user account with Buildkite.

1. In your Buildkite **Personal Settings**, select [Connected Apps](https://buildkite.com/user/connected-apps). Here you'll see your GitHub Enterprise Server along with any other connected apps.
1. Select **Connect** next to **GitHub Enterprise**.
1. Buildkite redirects you back to your GitHub Enterprise Server, where it asks you to authorize your new Buildkite OAuth app to use your GitHub Enterprise account. Select **Authorize** to complete your setup.

That's it! Next time you create a pipeline with a repository that's either `https://git.mycompany.com/acme-inc/app.git` or `git@git.mycompany.com:acme-inc/app.git`, Buildkite will recognize that it's hosted on your GitHub Enterprise Server, and use your newly created OAuth authorization to update the commit statuses.

##### Known limitations for additional webhook events

The Buildkite GHES App manifest subscribes to `create`, `delete`, and `release` webhook events. However, GitHub only delivers these events if the App has `contents: read` permission. In the GHES App manifest, `contents: read` is only included when the customer opts in to code access by choosing "Buildkite (with code access)" during setup.
This means GHES installations **without** code access will not receive `create`, `delete`, or `release` events. The corresponding pipeline settings (branch and tag creation and release triggers) will have no effect. The `cancel_deleted_branch_builds` setting is not affected, because branch deletion is also detected through `push` events.

> 📘 To enable these events, reinstall the GitHub App with code access enabled.

See the [GitHub integration docs](/docs/pipelines/source-control/github#running-builds-on-additional-github-events) for details on additional webhook events.

##### Transferring ownership

If you need to leave your current GitHub Enterprise Organization, you need to transfer the OAuth ownership first. Without this, the remaining members of your Buildkite team who are using that GitHub Enterprise Organization for OAuth won't be able to log in. To correctly transfer the OAuth ownership of your GitHub Enterprise Organization, see GitHub's official documentation for [Transferring ownership of an OAuth App](https://docs.github.com/en/developers/apps/managing-oauth-apps/transferring-ownership-of-an-oauth-app) and [Maintaining ownership continuity for your organization](https://docs.github.com/en/organizations/managing-peoples-access-to-your-organization-with-roles/maintaining-ownership-continuity-for-your-organization).

##### Branch configuration and settings

You can edit the version control provider settings for each pipeline from the pipeline's settings page. Go to **Pipelines** > your specific pipeline > **Settings** > your Git service provider. If you need more control over your pipeline configuration, add a [pipeline.yml](/docs/pipelines/configure/defining-steps#adding-steps) to your repository. Then you can use [conditionals](/docs/pipelines/configure/conditionals) and [branch filtering](/docs/pipelines/configure/workflows/branch-configuration) to configure your pipeline.
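As a sketch of the kind of control a `pipeline.yml` gives you (the step labels and script paths here are hypothetical), a step can be limited to certain branches with `branches`, or guarded with an `if` conditional:

```yaml
steps:
  # Runs only for main and release branches
  - label: "Deploy"
    command: "scripts/deploy.sh"
    branches: "main release/*"

  # Runs only when the build was triggered by a pull request
  - label: "PR checks"
    command: "scripts/pr-checks.sh"
    if: build.pull_request.id != null
```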
##### Firewalled installs

If your GitHub Enterprise is behind a firewall, you'll need to allow Buildkite's IP addresses so we can perform OAuth authentications using the GitHub Enterprise API to update your pull request statuses. All Buildkite network traffic to your GitHub Enterprise Server will come from a set list of IP addresses. As the IP addresses are subject to change, it is best to retrieve them directly from the [Meta API endpoint](/docs/apis/rest-api/meta#get-meta-information). Configure your network to allow traffic from all IP addresses returned by the endpoint.

For additional security you can create a proxy that allows only the API endpoints we require:

* `/api/v3/repos/.*/.*/statuses/.*`
* `/api/v3/user`
* `/api/v3/user/emails`
* `/login/oauth`

The following is an example [NGINX](https://www.nginx.com) server configuration that proxies the required URLs and can be used with the _Public API URL_ GitHub Enterprise setting in Buildkite:

```nginx
daemon off;

events {
  worker_connections 1024;
}

http {
  server {
    listen 443 ssl;

    location / {
      # Your own IPs
      allow ...;
      deny all;
    }

    location ~ ^/api/v3/repos/.*/.*/statuses {
      proxy_pass https://ghe.internal:443;
      # Allow for OAuth Buildkite App to update commit statuses
      # IPs subject to change - https://buildkite.com/docs/apis/rest-api/meta#get-meta-information
      allow 100.24.182.113;
      allow 35.172.45.249;
      allow 54.85.125.32;
      deny all;
    }

    location = /api/v3/user {
      proxy_pass https://ghe.internal:443;
      # Allow for OAuth Buildkite App
      # IPs subject to change - https://buildkite.com/docs/apis/rest-api/meta#get-meta-information
      allow 100.24.182.113;
      allow 35.172.45.249;
      allow 54.85.125.32;
      deny all;
    }

    location = /api/v3/user/emails {
      proxy_pass https://ghe.internal:443;
      # Allow for OAuth Buildkite App
      # IPs subject to change - https://buildkite.com/docs/apis/rest-api/meta#get-meta-information
      allow 100.24.182.113;
      allow 35.172.45.249;
      allow 54.85.125.32;
      deny all;
    }

    location /login/oauth {
      proxy_pass https://ghe.internal:443;
      # Allow for OAuth Buildkite App to authorize
      # IPs subject to change - https://buildkite.com/docs/apis/rest-api/meta#get-meta-information
      allow 100.24.182.113;
      allow 35.172.45.249;
      allow 54.85.125.32;
      # Your own IPs
      allow ...;
      deny all;
    }
  }
}
```

Learn more about restricting access to your GitHub Enterprise Server on firewalled or proxied services in [Restricting Access to Proxied TCP Resources](https://docs.nginx.com/nginx/admin-guide/security-controls/controlling-access-proxied-tcp/) in the NGINX docs.

##### Multiple GitHub Enterprise integrations

You can set up multiple GitHub Enterprise integrations with your Buildkite organization. However, due to the OAuth installation requirements, each integration must be configured by a unique user. Each user must possess admin permissions in both Buildkite and GitHub.

##### Using one repository in multiple pipelines and organizations

If you want to use the same repository in multiple pipelines (including pipelines in different Buildkite organizations), you need to configure a separate webhook for each pipeline. Follow the webhook setup instructions in the Buildkite UI. Buildkite shows you these instructions when you create the pipeline, but you can also find them in **Pipeline** > your specific pipeline > **Settings** > your Git service provider > your Git service provider's **Setup Instructions**.

If you want to integrate the same repository into multiple Buildkite organizations, you need to link each organization to GitHub using different Buildkite user accounts. You must use different user accounts because there's a one-to-one relationship between a Buildkite user and a GitHub user. The user needs admin permissions on the GitHub organization to link it to Buildkite. You can only install the Buildkite app for GitHub once per GitHub organization.

##### Build skipping

You may not always want to rebuild on every commit or branch.
You can configure Buildkite to ignore [individual commits](/docs/pipelines/configure/skipping#ignore-a-commit) or [branches](/docs/pipelines/configure/workflows/branch-configuration), or to [skip builds](/docs/pipelines/configure/skipping) under certain conditions.

---

### GitLab

URL: https://buildkite.com/docs/pipelines/source-control/gitlab

#### GitLab

You can use Buildkite to run builds on [GitLab](https://about.gitlab.com/) commits.

##### GitLab repositories

If you host your repositories on [gitlab.com](https://gitlab.com/), enter your gitlab.com repository URL when you create your pipeline in Buildkite (for example, `git@gitlab.com:your/repo.git`) and follow the instructions provided on that page to set up webhooks.

##### GitLab Self-Managed repositories

You can also use repositories from your own self-managed GitLab service, but you'll need to connect it to Buildkite first.

> 📘
> The earliest supported version of GitLab is 7.4.

1. Open your Buildkite organization's **Settings** and choose [**Repository Providers**](https://buildkite.com/organizations/-/repository-providers).
1. Select **GitLab Self-Managed**.
1. Enter the URL to your GitLab installation (for example, `https://git.example.org`).
1. You can optionally specify a list of IP addresses to restrict where builds can be triggered from. This field accepts a space-separated list of networks in [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
1. Select **Save Settings** before leaving this page.
1. Create a new pipeline on Buildkite using your GitLab repository's URL (for example, `git@git.mycompany.com:your/repo.git`) and follow the instructions on the pipeline creation page.

> 📘 Verify your GitLab account
> To ensure that the commit author from GitLab is a verified Buildkite account user, a public email must be specified in the user's GitLab account. This public email must match their Buildkite user account email.
##### Branch configuration and settings

You can edit the version control provider settings for each pipeline from the pipeline's settings page. Go to **Pipelines** > your specific pipeline > **Settings** > your Git service provider. If you need more control over your pipeline configuration, add a [pipeline.yml](/docs/pipelines/configure/defining-steps#adding-steps) to your repository. Then you can use [conditionals](/docs/pipelines/configure/conditionals) and [branch filtering](/docs/pipelines/configure/workflows/branch-configuration) to configure your pipeline.

##### Using one repository in multiple pipelines and organizations

If you want to use the same repository in multiple pipelines (including pipelines in different Buildkite organizations), you need to configure a separate webhook for each pipeline. Follow the webhook setup instructions in the Buildkite UI. Buildkite shows you these instructions when you create the pipeline, but you can also find them in **Pipeline** > your specific pipeline > **Settings** > your Git service provider > your Git service provider's **Setup Instructions**.

##### Build skipping

You may not always want to rebuild on every commit or branch. You can configure Buildkite to ignore [individual commits](/docs/pipelines/configure/skipping#ignore-a-commit) or [branches](/docs/pipelines/configure/workflows/branch-configuration), or to [skip builds](/docs/pipelines/configure/skipping) under certain conditions.

##### Commit statuses

Buildkite Pipelines can update commit statuses in GitLab. You can then see the status of your builds from your GitLab.com commits and merge requests, with direct links back to your Buildkite Pipelines build.
For GitLab.com, connect your Buildkite and GitLab user accounts by going to your Buildkite user account's **Personal Settings** in the global navigation, then opening the **Connected Apps** page. Next, in your Buildkite organization, go to **Pipelines** > your specific pipeline > **Settings** > **GitLab**, and make sure the **Update commit statuses** checkbox is selected.

For a self-managed GitLab service, ensure you have configured API authentication for your Buildkite organization's GitLab repository provider. To do this, select **Settings** in the global navigation, then open the **Repository Providers** > **GitLab Self-Managed** page. Then update your pipeline's repository settings as above.

---

### Bitbucket

URL: https://buildkite.com/docs/pipelines/source-control/bitbucket

#### Bitbucket

Buildkite integrates with [Bitbucket](https://bitbucket.org/) to provide automated builds based on your source control. You can run a build every time you push code to Bitbucket, and pull requests can have their build status live-updated as builds progress. This guide shows you how to set up your Bitbucket builds with Buildkite.

##### Set up the Bitbucket webhook

Once you've created a pipeline in Buildkite and copied in your Bitbucket repository URL, Buildkite shows you setup instructions for configuring your Bitbucket webhooks. You can also find these instructions by following the **Bitbucket Setup Instructions** link on your Buildkite pipeline's **Settings** page.

The setup instructions give you:

- A direct link to your Bitbucket repository's **Webhooks** settings
- Step-by-step instructions
- A custom webhook URL for the pipeline

Once you've followed the link, you can add a new webhook. After filling out the webhook details using the instructions from your Buildkite pipeline settings, select **Save**, and you're ready to trigger a build.
##### Enable commit status updates

If you want your Bitbucket pull request's build status icons to update as builds progress, you need to connect your Bitbucket account with Buildkite. You only need to do this once, and if you don't need build status updates you can skip this step altogether.

To connect your Bitbucket account:

1. Open Buildkite's **Personal Settings**.
2. Choose **Connected Apps**.
3. Select **Connect** next to **Bitbucket**.

Buildkite prompts you to grant permission to post status updates, then redirects back to your **Connected Apps** page.

##### Branch configuration and settings

You can edit the version control provider settings for each pipeline from the pipeline's settings page. Go to **Pipelines** > your specific pipeline > **Settings** > your Git service provider. If you need more control over your pipeline configuration, add a [pipeline.yml](/docs/pipelines/configure/defining-steps#adding-steps) to your repository. Then you can use [conditionals](/docs/pipelines/configure/conditionals) and [branch filtering](/docs/pipelines/configure/workflows/branch-configuration) to configure your pipeline.

##### Using one repository in multiple pipelines and organizations

If you want to use the same repository in multiple pipelines (including pipelines in different Buildkite organizations), you need to configure a separate webhook for each pipeline. Follow the webhook setup instructions in the Buildkite UI. Buildkite shows you these instructions when you create the pipeline, but you can also find them in **Pipeline** > your specific pipeline > **Settings** > your Git service provider > your Git service provider's **Setup Instructions**.

##### Build skipping

You may not always want to rebuild on every commit or branch.
You can configure Buildkite to ignore [individual commits](/docs/pipelines/configure/skipping#ignore-a-commit) or [branches](/docs/pipelines/configure/workflows/branch-configuration), or to [skip builds](/docs/pipelines/configure/skipping) under certain conditions.

---

### Bitbucket Server

URL: https://buildkite.com/docs/pipelines/source-control/bitbucket-server

#### Bitbucket Server

Buildkite integrates with Bitbucket Server to provide automated builds based on your source control. This guide shows you how to set up your Bitbucket Server builds with Buildkite. You can run a build every time you push code to Bitbucket Server, using a webhook that you create on your Bitbucket Server.

> 📘 Buildkite plan availability and Bitbucket Server version
> Bitbucket Server is only available to Buildkite customers on [Pro or Enterprise](https://buildkite.com/pricing) plans.
> This guide is based on Bitbucket Server version 7.11.1. Earlier or later versions may have variations in the interface.

##### Step 1: connect Bitbucket Server and set up a pipeline

1. Select **Settings** to open the **Organization Settings** page.
1. Navigate to **Repository Providers**.
1. Select **Bitbucket Server**.
1. In **URLs**, enter the address of your Bitbucket Server, including a port if needed (for example, `localhost:8000`). You can also restrict which network addresses are allowed to trigger builds using webhooks in **Allowed IP Addresses** in **Network Settings**.
1. Select **Save Settings**.
1. Set up a pipeline as normal. Refer to [Pipelines](/docs/pipelines) for more information.

##### Step 2: confirm your setup

If your configuration worked, Buildkite automatically recognizes your repository URL as a Bitbucket Server repository. To check this, go to **Pipelines** > your specific pipeline > **Settings**. You should see **Bitbucket Server** in the sidebar as a configurable area for your pipeline.
##### Step 3: work through the in-app guide to set up your webhook

Buildkite includes built-in instructions on how to set up a Bitbucket Server webhook. This webhook allows Bitbucket Server to trigger Buildkite builds in response to events like code pushes and pull requests.

1. Navigate to **Pipelines** > your specific pipeline > **Settings** > **Bitbucket Server**.
1. Select **Bitbucket Server Setup Instructions**.
1. Follow the on-screen instructions to configure your webhook.

##### Branch configuration and settings

You can edit the version control provider settings for each pipeline from the pipeline's settings page. Go to **Pipelines** > your specific pipeline > **Settings** > your Git service provider. If you need more control over your pipeline configuration, add a [pipeline.yml](/docs/pipelines/configure/defining-steps#adding-steps) to your repository. Then you can use [conditionals](/docs/pipelines/configure/conditionals) and [branch filtering](/docs/pipelines/configure/workflows/branch-configuration) to configure your pipeline.

##### Using one repository in multiple pipelines and organizations

If you want to use the same repository in multiple pipelines (including pipelines in different Buildkite organizations), you need to configure a separate webhook for each pipeline. Follow the webhook setup instructions in the Buildkite UI. Buildkite shows you these instructions when you create the pipeline, but you can also find them in **Pipeline** > your specific pipeline > **Settings** > your Git service provider > your Git service provider's **Setup Instructions**.

##### Build skipping

You may not always want to rebuild on every commit or branch. You can configure Buildkite to ignore [individual commits](/docs/pipelines/configure/skipping#ignore-a-commit) or [branches](/docs/pipelines/configure/workflows/branch-configuration), or to [skip builds](/docs/pipelines/configure/skipping) under certain conditions.
--- ### Phabricator URL: https://buildkite.com/docs/pipelines/source-control/phabricator #### Phabricator [Phabricator](https://phacility.com/phabricator/) can trigger Buildkite builds on new revisions through Harbormaster. Phabricator and Buildkite integrate using webhooks. Phabricator triggers builds in Buildkite with webhooks, then Buildkite reports the status back to Phabricator also using webhooks. ##### Before you start Check that your repository is activated in your Phabricator instance and Harbormaster is installed. Configure a pipeline for that repository in Buildkite. You'll also need to create a Buildkite API access token. >📘 > Admin access in both Phabricator and your Buildkite organization will be required. ##### Step 1: New Phabricator build step Create a new Build Plan in Harbormaster. Inside Harbormaster open **Manage Build Plans**. Click **Create Build Plan** located in the upper right. Provide a name then click **Create Build Plan**. Next, add a step to the Build Plan. A Phabricator **Build Step** creates a build on one Buildkite **pipeline**. If you need to create builds on multiple pipelines, create multiple Build Steps. Click **Add Build Step** on the next screen. Then select **Build with Buildkite**. Keep this screen open while you configure Buildkite. ##### Step 2: Configure Buildkite notification webhook Create a [new API access token](https://buildkite.com/user/api-access-tokens/new) in your Buildkite Personal Settings. Provide a description, then select your organization. Select the `read_builds` and `write_builds` scopes. Create the token, then copy the token from the subsequent screen. Next, add a **Webhook Notification** in your Buildkite organization's Notification Services. From Phabricator's **Webhook Configuration**, copy the **Webhook URL** (it should end in `/harbormaster/hook/buildkite/`) and paste it into the **Webhook URL** field in Buildkite. Take a copy of the autogenerated value in the **Token** field in Buildkite. 
Select the `build.finished` Event, then save the notification settings.

##### Step 3: Complete Phabricator build step

In Phabricator, click the **Add New Credential** button next to **API Token**. Provide a name and use the Buildkite API access token that you created earlier for the **Token** field. Fill in the Buildkite **Organization Name** and **Pipeline Name**. Use the Token from Buildkite's Notification Webhook for **Webhook Token** in Phabricator. Finally, click **Create Build Step**.

##### Step 4: Test with a manual build

Click **Run Plan Manually** on the next screen. Provide a revision ID from the repository inside Phabricator. Your Buildkite pipeline should run, then report back to Phabricator.

##### Step 5: Configure builds on new commits

The Herald application acts as a "trigger" inside Phabricator. You can create a Herald rule that runs your Build Step on new commits. Open Herald, and click **Create Herald Rule**. Select **Commits** on the next screen, then select **Object** on the screen after that. This allows you to connect a rule to a specific repository, which you'll configure on the next screen. Fill in the repository name. It should start with `r`, so this example is using `rDEMO`.

Next, configure an action associated with this rule. Adding conditions is optional, but you could use this to, for example, limit builds to specific branches. Select **Run Build Plan**, then provide the Build Plan created in the previous step. Finally, press **Save Rule**.

Now that your setup is complete, every time there is a new commit in your repository, a build runs on your Buildkite pipeline and the status appears in Phabricator.

---

### Other Git servers

URL: https://buildkite.com/docs/pipelines/source-control/git

#### Other Git servers

If your Git server isn't an integrated repository provider, then you can trigger builds using Git hook scripts and the Buildkite REST API.
This guide explains how to trigger builds when you push to a Git server. For example, if you're using a proprietary Git server, then you can trigger builds on push with a post-receive hook. This method can be adapted for other Git events or for running Buildkite builds from arbitrary scripts and services.

##### Before you start

To follow along with the steps in this guide, you need the following:

- An [API access token](/docs/apis/managing-api-tokens)
- The ability to run server-side Git hooks

  If your Git server is hosted on a platform that restricts or prohibits running arbitrary scripts, such as GitHub, then this approach won't work.

- Familiarity with the concepts of executable shell scripts, Buildkite pipelines and builds, and REST APIs

##### Git hooks at a glance

Git runs hooks — specially named executables — at certain Git lifecycle events, such as before a commit or after a push. Git runs executables found in:

- The `hooks` directory of a [bare repository](https://git-scm.com/docs/gitglossary#Documentation/gitglossary.txt-aiddefbarerepositoryabarerepository) (more common on servers)
- The `.git/hooks` directory of a repository with a [worktree](https://git-scm.com/docs/gitglossary#Documentation/gitglossary.txt-aiddefworktreeaworktree) (less common on servers)
- A directory set by the [`core.hooksPath`](https://git-scm.com/docs/git-config#Documentation/git-config.txt-corehooksPath) configuration variable

For example, after a push to the bare repository at the path `/repos/demo-repo/`, Git checks for the existence of an executable file `/repos/demo-repo/hooks/post-receive`. If it exists, it runs the file with arguments containing details about the push. The post-receive hook is a convenient place to trigger builds using the Buildkite REST API.

##### Step 1: Create a pipeline

If you haven't already, create [a pipeline to run](/docs/pipelines/configure/defining-steps) for the repository.
After you've created the pipeline, make a note of the organization slug and pipeline slug in the pipeline URL. You need both for the next step. For example, in the pipeline settings URL `https://buildkite.com/example-org/git-pipeline-demo/settings`, `example-org` and `git-pipeline-demo` are the organization and pipeline slugs, respectively.

##### Step 2: Create a Git hook to react to pushes

On your Git server, create a `post-receive` hook script in your repository's `hooks` directory that calls the Buildkite REST API's [Create a build](/docs/apis/rest-api/builds#create-a-build) endpoint. For example, in a bare repository, create a file named `hooks/post-receive` with the following contents:

```bash
#!/usr/bin/env bash

BUILDKITE_ORG_SLUG="example-org"
BUILDKITE_PIPELINE_SLUG="git-pipeline-demo"
BUILDKITE_PAYLOAD_FORMAT='{
  "commit": "%s",
  "branch": "%s",
  "message": "%s",
  "author": {
    "name": "%s",
    "email": "%s"
  }
}\n'

# Each pushed ref arrives on stdin as "<oldrev> <newrev> <ref>".
while read -r _oldrev newrev ref; do
  branch=$(git rev-parse --abbrev-ref "$ref")
  # Read details from the pushed commit itself, not HEAD, which may
  # point at a different branch on the server. Use the commit subject
  # (%s) for the message; a multi-line body would produce invalid JSON.
  author=$(git log -1 "$newrev" --format="format:%an")
  email=$(git log -1 "$newrev" --format="format:%ae")
  message=$(git log -1 "$newrev" --format="format:%s")

  curl -X POST \
    "https://api.buildkite.com/v2/organizations/$BUILDKITE_ORG_SLUG/pipelines/$BUILDKITE_PIPELINE_SLUG/builds" \
    -H "Authorization: Bearer $BUILDKITE_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d "$(printf "$BUILDKITE_PAYLOAD_FORMAT" "$newrev" "$branch" "$message" "$author" "$email")"
done
```

To use this script:

- Set the `BUILDKITE_API_TOKEN` environment variable to an [API access token](/docs/apis/managing-api-tokens). The token is a privileged secret. A best practice for secret storage is to use your own secrets storage service, such as [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/) or [HashiCorp Vault](https://www.vaultproject.io).
- Set a valid `BUILDKITE_ORG_SLUG` and `BUILDKITE_PIPELINE_SLUG`, or replace them with environment variables.
- Make the file executable (for example, in the `hooks` directory, run `chmod +x post-receive`).

You can also adapt this script for your application. For example, you can modify it to selectively trigger builds for certain branches, trigger multiple builds, save log output, or respond to other Git events.

##### Step 3: Test the hook

To test the hook, push to the Git server. If you've configured your hook successfully, a new build is scheduled for the specified pipeline.

##### Learn more

- For more on how to control builds with the REST API, read [Builds API](/docs/apis/rest-api/builds).
- For a complete list of Git hooks, read [githooks](https://git-scm.com/docs/githooks) in the [Git reference](https://git-scm.com/docs) (or run `man githooks`).
- For an overview of Git hooks, read the [Customizing Git - Git Hooks](https://git-scm.com/book/en/Customizing-Git-Git-Hooks) chapter of [Pro Git](https://git-scm.com/book/en/).

---

### Overview

URL: https://buildkite.com/docs/pipelines/migration

#### Migrate to Buildkite Pipelines overview

Migrating to Buildkite Pipelines is a smooth process with the right context and planning. This page covers the tools and guides that allow you to get familiar with Buildkite Pipelines, help you plan a migration from your existing CI/CD platform, and introduce you to the Buildkite Migration Services.

##### Pipeline converter

The Buildkite pipeline converter is designed to help you understand Buildkite Pipelines by providing a hands-on, high-level overview of how workflows from other CI/CD platforms map to Buildkite Pipelines' concepts and architecture.
Rather than serving as a complete automated pipeline conversion solution, the Buildkite pipeline converter helps you visualize how configurations from the following CI/CD platforms could be structured in the Buildkite Pipelines configuration format:

- [GitHub Actions](/docs/pipelines/migration/tool/github-actions)
- [CircleCI](/docs/pipelines/migration/tool/circleci)
- [Bitbucket Pipelines](/docs/pipelines/migration/tool/bitbucket-pipelines)
- [Jenkins](/docs/pipelines/migration/tool/jenkins)
- Bitrise (beta)
- GitLab CI (beta)
- Harness (beta)

Using the Buildkite pipeline converter will accelerate your understanding of Buildkite Pipelines concepts, allowing you to make informed decisions about how to rearchitect and optimize your workflows for the Buildkite platform. Use the tool's output as a learning foundation, then iterate and refine your pipeline designs before beginning the actual pipeline conversion process.

You can immediately start experimenting with the Buildkite pipeline converter through the [CLI version](/docs/pipelines/converter#cli-buildkite-pipeline-converter-how-to-use-the-cli-buildkite-pipeline-converter) or via an [interactive web-based interface](https://buildkite.com/resources/convert/).

##### Migration guides

The guides walk through the entire process step by step, covering the key aspects of migration, such as:

1. Understanding the differences.
1. Trying out Buildkite.
1. Provisioning agent infrastructure.
1. Translating pipeline definitions.
1. Integrating with your tools.
1. Sharing your setup.
To get started, choose the guide that corresponds to the CI/CD tool you are migrating from:

- [Migrate from GitHub Actions](/docs/pipelines/migration/from-githubactions)
- [Migrate from CircleCI](/docs/pipelines/migration/from-circleci)
- [Migrate from Jenkins](/docs/pipelines/migration/from-jenkins)
- [Migrate from Bitbucket Pipelines](/docs/pipelines/migration/from-bitbucket-pipelines)
- [Migrate from Bamboo](/docs/pipelines/migration/from-bamboo)

##### Plan your migration

There are multiple approaches you can adopt when planning your migration. Take a look at the following content to understand the strategies customers commonly use to migrate to Buildkite, and the potential pros, cons, and pitfalls of each strategy.

- [Webinar: Strategies for migrating your CI/CD pipelines to Buildkite](https://www.youtube.com/watch?v=nV8u3dnEHZ0).

##### Migration services

If you would like to receive assistance when migrating from your existing CI/CD provider to Buildkite Pipelines, you can work with the [Buildkite Migration Services](https://buildkite.com/resources/migrations/) team. The Migration Services team works directly with your organization to provide strategic planning, implementation guidance, and proven best practices.

If you need further help, guidance, or have any questions, please reach out to support at support@buildkite.com. We're here to help you make a smooth transition to Buildkite.

---

### From GitHub Actions

URL: https://buildkite.com/docs/pipelines/migration/from-githubactions

#### Migrate from GitHub Actions

This guide helps [GitHub Actions](https://github.com/features/actions) users migrate to Buildkite Pipelines, and covers key differences between the platforms.

##### Understand the differences

Most concepts will feel familiar, but there are some differences to understand about the approaches.

###### System architecture

GitHub Actions is fully hosted by GitHub.
Buildkite Pipelines offers a hybrid model, consisting of the following components: - A SaaS platform (the _Buildkite dashboard_) for visualization and pipeline management. - [Buildkite agents](/docs/agent) for executing jobs—through [Buildkite hosted agents](/docs/agent/buildkite-hosted) as a fully-managed service, or [self-hosted](/docs/agent/self-hosted) agents (hybrid model architecture) that you manage in your own infrastructure. The [Buildkite agent](https://github.com/buildkite/agent) is open source and can run on local machines, cloud servers, or containers. See [Buildkite Pipelines architecture](/docs/pipelines/architecture) for more details. ###### The difference in default checkout behaviors The checkout process in Buildkite Pipelines is fundamentally different from GitHub Actions due to different default checkout strategies. GitHub Actions' `actions/checkout@v4` uses a shallow clone (`--depth=1`) and skips Git LFS by default. In Buildkite Pipelines: - Git LFS is enabled by default. You can disable it with `GIT_LFS_SKIP_SMUDGE=1`. - Agents check out the full repository. However, you can configure shallow clones using the [Git Shallow Clone plugin](https://buildkite.com/resources/plugins/peakon/git-shallow-clone-buildkite-plugin/) or an agent checkout hook with `--depth=1`, `--single-branch`, and `--no-recurse-submodules`. For further checkout optimization in Buildkite Pipelines, you can use additional plugins: [Sparse Checkout](https://buildkite.com/resources/plugins/buildkite-plugins/sparse-checkout-buildkite-plugin/) and [Custom Checkout](https://buildkite.com/resources/plugins/buildkite-plugins/custom-checkout-buildkite-plugin/). Learn more in [Git checkout optimization](/docs/pipelines/best-practices/git-checkout-optimization). ###### Security The hybrid architecture of Buildkite Pipelines, which combines the centralized Buildkite SaaS platform with your own Buildkite agents, provides a unique approach to security. 
Buildkite takes care of the security of the SaaS platform, including user authentication, pipeline management, and the web interface. The Buildkite agents, which run on your infrastructure, allow you to maintain control over the environment, security, and other build-related resources.

While Buildkite Pipelines provides its own secrets management capabilities, you are also able to configure Buildkite Pipelines so that it doesn't store your secrets. Buildkite Pipelines does not have or need access to your source code. Only the agents you host within your infrastructure would need access to clone your repositories, and your secrets that provide this access can also be managed through secrets management tools hosted within your infrastructure.

See [Security](/docs/pipelines/security) and [Secrets](/docs/pipelines/security/secrets) to learn more.

###### Pipeline configuration concepts

Like GitHub Actions, Buildkite Pipelines lets you define pipelines in the web interface or in files checked into a repository. The equivalent of `.github/workflows/*.yml` is a `pipeline.yml` (typically in `.buildkite/`). See [Files and syntax](#pipeline-translation-fundamentals-files-and-syntax) for details.

In GitHub Actions, the core description of work is a _workflow_ containing _jobs_, each with multiple _steps_. In Buildkite Pipelines, a [_pipeline_](/docs/pipelines/glossary#pipeline) is the core description of work. A Buildkite pipeline contains different types of [_steps_](/docs/pipelines/configure/step-types) for different tasks:

- [Command step](/docs/pipelines/configure/step-types/command-step): Runs one or more shell commands on one or more agents.
- [Wait step](/docs/pipelines/configure/step-types/wait-step): Pauses a build until all previous jobs have completed.
- [Block step](/docs/pipelines/configure/step-types/block-step): Pauses a build until unblocked.
- [Input step](/docs/pipelines/configure/step-types/input-step): Collects information from a user.
- [Trigger step](/docs/pipelines/configure/step-types/trigger-step): Creates a build on another pipeline. - [Group step](/docs/pipelines/configure/step-types/group-step): Displays a group of sub-steps as one parent step. Triggering a Buildkite pipeline creates a [_build_](/docs/pipelines/glossary#build), and any command steps are dispatched as [_jobs_](/docs/pipelines/glossary#job) to run on agents. A common practice is to define a pipeline with a single step that uploads the `pipeline.yml` file in the code repository. The `pipeline.yml` contains the full pipeline definition and can be generated [dynamically](/docs/pipelines/configure/dynamic-pipelines). ##### Provision agent infrastructure Buildkite agents run your builds, tests, and deployments. They can run as [Buildkite hosted agents](/docs/agent/buildkite-hosted) where the infrastructure is provided for you, or on your own infrastructure ([self-hosted](/docs/agent/self-hosted)), similar to self-hosted runners in GitHub Actions. For self-hosted agents, consider: - **Infrastructure type:** On-premises, cloud ([AWS](/docs/agent/self-hosted/aws), [GCP](/docs/agent/self-hosted/gcp)), or container platforms ([Docker](/docs/agent/self-hosted/install/docker), [Kubernetes](/docs/agent/self-hosted/agent-stack-k8s)). - **Resource usage:** Evaluate CPU, memory, and disk requirements based on your current runner usage. - **Platform dependencies:** Ensure agents have required tools and libraries (note dependencies from `actions/setup-*` actions). - **Network:** Agents poll Buildkite's [agent API](/docs/apis/agent-api) over HTTPS so no incoming firewall access is needed. - **Scaling:** Scale agents independently based on concurrent job requirements. - **Build isolation:** Use [agent tags](/docs/agent/cli/reference/start#setting-tags) and [clusters](/docs/pipelines/security/clusters) to target specific agents. 
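Tying this back to pipeline configuration: the single-upload-step practice mentioned above keeps the pipeline's UI-configured steps minimal, so a newly provisioned agent only needs the `buildkite-agent` binary to run it. A sketch of that initial configuration:

```yaml
# Initial pipeline configuration: one step that uploads the
# repository's .buildkite/pipeline.yml as the rest of the build.
steps:
  - label: "\:pipeline\: Upload"
    command: "buildkite-agent pipeline upload"
```

By default, `buildkite-agent pipeline upload` reads `.buildkite/pipeline.yml` from the checked-out repository, so the uploaded steps always match the commit being built.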
See the [Getting started](/docs/agent/buildkite-hosted#getting-started-with-buildkite-hosted-agents) guide for Buildkite hosted agents, or the [Installation](/docs/agent/self-hosted/install/) guides for self-hosted agents on your infrastructure type.

##### Pipeline translation fundamentals

Before translating workflows, understand these key differences:

###### Files and syntax

| Pipeline aspect | GitHub Actions | Buildkite Pipelines |
|-----------------|----------------|---------------------|
| **Configuration file** | `.github/workflows/*.yml` | `pipeline.yml` (typically in `.buildkite/`) |
| **Syntax** | YAML with GitHub-specific expressions | YAML |
| **Expressions** | `${{ expression }}` syntax | Shell variables and Buildkite interpolation |
| **Triggers** | Defined in workflow file (`on:` block) | Configured in Buildkite UI or API |

The syntax used in Buildkite Pipelines is simpler. You can also generate pipeline definitions at build-time with [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines).

###### Step execution

By default, GitHub Actions runs jobs in parallel (unless you specify `needs`), while steps within a job run sequentially. Buildkite Pipelines runs all steps in parallel by default, on any available agents that can run them. To make a Buildkite pipeline run its steps in a specific order, use the [`depends_on` attribute](/docs/pipelines/configure/depends-on#defining-explicit-dependencies) or a [`wait` step](/docs/pipelines/configure/depends-on#implicit-dependencies-with-wait-and-block).

For instance, in the following Buildkite pipeline example, the `Lint` and `Test` steps are run in parallel (by default) first, whereas the `Build` step is run after the `Lint` and `Test` steps have completed.
```yaml
#### Buildkite Pipelines: Explicit sequencing is required to make steps run in sequence
steps:
  - label: "Lint"
    key: lint
    command: npm run lint

  - label: "Test"
    key: test
    command: npm test

  - label: "Build"
    depends_on: [lint, test] # Explicit dependency
    command: npm run build
```

###### Workspace state

In GitHub Actions, all steps within a job share the same workspace. In Buildkite Pipelines, each step runs in a fresh workspace on potentially different agents. Artifacts from previous steps aren't automatically available.

Options for sharing state between steps:

- **Reinstall per step:** Simple for fast-installing dependencies like `npm ci`.
- **Buildkite artifacts:** Upload [build artifacts](/docs/pipelines/configure/artifacts) from one step for use in subsequent steps. Best for small files and build outputs.
- **Cache plugin:** Similar to `actions/cache`, use the [Buildkite cache plugin](https://buildkite.com/resources/plugins/buildkite-plugins/cache-buildkite-plugin/) for larger dependencies using cloud storage (S3, GCS).
- **External storage:** Custom solutions for complex state management.

###### Agent targeting

GitHub Actions uses `runs-on` to select runners by labels:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
  deploy:
    runs-on: [self-hosted, linux, production]
```

Buildkite Pipelines uses a pull-based model where agents poll queues for work using the `agents` attribute. This provides better security (no incoming connections), easier scaling with [ephemeral agents](/docs/pipelines/glossary#ephemeral-agent), and more resilient networking:

```yaml
steps:
  - label: "Build"
    command: "make build"
    agents:
      queue: "default"

  - label: "Deploy"
    command: "make deploy"
    agents:
      queue: "production"
```

##### Translate an example GitHub Actions workflow

This section translates a GitHub Actions workflow (building a Node.js app) into a Buildkite pipeline.
###### Step 1: Understand the source workflow

Consider the following GitHub Actions workflow:

```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm run lint

  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - run: npm ci
      - run: npm test
      # ... artifact upload

  build:
    needs: [lint, test]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm run build
      # ... artifact upload
```

###### Step 2: Create a basic Buildkite pipeline structure

Create a `.buildkite/pipeline.yml` file in your repository. Start with a basic structure that maps each GitHub Actions job to a Buildkite Pipelines step:

```yaml
steps:
  - label: "\:eslint\: Lint"
    key: lint
    command:
      - echo "Lint step placeholder"

  - label: "\:test_tube\: Test"
    key: test
    command:
      - echo "Test step placeholder"

  - label: "\:package\: Build"
    key: build
    command:
      - echo "Build step placeholder"
```

Notice the immediate differences in this pipeline syntax from GitHub Actions:

- No `on:` block—triggers are configured in the Buildkite UI or API.
- No `actions/checkout` — Buildkite Pipelines checks out code automatically.
- Emoji support in labels using [emoji syntax](/docs/pipelines/emojis).
- Key assignment for dependency references.

###### Step 3: Configure the step dependencies

The build step should run only after lint and test complete successfully.
Configure explicit dependencies on the build step:

```yaml
  - label: "\:package\: Build"
    key: build
    depends_on:
      - lint
      - test
    command:
      - echo "Build step placeholder"
```

Without this [`depends_on` attribute](/docs/pipelines/configure/depends-on#defining-explicit-dependencies), all three steps would run simultaneously, due to [Buildkite Pipelines parallel-by-default behavior](#pipeline-translation-fundamentals-step-execution).

###### Step 4: Add the actual commands

Replace the placeholder commands with real commands. Since Buildkite Pipelines assumes tools are pre-installed on agents (or you use Docker), there's no equivalent to `actions/setup-node`:

```yaml
  - label: "\:eslint\: Lint"
    key: lint
    command:
      - npm ci
      - npm run lint
```

> 📘
> Buildkite agents should be pre-configured with required tools. Alternatively, use the [Docker plugin](https://github.com/buildkite-plugins/docker-buildkite-plugin) with an appropriate image like `node:20`.

###### Step 5: Implement a build matrix

Now, implement the [build matrix](/docs/pipelines/configure/workflows/build-matrix) for Node.js 18, 20, and 22:

```yaml
  - label: "\:test_tube\: Test (Node {{matrix.node_version}})"
    key: test
    matrix:
      setup:
        node_version:
          - "18"
          - "20"
          - "22"
    command:
      - npm ci
      - npm test
```

The `{{matrix.node_version}}` template variable gets replaced at runtime, creating separate jobs for each Node.js version.

###### Step 6: Implement artifact collection

Add [artifact collection](/docs/pipelines/configure/artifacts) using the `artifact_paths` attribute:

```yaml
    artifact_paths:
      - coverage/**/* # Collect test coverage
```

No separate upload action is required—just specify glob patterns.
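To use those uploaded artifacts later in the build, a downstream step can fetch them with `buildkite-agent artifact download`. A sketch (the `Report` step and its script are illustrative, not part of the original workflow):

```yaml
steps:
  - label: "\:test_tube\: Test"
    key: test
    command: npm test
    artifact_paths:
      - coverage/**/*

  - label: "Report"
    depends_on: test
    command:
      # Fetch the coverage files uploaded by the test step.
      - buildkite-agent artifact download "coverage/**/*" .
      - ./scripts/report-coverage.sh # hypothetical reporting script
```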
###### Step 7: Add caching

Replace `actions/cache` (or the cache option in `actions/setup-node`) with the [cache plugin](https://buildkite.com/resources/plugins/buildkite-plugins/cache-buildkite-plugin/):

```yaml
  - label: "\:eslint\: Lint"
    key: lint
    plugins:
      - cache#v1.10.0:
          manifest: package-lock.json
          path: node_modules
    command:
      - npm ci
      - npm run lint
```

###### Step 8: Review the complete pipeline

Here's the complete translated pipeline:

```yaml
steps:
  - label: "\:eslint\: Lint"
    key: lint
    plugins:
      - cache#v1.10.0:
          manifest: package-lock.json
          path: node_modules
    command:
      - npm ci
      - npm run lint

  - label: "\:test_tube\: Test (Node {{matrix.node_version}})"
    key: test
    matrix:
      setup:
        node_version:
          - "18"
          - "20"
          - "22"
    plugins:
      - cache#v1.10.0:
          manifest: package-lock.json
          path: node_modules
    command:
      - npm ci
      - npm test
    artifact_paths:
      - coverage/**/*

  - label: "\:package\: Build"
    depends_on:
      - lint
      - test
    plugins:
      - cache#v1.10.0:
          manifest: package-lock.json
          path: node_modules
    command:
      - npm ci
      - npm run build
    artifact_paths:
      - dist/**/*
```

###### Step 9: Refactor with YAML aliases

To eliminate duplication, you can use YAML aliases:

```yaml
common:
  cache: &cache
    - cache#v1.10.0:
        manifest: package-lock.json
        path: node_modules

steps:
  - label: "\:eslint\: Lint"
    key: lint
    plugins: *cache
    command:
      - npm ci
      - npm run lint

  - label: "\:test_tube\: Test (Node {{matrix.node_version}})"
    key: test
    matrix:
      setup:
        node_version:
          - "18"
          - "20"
          - "22"
    plugins: *cache
    command:
      - npm ci
      - npm test
    artifact_paths:
      - coverage/**/*

  - label: "\:package\: Build"
    depends_on:
      - lint
      - test
    plugins: *cache
    command:
      - npm ci
      - npm run build
    artifact_paths:
      - dist/**/*
```

##### Key mappings reference

This table provides quick mappings between common GitHub Actions concepts and their Buildkite Pipelines equivalents:

| GitHub Actions | Buildkite Pipelines |
|----------------|---------------------|
| `jobs.<job_id>` | `steps` array item with `key: "<job_id>"` |
| `jobs.<job_id>.name` | `label` |
| `jobs.<job_id>.runs-on` | `agents: { queue: "..." }` |
| `jobs.<job_id>.env` | `env` |
| `jobs.<job_id>.timeout-minutes` | `timeout_in_minutes` |
| `needs` | `depends_on` |
| `continue-on-error: true` | `soft_fail: true` |
| `${{ secrets.NAME }}` | `${NAME}` (configured on agent) |
| `working-directory: ./dir` | Prepend `cd dir &&` to commands |
| `actions/upload-artifact` | `artifact_paths` on the step |
| `actions/download-artifact` | `buildkite-agent artifact download` command |
| `actions/cache` | `cache` plugin |
| `strategy.matrix` | `matrix` attribute |
| `${{ github.sha }}` | `${BUILDKITE_COMMIT}` |
| `${{ github.ref }}` | `${BUILDKITE_BRANCH}` |
| `${{ github.event.pull_request.number }}` | `${BUILDKITE_PULL_REQUEST}` |

##### Translating triggers

GitHub Actions supports many webhook event triggers through the `on:` block. Buildkite Pipelines natively supports:

- `push` (branches)
- `pull_request`
- `tag` (via "Build tags" setting)
- `schedule` (cron)

These are configured in the Buildkite UI under Pipeline Settings, not in the YAML file.

| GitHub Actions trigger | Buildkite Pipelines configuration |
|------------------------|-----------------------------------|
| `push` | UI → Pipeline Settings → GitHub |
| `pull_request` | UI → Pipeline Settings → GitHub |
| `schedule` | UI → Pipeline Settings → Schedules |
| `workflow_dispatch` | `input` step + "New Build" button/API |
| `release` / `create` (tags) | UI → Build tags setting |

For triggers not natively supported by Buildkite Pipelines (`issues`, `issue_comment`, `workflow_run`, etc.), you can:

1. **Keep in GitHub Actions:** Best for GitHub-specific automation.
2. **Configure webhook:** Set up an endpoint to call the Buildkite API.
3. **Use trigger step:** Chain from another pipeline.

##### Translating context variables

GitHub Actions provides context objects (`github.*`, `runner.*`, `env.*`).
Buildkite Pipelines provides [environment variables](/docs/pipelines/configure/environment-variables):

| GitHub Actions context | Buildkite Pipelines environment variable |
|------------------------|------------------------------------------|
| `github.repository` | `BUILDKITE_REPO` or `BUILDKITE_PIPELINE_SLUG` |
| `github.sha` | `BUILDKITE_COMMIT` |
| `github.ref` | `BUILDKITE_BRANCH` |
| `github.ref_name` | `BUILDKITE_BRANCH` |
| `github.actor` | `BUILDKITE_BUILD_CREATOR` |
| `github.run_id` | `BUILDKITE_BUILD_ID` |
| `github.run_number` | `BUILDKITE_BUILD_NUMBER` |
| `github.job` | `BUILDKITE_STEP_KEY` |
| `github.workflow` | `BUILDKITE_PIPELINE_SLUG` |
| `github.event.pull_request.number` | `BUILDKITE_PULL_REQUEST` |

##### Translating conditionals

GitHub Actions conditionals use the `if:` attribute with expressions. Buildkite Pipelines also supports `if:` but with different syntax:

| GitHub Actions | Buildkite Pipelines |
|----------------|---------------------|
| `if: github.ref == 'refs/heads/main'` | `if: build.branch == "main"` |
| `if: github.event_name == 'push'` | `if: build.source == "webhook"` |
| `if: github.event_name == 'pull_request'` | `if: build.pull_request.id != null` |
| `if: contains(github.ref, 'release')` | `if: build.branch =~ /release/` |

For complex conditionals that can't be expressed using `if:` syntax in Buildkite Pipelines, use shell conditionals in your commands or [dynamic pipeline uploads](/docs/pipelines/configure/dynamic-pipelines).

##### Translating matrix builds

Buildkite has native matrix support that maps directly to GitHub Actions' `strategy.matrix`:

| GitHub Actions | Buildkite Pipelines |
|----------------|---------------------|
| `strategy.matrix` | `matrix.setup` |
| `strategy.matrix.include` | `matrix.adjustments` (add combinations) |
| `strategy.matrix.exclude` | `matrix.adjustments` with `skip: true` |
| `${{ matrix.<key> }}` | `{{matrix.<key>}}` |
| `continue-on-error` per matrix combo | `soft_fail` in `adjustments` |
| `fail-fast: false` | Default behavior (sibling jobs aren't cancelled) |

An example of a multi-dimensional matrix in GitHub Actions:

```yaml
#### GitHub Actions
strategy:
  matrix:
    os: [ubuntu-latest, macos-latest]
    node: [18, 20]
```

This is how it translates to Buildkite Pipelines:

```yaml
#### Buildkite Pipelines
steps:
  - label: "test {{matrix.os}} node-{{matrix.node}}"
    command: npm test
    agents:
      queue: "{{matrix.os}}"
    matrix:
      setup:
        os:
          - "linux"
          - "macos"
        node:
          - "18"
          - "20"
```

##### Translating services

GitHub Actions provides a `services` key that allows you to run containerized services (such as databases, caches, or message queues) alongside your job. These service containers are automatically started before your job runs and are accessible via their service name as a hostname.

Buildkite Pipelines handles service containers differently. Instead of a built-in `services` key, Buildkite Pipelines uses the [Docker Compose plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) to manage multi-container environments. This approach gives you full control over container orchestration using standard Docker Compose configuration files.

To migrate your GitHub Actions services:

1. Create a `docker-compose.ci.yml` file that defines your application and service containers.
1. Configure dependencies and health checks to ensure services are ready before your tests run.
1. Reference this configuration file in your Buildkite pipeline using the Docker Compose plugin.

The following example shows a Docker Compose configuration with a PostgreSQL service:

```yaml
#### docker-compose.ci.yml
services:
  app:
    build: .
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      DATABASE_URL: postgres://postgres:postgres@postgres:5432/test

  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: postgres
    healthcheck:
      test: ["CMD", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
```

The following Buildkite pipeline configuration uses the Docker Compose plugin to run your tests. The `run` attribute specifies which service container to execute your commands in, while `config` points to your Docker Compose file. The plugin automatically starts all dependent services (in this case, PostgreSQL) and waits for health checks to pass before running your commands:

```yaml
#### Buildkite Pipelines
steps:
  - label: "test"
    plugins:
      - docker-compose#v5.12.1:
          run: app
          config: docker-compose.ci.yml
    command:
      - npm test
```

##### Translating job outputs

GitHub Actions uses `$GITHUB_OUTPUT` and `jobs.<job_id>.outputs` to pass data between jobs:

```yaml
#### GitHub Actions
jobs:
  setup:
    outputs:
      version: ${{ steps.get-version.outputs.version }}
    steps:
      - id: get-version
        run: echo "version=1.2.3" >> $GITHUB_OUTPUT
```

Buildkite Pipelines uses [meta-data](/docs/pipelines/configure/build-meta-data):

```yaml
#### Buildkite Pipelines
steps:
  - label: "setup"
    key: "setup"
    command:
      - buildkite-agent meta-data set "version" "1.2.3"

  - label: "build"
    depends_on: "setup"
    command:
      - VERSION=$(buildkite-agent meta-data get "version")
      - echo "Building version $VERSION"
```

##### Translating step summaries

GitHub Actions uses `$GITHUB_STEP_SUMMARY` to add content to the workflow summary:

```yaml
#### GitHub Actions
- run: echo "## Build Complete" >> $GITHUB_STEP_SUMMARY
```

Buildkite Pipelines uses [annotations](/docs/agent/cli/reference/annotate):

```yaml
#### Buildkite Pipelines
- command:
    - echo "## Build Complete" | buildkite-agent annotate --style "success"
```

##### Key differences and benefits of migrating to Buildkite Pipelines

This [example pipeline translation](#translate-an-example-github-actions-workflow) demonstrates several important advantages of Buildkite's approach:

- **Simpler pipeline configuration:** Buildkite YAML is straightforward with fewer special syntax rules.
- **Execution model:** Buildkite Pipelines steps are parallel by default with explicit sequencing, similar to GitHub Actions jobs but applied at the step level.
- **Native features:** Buildkite Pipelines provides native artifact handling and build visualization without additional actions.
- **Agent flexibility:** Full control over your build environment with self-hosted agents.

For larger deployments, these differences become more significant:

- The fresh workspace model avoids state leakage between builds.
- The pull-based agent model simplifies scaling and security.
- Pipeline-specific plugin versioning eliminates dependency conflicts.

Be aware of common pipeline-translation mistakes, which might include:

- Forgetting about fresh workspaces (leading to missing dependencies).
- Assuming tools are installed (when you need Docker or pre-configured agents).
- Over-parallelizing interdependent steps.

##### Next steps

Explore these resources to enhance your migrated pipelines:

- [Defining your pipeline steps](/docs/pipelines/configure/defining-steps)
- [Buildkite agent overview](/docs/agent)
- [Plugins directory](https://buildkite.com/resources/plugins/)
- [Dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) and the [Buildkite SDK](/docs/pipelines/configure/dynamic-pipelines/sdk)
- [Buildkite agent hooks](/docs/agent/hooks)
- [Using conditions](/docs/pipelines/configure/conditionals)
- [Annotations](/docs/agent/cli/reference/annotate)
- [Security](/docs/pipelines/security), [Secrets](/docs/pipelines/security/secrets), and [permissions](/docs/pipelines/security/permissions)
- [Integrations](/docs/pipelines/integrations)
- [Test Engine](/docs/test-engine) for test insights

For hands-on practice, try the [Buildkite pipeline converter](/docs/pipelines/migration/converter/github-actions).
For migration assistance, contact support@buildkite.com. --- ### From CircleCI URL: https://buildkite.com/docs/pipelines/migration/from-circleci #### Migrate from CircleCI This guide helps [CircleCI](https://circleci.com/) users migrate to Buildkite Pipelines, and covers key differences between the platforms. ##### Understand the differences Most CircleCI concepts translate to Buildkite Pipelines directly, but there are still differences that need to be understood to ensure a comfortable transition from one platform to another. ###### System architecture CircleCI is a fully hosted CI/CD platform that runs jobs on CircleCI-managed or self-hosted runners. Buildkite Pipelines offers a hybrid model, consisting of the following components: - A SaaS platform (the _Buildkite dashboard_) for visualization and pipeline management. - [Buildkite agents](/docs/agent) for executing jobs—through [Buildkite hosted agents](/docs/agent/buildkite-hosted) as a fully-managed service, or [self-hosted](/docs/agent/self-hosted) agents (hybrid model architecture) that you manage in your own infrastructure. The [Buildkite agent](https://github.com/buildkite/agent) is open source and can run on local machines, cloud servers, or containers. See [Buildkite Pipelines architecture](/docs/pipelines/architecture) for more details. ###### Security The hybrid architecture of Buildkite Pipelines provides a unique approach to security. Buildkite Pipelines takes care of the security of its SaaS platform, including user authentication, pipeline management, and the web interface. The Buildkite agents, which run on your infrastructure, allow you to maintain control over the environment, security, and other build-related resources. While Buildkite Pipelines provides its own secrets management capabilities, you can also configure Buildkite Pipelines so that it doesn't store your secrets. Buildkite Pipelines does not have or need access to your source code. 
Only the agents you host within your infrastructure would need access to clone your repositories, and your secrets that provide this access can also be managed through secrets management tools hosted within your infrastructure. Learn more about [Security](/docs/pipelines/security) and [Secrets](/docs/pipelines/security/secrets) in Buildkite Pipelines. ###### Pipeline configuration concepts In CircleCI, the core description of work is a _workflow_ defined in `.circleci/config.yml`, containing _jobs_ with multiple _steps_. In Buildkite Pipelines, a [_pipeline_](/docs/pipelines/glossary#pipeline) is the core description of work, typically defined in a `pipeline.yml` file (usually in `.buildkite/`). A Buildkite pipeline contains different types of [_steps_](/docs/pipelines/configure/step-types) for different tasks: - [Command step](/docs/pipelines/configure/step-types/command-step): Runs one or more shell commands on one or more agents. - [Wait step](/docs/pipelines/configure/step-types/wait-step): Pauses a build until all previous jobs have completed. - [Block step](/docs/pipelines/configure/step-types/block-step): Pauses a build until unblocked. - [Input step](/docs/pipelines/configure/step-types/input-step): Collects information from a user. - [Trigger step](/docs/pipelines/configure/step-types/trigger-step): Creates a build on another pipeline. - [Group step](/docs/pipelines/configure/step-types/group-step): Displays a group of sub-steps as one parent step. While CircleCI traditionally maps each project one-to-one to a repository, Buildkite pipelines are fully decoupled from repositories. You can create multiple pipelines per repository, trigger pipelines across repositories, or run pipelines independently of any repository. Triggering a Buildkite pipeline creates a [_build_](/docs/pipelines/glossary#build), and any command steps are dispatched as [_jobs_](/docs/pipelines/glossary#job) to run on agents. 
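As a sketch, several of these step types might appear together in one `pipeline.yml` (the labels, commands, and the `deploy-app` pipeline slug below are illustrative, not from an existing pipeline):

```yaml
steps:
  # Command step: runs shell commands on an agent
  - label: "Run tests"
    command: npm test

  # Wait step: pauses until all previous jobs have completed
  - wait

  # Block step: pauses until a user unblocks the build
  - block: ":rocket: Release?"

  # Trigger step: creates a build on another pipeline
  # ("deploy-app" is a hypothetical pipeline slug)
  - trigger: "deploy-app"
```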
A common practice is to define a pipeline with a single step that uploads the `pipeline.yml` file in the code repository. The `pipeline.yml` contains the full pipeline definition and can be generated [dynamically](/docs/pipelines/configure/dynamic-pipelines). ###### Plugin system CircleCI uses _orbs_, which are reusable packages that bundle jobs, commands, and executors together. Buildkite Pipelines uses [Buildkite plugins](https://buildkite.com/resources/plugins/), which are referenced directly in pipeline definitions. Unlike orbs, Buildkite plugins focus on modifying agent behavior at the step level. They are shell-based, run on individual agents, and are pipeline- or step-specific with independent versioning. Plugin failures are isolated to individual builds, and compatibility issues are rare. ##### Provision agent infrastructure Buildkite agents run your builds, tests, and deployments. They can run as [Buildkite hosted agents](/docs/agent/buildkite-hosted) where the infrastructure is provided for you, or on your own infrastructure ([self-hosted](/docs/agent/self-hosted)), similar to self-hosted runners in CircleCI. For self-hosted agents, consider: - **Infrastructure type:** On-premises, cloud ([AWS](/docs/agent/self-hosted/aws), [GCP](/docs/agent/self-hosted/gcp)), or container platforms ([Docker](/docs/agent/self-hosted/install/docker), [Kubernetes](/docs/agent/self-hosted/agent-stack-k8s)). - **Resource usage:** Evaluate CPU, memory, and disk requirements based on your current CircleCI resource class usage. - **Platform dependencies:** Ensure agents have required tools and libraries. Unlike CircleCI, where Docker images provide pre-configured environments, Buildkite agents require explicit tool installation or pre-built agent images. - **Network:** Agents poll the Buildkite [agent API](/docs/apis/agent-api) over HTTPS so no incoming firewall access is needed. - **Scaling:** Scale agents independently based on concurrent job requirements. 
- **Build isolation:** Use [agent tags](/docs/agent/cli/reference/start#setting-tags) and [clusters](/docs/pipelines/security/clusters) to target specific agents.

For Buildkite hosted agents, see the [Getting started](/docs/agent/buildkite-hosted#getting-started-with-buildkite-hosted-agents) guide. For self-hosted agents, see the [Installation](/docs/agent/self-hosted/install/) guides for your infrastructure type.

##### Pipeline translation fundamentals

Before translating your CircleCI configuration, note the key differences in pipeline syntax, step execution, artifact handling, Docker configuration, and agent targeting between CircleCI and Buildkite Pipelines.

###### Files and syntax

This table outlines the fundamental differences in pipeline files and their syntax between CircleCI and Buildkite Pipelines.

| Pipeline aspect | CircleCI | Buildkite Pipelines |
|-----------------|----------|---------------------|
| **Configuration file** | `.circleci/config.yml` | `pipeline.yml` (typically in `.buildkite/`) |
| **Reusable logic** | Orbs, commands, executors | [Plugins](https://buildkite.com/resources/plugins/), YAML aliases, scripts |
| **Dynamic configuration** | Pipeline parameters for conditional workflows | [Dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) generate steps at runtime using any language |
| **Triggers** | Defined in config file or API | Configured in the web interface or API |

The YAML-based pipeline syntax of Buildkite Pipelines is simpler. Where CircleCI relies on `parameters` to conditionally include or exclude jobs and workflows, Buildkite Pipelines uses dynamic pipelines to generate the entire pipeline definition at build time using scripts written in any language. This approach provides more flexibility without the complexity of parameter declarations and conditional logic scattered throughout your configuration.

###### Step execution

Both CircleCI and Buildkite Pipelines run work in parallel by default, but at different levels.
In CircleCI, steps within a job run sequentially, but jobs within a workflow run in parallel unless you specify `requires`. In Buildkite Pipelines, all steps run in parallel unless you add explicit ordering. Each Buildkite step is fully isolated from other steps, similar to how CircleCI jobs are isolated from each other. Steps can run on different agents, with no shared filesystem or state between them. To make a Buildkite pipeline run its steps in a specific order, use the [`depends_on` attribute](/docs/pipelines/configure/depends-on#defining-explicit-dependencies) or a [`wait` step](/docs/pipelines/configure/depends-on#implicit-dependencies-with-wait-and-block). ```yaml #### Buildkite Pipelines: Explicit sequencing with depends_on steps: - label: "Lint" key: lint command: npm run lint - label: "Test" key: test command: npm test - label: "Build" depends_on: [lint, test] command: npm run build ``` ###### Workspace and artifact handling In CircleCI, `persist_to_workspace` and `attach_workspace` share files between jobs. In Buildkite Pipelines, each step runs in a fresh workspace on potentially different agents. Use `buildkite-agent artifact upload` and `buildkite-agent artifact download` to share [artifacts](/docs/pipelines/configure/artifacts): ```yaml #### Buildkite Pipelines steps: - label: "Build" key: "build" command: - npm run build - buildkite-agent artifact upload "dist/**/*" - label: "Deploy" depends_on: "build" command: - buildkite-agent artifact download "dist/**/*" . - npm run deploy ``` Other options for sharing state between steps: - **Reinstall per step:** Simple for fast-installing dependencies like `npm ci`. - **Meta-data:** Use [meta-data](/docs/pipelines/configure/build-meta-data) to exchange lightweight key-value pairs between steps at runtime without file-based sharing. 
- **Cache plugin:** Similar to CircleCI's `save_cache`/`restore_cache`, use the [Buildkite cache plugin](https://buildkite.com/resources/plugins/buildkite-plugins/cache-buildkite-plugin/) for larger dependencies using cloud storage (S3, GCS). The plugin's `manifest` attribute works like CircleCI's `{{ checksum }}` to generate a cache key from a file. - **External storage:** Custom solutions for complex state management. > 📘 Hosted agents cache volumes > If using Buildkite hosted agents, [cache volumes](/docs/agent/buildkite-hosted/cache-volumes) provide a native caching mechanism. Cache volumes are retained up to 14 days and attached on a best-effort basis. ###### Docker images and executors CircleCI executors define the execution environment (Docker image, resource class, environment variables). Buildkite Pipelines separates these concerns: | Executor component | Buildkite Pipelines equivalent | |-------------------|-------------------------------| | `docker[].image` | [Docker plugin](https://buildkite.com/resources/plugins/docker) `image` | | `docker[].environment` | Docker plugin `environment` | | `resource_class` | `agents: { queue: "..." }` | | `working_directory` | Docker plugin `workdir` | | `machine` executor | VM-based agent queue | For example, in Buildkite Pipelines: ```yaml #### Buildkite Pipelines steps: - label: "Test" env: NODE_ENV: test plugins: - docker#v5.13.0: image: node:18 command: npm test ``` For Docker-based builds, use the [Docker plugin](https://buildkite.com/resources/plugins/docker) or the [Docker Compose plugin](https://buildkite.com/resources/plugins/docker-compose) to run commands inside containers. > 🚧 CircleCI `cimg/*` images require workarounds > CircleCI convenience images (`cimg/node`, `cimg/python`, and so on) install runtimes using version managers that require login shells. The Buildkite Docker plugin defaults to `/bin/sh -e -c`, which does not source these profiles. 
Add a `shell` option for bash login shells and `propagate-uid-gid: true` to avoid permission errors: > > ```yaml > plugins: > - docker#v5.13.0: > image: cimg/node:18.17 > shell: ["/bin/bash", "-l", "-e", "-c"] > propagate-uid-gid: true > ``` > > Consider replacing `cimg/*` images with standard Docker Hub images (`node:18`, `python:3.11`) which don't require these workarounds. ###### Agent targeting CircleCI uses `resource_class` and executors to control where jobs run: ```yaml #### CircleCI jobs: build: docker: - image: cimg/node:20.0 resource_class: large steps: - checkout - run: make build deploy: machine: image: ubuntu-2204:current resource_class: medium steps: - checkout - run: make deploy ``` Buildkite Pipelines uses a pull-based model where agents poll [queues](/docs/agent/queues) for work using the `agents` attribute. Map CircleCI resource classes to queues with agents sized to match your workload requirements. This model provides better security (no incoming connections), easier scaling with [ephemeral agents](/docs/pipelines/glossary#ephemeral-agent), and more resilient networking: ```yaml #### Buildkite Pipelines steps: - label: "Build" command: "make build" agents: queue: "large" - label: "Deploy" command: "make deploy" agents: queue: "production" ``` You can also use custom [agent tags](/docs/agent/cli/reference/start#setting-tags) beyond `queue` to [target agents](/docs/agent/cli/reference/start#agent-targeting) by capability, for example: ```yaml agents: os: "linux" arch: "arm64" ``` For Windows or macOS jobs, route to platform-specific queues, for example: ```yaml agents: queue: "windows" ``` ##### Translate an example CircleCI configuration This section guides you through the process of translating a CircleCI configuration example (which builds a [Node.js](https://nodejs.org/) app) into a Buildkite pipeline. 
If you want to see the finished result first, skip to the [complete pipeline](/docs/pipelines/migration/from-circleci#translate-an-example-circleci-configuration-step-7-review-the-complete-pipeline) or the [refactored version with YAML aliases](/docs/pipelines/migration/from-circleci#translate-an-example-circleci-configuration-step-8-refactor-with-yaml-aliases).

###### Step 1: Understand the source configuration

The following CircleCI configuration is the example you'll translate:

```yaml
version: 2.1

orbs:
  node: circleci/node@5.2

executors:
  node-executor:
    docker:
      - image: cimg/node:20.0

jobs:
  lint:
    executor: node-executor
    steps:
      - checkout
      - node/install-packages
      - run: npm run lint
  test:
    executor: node-executor
    steps:
      - checkout
      - node/install-packages
      - run: npm test
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: coverage
  build:
    executor: node-executor
    steps:
      - checkout
      - node/install-packages
      - run: npm run build
      - store_artifacts:
          path: dist

workflows:
  ci:
    jobs:
      - lint
      - test
      - build:
          requires:
            - lint
            - test
```

This workflow lints, tests, and builds a Node.js application, with the build job depending on lint and test completing first.

###### Step 2: Create a basic Buildkite pipeline structure

Create a `.buildkite/pipeline.yml` file in your repository. Start with a basic structure that maps each CircleCI job to a Buildkite Pipelines step:

```yaml
steps:
  - label: ":eslint: Lint"
    key: lint
    command:
      - echo "Lint step placeholder"

  - label: ":test_tube: Test"
    key: test
    command:
      - echo "Test step placeholder"

  - label: ":wrench: Build"
    key: build
    command:
      - echo "Build step placeholder"
```

Notice the immediate differences in this pipeline syntax from CircleCI:

- No `version:` declaration needed.
- No `orbs:`, `executors:`, or `jobs:` blocks.
- No `checkout` step as Buildkite agents check out code automatically.
- Emoji support in labels without plugins.
- Key assignment for dependency references.
###### Step 3: Configure the step dependencies

The build step should run only after lint and test complete successfully. Configure explicit dependencies on the build step, which prevents it from running if either the lint or test steps fail:

```yaml
- label: ":wrench: Build"
  key: build
  depends_on: [lint, test]
  command:
    - echo "Build step placeholder"
```

Without this [`depends_on` attribute](/docs/pipelines/configure/depends-on#defining-explicit-dependencies), all three steps would run simultaneously, due to the [parallel-by-default behavior of Buildkite Pipelines](#pipeline-translation-fundamentals-step-execution).

###### Step 4: Add the actual commands

Replace the placeholder commands with real commands. The `node/install-packages` orb command becomes `npm ci`:

```yaml
- label: ":eslint: Lint"
  key: lint
  commands:
    - npm ci
    - npm run lint
```

> 📘
> Unlike CircleCI, where orbs like `circleci/node` handle package installation, Buildkite Pipelines requires explicit commands. Tools should be pre-installed on agents or provided through Docker images.

###### Step 5: Add the Docker plugin for container builds

Replace the CircleCI executor with the [Docker plugin](https://buildkite.com/resources/plugins/docker) to run commands inside a container:

```yaml
- label: ":eslint: Lint"
  key: lint
  plugins:
    - docker#v5.13.0:
        image: "node:20"
  commands:
    - npm ci
    - npm run lint
```

###### Step 6: Implement artifact collection

Add [artifact collection](/docs/pipelines/configure/artifacts) using the `artifact_paths` attribute.
This replaces CircleCI's `store_artifacts`:

```yaml
- label: ":test_tube: Test"
  key: test
  plugins:
    - docker#v5.13.0:
        image: "node:20"
  commands:
    - npm ci
    - npm test
  artifact_paths:
    - coverage/**/*
```

###### Step 7: Review the complete pipeline

The complete example CircleCI pipeline translated to a Buildkite pipeline:

```yaml
steps:
  - label: ":eslint: Lint"
    key: lint
    plugins:
      - docker#v5.13.0:
          image: "node:20"
    commands:
      - npm ci
      - npm run lint

  - label: ":test_tube: Test"
    key: test
    plugins:
      - docker#v5.13.0:
          image: "node:20"
    commands:
      - npm ci
      - npm test
    artifact_paths:
      - coverage/**/*

  - label: ":wrench: Build"
    key: build
    depends_on: [lint, test]
    plugins:
      - docker#v5.13.0:
          image: "node:20"
    commands:
      - npm ci
      - npm run build
    artifact_paths:
      - dist/**/*
```

###### Step 8: Refactor with YAML aliases

Eliminate the duplication using YAML aliases:

```yaml
common:
  docker: &docker
    docker#v5.13.0:
      image: "node:20"

steps:
  - label: ":eslint: Lint"
    key: lint
    plugins:
      - *docker
    commands:
      - npm ci
      - npm run lint

  - label: ":test_tube: Test"
    key: test
    plugins:
      - *docker
    commands:
      - npm ci
      - npm test
    artifact_paths:
      - coverage/**/*

  - label: ":wrench: Build"
    key: build
    depends_on: [lint, test]
    plugins:
      - *docker
    commands:
      - npm ci
      - npm run build
    artifact_paths:
      - dist/**/*
```

By anchoring the plugin map rather than the entire `plugins` array, individual steps can override or extend their plugin list when needed. The final result is shorter than the original CircleCI configuration, with no duplication and a cleaner, more readable structure.

##### Translating common patterns

This section covers translation patterns for CircleCI features not addressed in the [example walkthrough](/docs/pipelines/migration/from-circleci#translate-an-example-circleci-configuration).

###### Environment variables

CircleCI job-level `environment` maps to the `env` attribute in Buildkite Pipelines. For more information, see [environment variables](/docs/pipelines/configure/environment-variables).
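For instance, a CircleCI job with a job-level `environment` block (job name and values are illustrative):

```yaml
#### CircleCI
jobs:
  test:
    docker:
      - image: cimg/node:20.0
    environment:
      NODE_ENV: test
    steps:
      - run: npm test
```

might translate to a step-level `env` attribute in Buildkite Pipelines:

```yaml
#### Buildkite Pipelines
steps:
  - label: "Test"
    env:
      NODE_ENV: test
    command: npm test
```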
For pipeline-wide variables, use a top-level `env` attribute. You can also define environment variables at the agent level using [agent hooks](/docs/agent/hooks), making them available to all pipelines running on those agents. If you use the [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack), you can scope agent-level variables to specific pipelines.

```yaml
#### Buildkite Pipelines
env:
  NODE_ENV: production

steps:
  - label: "Build"
    command: npm run build
```

> 🚧 Docker plugin environment variables
> When using the Docker plugin, step-level `env:` variables are not automatically available inside the container. Use the Docker plugin's `environment:` list instead.

###### Contexts and secrets

CircleCI contexts are named collections of environment variables attached to jobs at the workflow level. Translate based on content type:

- **For secrets:** Use Buildkite [cluster secrets](/docs/pipelines/security/secrets) with the `secrets:` attribute in pipeline YAML, or an external secrets manager.
- **For non-secret variables:** Use the `env:` attribute directly. Use YAML anchors to share variables across steps.

###### Approval jobs

CircleCI's `type: approval` jobs create manual gates in a workflow. The equivalent in Buildkite Pipelines is a [block step](/docs/pipelines/configure/step-types/block-step):

```yaml
#### Buildkite Pipelines
steps:
  - label: "Build"
    key: "build"
    command: npm run build

  - block: ":rocket: Deploy to production?"
    key: "hold"
    depends_on: "build"

  - label: "Deploy"
    depends_on: "hold"
    command: npm run deploy
```

###### Matrix builds

CircleCI matrix jobs translate to the native [build matrix](/docs/pipelines/configure/workflows/build-matrix) support in Buildkite Pipelines:

| CircleCI | Buildkite Pipelines |
|----------|---------------------|
| `matrix.parameters` | `matrix.setup` |
| `matrix.exclude` | `matrix.adjustments` with `skip: true` |
| `<< matrix.X >>` | `{{matrix.X}}` |

```yaml
#### Buildkite Pipelines
steps:
  - label: "Test (Node {{matrix.node_version}})"
    plugins:
      - docker#v5.13.0:
          image: node:{{matrix.node_version}}
    command: npm test
    matrix:
      setup:
        node_version:
          - "18"
          - "20"
          - "22"
```

###### Parallelism

CircleCI's `parallelism` key maps to the [`parallelism` attribute](/docs/pipelines/configure/step-types/command-step#parallelism) in Buildkite Pipelines:

```yaml
#### Buildkite Pipelines
steps:
  - label: "Test"
    parallelism: 4
    command: npm test
```

Buildkite Pipelines parallelism creates multiple jobs with `BUILDKITE_PARALLEL_JOB` and `BUILDKITE_PARALLEL_JOB_COUNT` environment variables. For intelligent test distribution based on timing data (equivalent to `circleci tests split --split-by=timings`), use [Test Engine](/docs/test-engine).

###### Branch and tag filtering

Buildkite Pipelines offers two approaches to step-level filtering:

- **`branches:` attribute:** For simple patterns, with `!` prefix for negation (for example, `branches: "!dev !staging"`).
- **`if:` conditionals:** For complex patterns or regex matching (for example, `if: build.branch !~ /^feature\/experimental-/`).

The `branches:` and `if:` attributes cannot be used together on the same step. For tag-only builds, use `if: build.tag =~ /^v/`.

For pipeline-wide branch restrictions that prevent builds from being created entirely, configure this in **Pipeline Settings** under **Branch Limiting**, not in YAML.
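As a sketch, the two step-level filtering approaches might look like this (labels, patterns, and scripts are illustrative):

```yaml
#### Buildkite Pipelines
steps:
  # Simple pattern with negation using the branches attribute
  - label: "Test"
    command: npm test
    branches: "!dev !staging"

  # Tag matching with an if conditional
  # (a step uses either branches or if, never both)
  - label: "Release"
    command: ./scripts/release.sh
    if: build.tag =~ /^v/
```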
###### Scheduled workflows CircleCI supports scheduled pipelines configured through the CircleCI UI, API, or the legacy `triggers:` key in YAML. In Buildkite Pipelines, [scheduled builds](/docs/pipelines/configure/workflows/scheduled-builds) are configured in the Buildkite UI under your pipeline's **Settings** > **Schedules**. ###### Dynamic configuration CircleCI's dynamic configuration pattern uses `setup: true` with the continuation orb to generate pipelines at runtime. Buildkite Pipelines handles this natively with `buildkite-agent pipeline upload`: ```yaml #### Buildkite Pipelines steps: - label: ":pipeline: Generate pipeline" command: | ./generate-config.sh | buildkite-agent pipeline upload ``` For path-based dynamic configuration (similar to CircleCI's `path-filtering` orb), use [conditionals](/docs/pipelines/configure/conditionals), [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) with change detection logic, or the declarative [`if_changed` attribute](/docs/pipelines/configure/step-types/command-step#agent-applied-attributes). For monorepos, the [Monorepo Diff plugin](https://buildkite.com/resources/plugins/monorepo-diff) watches for changes across directories and triggers the appropriate pipelines automatically. ###### Reusable commands CircleCI `commands` are reusable step sequences with parameters. For simple reuse in Buildkite Pipelines, use [YAML anchors](/docs/pipelines/integrations/plugins/using#using-yaml-anchors-with-plugins). For parameterized reuse, use [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) where a step generates and uploads pipeline YAML at runtime using `buildkite-agent pipeline upload`. 
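For simple reuse, a shared command might be sketched with a YAML anchor, following the same pattern as the aliases refactor earlier (the top-level `common` key exists only to hold the anchor; the commands are illustrative):

```yaml
common:
  install: &install "npm ci"

steps:
  - label: "Lint"
    commands:
      - *install
      - npm run lint

  - label: "Test"
    commands:
      - *install
      - npm test
```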
##### Concept mapping reference

This table provides a mapping between CircleCI concepts and their Buildkite Pipelines equivalents:

| CircleCI | Buildkite Pipelines |
|----------|---------------------|
| `.circleci/config.yml` | `.buildkite/pipeline.yml` |
| Workflow | Pipeline |
| Job | [Command step](/docs/pipelines/configure/step-types/command-step) |
| Step (`run:`) | Shell command within a command step |
| Executor | Agent queue or [Docker plugin](https://buildkite.com/resources/plugins/docker) |
| Orb | [Plugin](https://buildkite.com/resources/plugins/) |
| `requires` | [`depends_on`](/docs/pipelines/configure/depends-on) |
| `type: approval` | [Block step](/docs/pipelines/configure/step-types/block-step) |
| `store_artifacts` | [`artifact_paths`](/docs/pipelines/configure/artifacts) |
| `store_test_results` | [Test Engine](/docs/test-engine) |
| `persist_to_workspace` | `buildkite-agent artifact upload` |
| `attach_workspace` | `buildkite-agent artifact download` |
| `save_cache` / `restore_cache` | [Cache plugin](https://buildkite.com/resources/plugins/buildkite-plugins/cache-buildkite-plugin/) |
| `when` conditions | [Conditionals](/docs/pipelines/configure/conditionals) |
| `matrix` | [Build matrix](/docs/pipelines/configure/workflows/build-matrix) |
| Contexts | [Cluster secrets](/docs/pipelines/security/secrets) and `env` |
| `resource_class` | `agents: { queue: "..." }` |
| Serial groups (pipeline-number ordering) | [`priority`](/docs/pipelines/configure/step-types/command-step#priority) attribute |
| Scheduled workflows | [Scheduled builds](/docs/pipelines/configure/workflows/scheduled-builds) |
| Pipeline parameters (`<< pipeline.parameters.X >>`) | [Environment variables](/docs/pipelines/configure/environment-variables) or [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) |
| `setup: true` + continuation orb | `buildkite-agent pipeline upload` ([dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines)) |
| `when: always` | `depends_on` with `allow_failure: true` |
| `$CIRCLE_SHA1` | `$BUILDKITE_COMMIT` |
| `$CIRCLE_BRANCH` | `$BUILDKITE_BRANCH` |
| `$CIRCLE_BUILD_NUM` | `$BUILDKITE_BUILD_NUMBER` |
| `$CIRCLE_PR_NUMBER` | `$BUILDKITE_PULL_REQUEST` |

##### Key differences and benefits of migrating to Buildkite Pipelines

Beyond the syntax differences shown in the [example pipeline translation](#translate-an-example-circleci-configuration), migrating to Buildkite Pipelines provides several architectural advantages:

- **Hybrid architecture with infrastructure control:** Run builds on [Buildkite hosted agents](/docs/agent/buildkite-hosted), [self-hosted agents](/docs/agent/self-hosted) in your own environment, or a mix of both. Source code and secrets stay within your infrastructure, and agents connect outbound over HTTPS with no incoming firewall access required.
- **No concurrency limits:** Scale by adding agents without platform-imposed concurrency caps. Buildkite Pipelines supports 100,000+ concurrent agents with turnkey autoscaling through the [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) or [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s).
- **Dynamic pipelines:** Instead of relying on parameters and static conditional logic, [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) generate and modify steps at runtime using any language.
This is especially valuable for [monorepos](/docs/pipelines/best-practices/working-with-monorepos) and complex build graphs. - **Predictable pricing:** No credit-based billing or per-user charges based on commit activity. See [pricing](https://buildkite.com/pricing/) for details. For a detailed comparison, see [Advantages of migrating from CircleCI](/docs/pipelines/advantages/buildkite-vs-circleci). Be aware of common pipeline-translation mistakes, which might include: - Forgetting about fresh workspaces (leading to missing dependencies). - Using `cimg/*` Docker images without login shell and UID/GID workarounds. - Over-parallelizing interdependent steps. - Assuming tools from orbs are available (when you need explicit installation). ##### Next steps Explore these resources to enhance your migrated pipelines: - [Defining your pipeline steps](/docs/pipelines/defining-steps) - [Buildkite agent overview](/docs/agent) - [Plugins directory](https://buildkite.com/resources/plugins/) - [Dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) and the [Buildkite SDK](/docs/pipelines/configure/dynamic-pipelines/sdk) - [Buildkite agent hooks](/docs/agent/hooks) - [Using conditions](/docs/pipelines/configure/conditionals) - [Annotations](/docs/agent/cli/reference/annotate) - [Security](/docs/pipelines/security), [Secrets](/docs/pipelines/security/secrets), and [permissions](/docs/pipelines/security/permissions) - [Integrations](/docs/pipelines/integrations) - [Test Engine](/docs/test-engine) for test insights You can try the [Buildkite pipeline converter](/docs/pipelines/migration/pipeline-converter) to see how your existing CircleCI configuration might look converted to Buildkite Pipelines. 
With a basic understanding of the differences between Buildkite Pipelines and CircleCI, if you haven't already done so, run through the [Getting started with Pipelines](/docs/pipelines/getting-started) guide to get yourself set up to run pipelines in Buildkite Pipelines, and [create your own pipeline](/docs/pipelines/create-your-own). For migration assistance, contact support@buildkite.com. --- ### From Jenkins URL: https://buildkite.com/docs/pipelines/migration/from-jenkins #### Migrate from Jenkins This guide helps [Jenkins](https://www.jenkins.io) users migrate to Buildkite Pipelines, and covers key differences between the products. Buildkite Pipelines is a modern and flexible continuous integration and deployment (CI/CD) platform that provides a powerful and scalable build infrastructure for your applications. While Jenkins and Buildkite Pipelines have similar goals as CI/CD products, their approach differs. Buildkite Pipelines offers a hybrid model consisting of the following: - A software-as-a-service (SaaS) platform for visualization and management of CI/CD pipelines. - Agents for executing jobs—hosted by you, either on-premises or in the cloud. Buildkite addresses the pain points of Jenkins' users, namely its security issues (both in its [base code](https://www.cvedetails.com/vulnerability-list/vendor_id-15865/product_id-34004/Jenkins-Jenkins.html) and [plugins](https://securityaffairs.co/wordpress/132836/security/jenkins-plugins-zero-day-flaws.html)), time-consuming setup, and speed. This approach makes Buildkite more secure, scalable, and flexible. Follow the steps in this guide for a smooth migration from Jenkins to Buildkite Pipelines. ##### Understand the differences Most of the concepts will likely be familiar, but there are some differences to understand about the approaches. ###### System architecture While Jenkins is a general automation engine with plugins to add additional features, Buildkite Pipelines is a product specifically aimed at CI/CD. 
You can think of Buildkite Pipelines like Jenkins with the Pipeline suite of plugins. To simplify it, we'll refer to Jenkins Pipeline as just _Jenkins_ and Buildkite Pipelines as _Buildkite_. At a high level, Buildkite follows a similar architecture to Jenkins: - A central control plane that coordinates work and displays results through a web interface. * **Jenkins:** A _controller_. * **Buildkite:** The _Buildkite dashboard_. - A program that executes the work it receives from the control plane. * **Jenkins:** A combination of _nodes_, _executors_, and _agents_. * **Buildkite:** _Buildkite agents_. However, while you're responsible for scaling and operating both components in Jenkins, Buildkite manages the control plane as a SaaS offering (through the Buildkite dashboard). This reduces the operational burden on your team, as Buildkite takes care of platform maintenance, updates, and availability. The Buildkite dashboard also handles monitoring tools like logs, user access, and notifications. The program that executes work is called an _agent_ in Buildkite (also known as the [_Buildkite agent_](/docs/agent)). An agent is a small, reliable, and cross-platform build runner that connects your infrastructure to Buildkite. The Buildkite agent polls Buildkite for work, runs jobs, and reports results. You can install these agents on local machines, cloud servers, or other remote machines. The Buildkite agent code is open-source, and is [accessible from GitHub](https://github.com/buildkite/agent). The following diagram shows the split in Buildkite between its SaaS platform and Buildkite agents running in your infrastructure. The diagram shows that Buildkite provides a web interface, handles integrations with third-party tools, and offers APIs and webhooks. By design, sensitive data, such as source code and secrets, remain within your environment and are not seen by Buildkite. 
This decoupling provides flexibility, as you can scale your agents and build environment independently, while Buildkite manages the coordination of these agents, build scheduling, as well as associated metrics and insights available through its web interface. In Jenkins, concurrency is managed through multiple executors within a single node. In Buildkite, multiple agents can run on either a single machine or across multiple machines. More recently, Buildkite has provided its own [hosted agents](/docs/pipelines/architecture#buildkite-hosted-architecture) feature (as an alternative to this self-hosted, hybrid architecture, described above), as a managed solution that suits smaller teams, including those wishing to get up and running with Pipelines more rapidly. See [Buildkite Pipelines architecture](/docs/pipelines/architecture) to learn more about how you can set up Buildkite to work with your organization. ###### Security Security is crucial in CI/CD, protecting sensitive information, system integrity, and compliance with industry standards. Jenkins and Buildkite have different approaches to security, which impacts how you manage your CI/CD pipelines' security. Securing a Jenkins instance requires: - Careful configuration. - Plugin management. - Regular updates to address security vulnerabilities. You must consider vulnerabilities in both Jenkins' own [code base](https://www.cvedetails.com/vulnerability-list/vendor_id-15865/product_id-34004/Jenkins-Jenkins.html) and [plugins](https://securityaffairs.co/wordpress/132836/security/jenkins-plugins-zero-day-flaws.html). Additionally, since Jenkins is a self-hosted solution, you are responsible for securing the underlying infrastructure, network, and storage. Some updates require you to take Jenkins offline to perform them, leaving your team without access to CI/CD resources during that period. 
Buildkite's hybrid architecture, which combines the centralized Buildkite SaaS platform with your own [self-hosted Buildkite agents](/docs/pipelines/architecture#self-hosted-hybrid-architecture), provides a unique approach to security. Buildkite takes care of the security of the SaaS platform, including user authentication, pipeline management, and the web interface. Self-hosted Buildkite agents, which run on your infrastructure, allow you to maintain control over the environment, security, and other build-related resources. This separation reduces the operational burden and allows you to focus on securing the environments where your code is built and tested. While Buildkite provides its own secrets management capabilities through the Buildkite platform, the Buildkite platform can also be configured so that it doesn't store your secrets. Furthermore, Buildkite does not have or need access to your source code. Only the agents you host within your infrastructure would need access to clone your repositories, and your secrets that provide this access can also be managed through secrets management tools hosted within your infrastructure. This gives you all the benefits of a SaaS platform without many of the common security concerns. Both Jenkins and Buildkite support multiple authentication providers and offer granular access control. However, Buildkite's SaaS platform provides a more centralized and streamlined approach to user management, making it easier to enforce security policies and manage user access across your organization. See the [Security](/docs/pipelines/security) and [Secrets](/docs/pipelines/security/secrets) sections of these docs to learn more about how you can secure your Buildkite build environment, as well as manage secrets in your own infrastructure. ###### Pipeline configuration concepts When migrating your CI/CD pipelines from Jenkins to Buildkite, it's important to understand the differences in pipeline configuration concepts. 
Like Jenkins, Buildkite lets you create pipeline definitions in the web interface or in one or more related files checked into a repository. Most people prefer the latter, which keeps pipeline definitions with the code base they build, managed in source control. The equivalent of a `Jenkinsfile` in Buildkite is a `pipeline.yml`. You'll learn more about these differences further on in [Files and syntax](#pipeline-translation-fundamentals-files-and-syntax) of the Pipeline translation fundamentals. In Jenkins, the core description of work is a _job_. A job contains stages with steps and can trigger other jobs. A job can also load its `Jenkinsfile` from a repository. Installing the [Pipeline plugin](https://plugins.jenkins.io/workflow-aggregator/) lets you describe a workflow of jobs as a pipeline. Buildkite uses similar terms in different ways, where a _pipeline_ is the core description of work. A Buildkite pipeline contains different types of [_steps_](/docs/pipelines/configure/step-types) for different tasks: - [Command step](/docs/pipelines/configure/step-types/command-step): Runs one or more shell commands on one or more agents. - [Wait step](/docs/pipelines/configure/step-types/wait-step): Pauses a build until all previous jobs have completed. - [Block step](/docs/pipelines/configure/step-types/block-step): Pauses a build until unblocked. - [Input step](/docs/pipelines/configure/step-types/input-step): Collects information from a user. - [Trigger step](/docs/pipelines/configure/step-types/trigger-step): Creates a build on another pipeline. - [Group step](/docs/pipelines/configure/step-types/group-step): Displays a group of sub-steps as one parent step. Triggering a Buildkite pipeline creates a _build_, and any command steps are dispatched as _jobs_ to run on agents. A common practice is to define a pipeline with a single step that uploads the `pipeline.yml` file in the code repository.
The `pipeline.yml` contains the full pipeline definition and can be generated dynamically. Unlike the terms _job_ and _pipeline_, the concept of a _step_ is analogous in both Jenkins and Buildkite. ###### Plugin system Plugins are an essential part of both Jenkins and Buildkite, and they help you extend these products to further customize your CI/CD workflows. Rather than managing plugins through a web-based system as in Jenkins, in Buildkite you manage plugins directly in pipeline definitions. This means that teams can manage plugins at the pipeline level, with no need for monolithic, system-wide plugin administration. Jenkins plugins are typically Java-based, run in the Jenkins controller's Java virtual machine, and are shared across all pipelines. Therefore, a failure with one of these plugins can crash your entire Jenkins instance. Furthermore, since Jenkins plugins are closely integrated with Jenkins core, compatibility issues can often be encountered when either Jenkins core or its plugins are upgraded. Buildkite plugins are shell-based, run on individual Buildkite agents, and are pipeline- or even step-specific with independent versioning, such that plugins are only loosely coupled with Buildkite. Therefore, plugin failures are isolated to individual builds, and compatibility issues rarely arise when adopting newer plugin versions in Buildkite pipelines. ###### Try out Buildkite With a basic understanding of the differences between Buildkite and Jenkins, if you haven't already done so, run through the [Getting started with Pipelines](/docs/pipelines/getting-started) guide to get yourself set up to run pipelines in Buildkite, and [create your own pipeline](/docs/pipelines/create-your-own). ##### Provision agent infrastructure Buildkite agents: - Are where your builds, tests, and deployments run.
- Can either run as [Buildkite hosted agents](/docs/agent/buildkite-hosted), or on your infrastructure (known as _self-hosted_), providing flexibility and control over the environment and resources. Operating agents in a self-hosted environment is similar in approach to hosting nodes in Jenkins. If running self-hosted Buildkite agents, you'll need to consider the following: - **Infrastructure type:** Agents can run on various infrastructure types, including on-premises, cloud (AWS, GCP, Azure), or container platforms (Docker, Kubernetes). Based on your analysis of the existing Jenkins nodes, choose the infrastructure type that best suits your organization's needs and constraints. - **Resource usage:** Agent infrastructure is similar to the requirements for nodes in Jenkins, without operating the controller. Evaluate your Jenkins nodes' resource usage (CPU, memory, and disk space) to determine the requirements for your Buildkite agent infrastructure. - **Platform dependencies:** To run your pipelines, you'll need to ensure the agents have the necessary dependencies, such as programming languages, build tools, and libraries. Take note of the operating systems, libraries, tools, and dependencies installed on your Jenkins nodes. This information will help you configure your Buildkite agents. - **Network configurations:** Review the network configurations of your Jenkins nodes, including firewalls, proxy settings, and network access to external resources. These configurations will guide you in setting up the network environment for your Buildkite agents. The Buildkite agent works by polling Buildkite's [agent API](/docs/apis/agent-api) over HTTPS. There is no need to forward ports or provide incoming firewall access. - **Agent scaling:** Evaluate the number of concurrent builds and the build queue length in your Jenkins nodes to estimate the number of Buildkite agents needed. 
Keep in mind that you can scale Buildkite agents independently, allowing you to optimize resource usage and reduce build times. - **Build isolation and security:** Consider using separate agents for different projects or environments to ensure build isolation and security. You can use [agent tags](/docs/agent/cli/reference/start#setting-tags) and [clusters](/docs/pipelines/security/clusters) to target specific agents for specific pipeline steps, allowing for fine-grained control over agent allocation. You'll continue to adjust the agent configuration as you monitor performance to optimize build times and resource usage for your needs. See the [Installation](/docs/agent/self-hosted/install/) guides when you're ready to install an agent and follow the instructions for your infrastructure type. ##### Pipeline translation fundamentals A pipeline is a container for modeling and defining workflows. Both Jenkins and Buildkite can read a pipeline (configuration) file checked into a repository, which defines a workflow. Before translating any pipeline over from Jenkins to Buildkite, you should be aware of the following fundamental differences in how pipelines are written, and how their steps are executed and built by agents. You can then assess the goals of your Jenkins pipelines to see how you can translate them to achieve the same goals with Buildkite. ###### Files and syntax This table outlines the fundamental differences in pipeline files and their syntax between Jenkins and Buildkite. | Pipeline aspect | Jenkins | Buildkite | |-----------------|---------|-----------| | **Configuration file** | `Jenkinsfile` | `pipeline.yml` | | **Syntax** | Groovy-based domain-specific language (DSL) | YAML | | **Structure** | Strong hierarchy | Flat structure (more readable) | Buildkite's YAML-based pipeline syntax and definitions, along with its flat structure, is simpler, more human-readable, and easier to understand. 
Furthermore, you can even generate pipeline definitions at build-time with the power and flexibility of [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines). ###### Step execution By default, Jenkins runs its pipeline steps in sequence, whereas Buildkite runs steps simultaneously (that is, in parallel) on any available agents that can run them. However, you can achieve the opposite of these default behaviors in each product's pipelines. To make a Jenkins pipeline run its steps in parallel, the [`parallel` directive](https://www.jenkins.io/doc/book/pipeline/syntax/#parallel) is used explicitly in the Jenkins pipeline. For instance, in the following Jenkins pipeline snippet, the `Lint` and `Unit Tests` steps are run simultaneously. ```groovy // Jenkins: Explicit parallelization required to run steps in parallel parallel( "Lint": { sh 'npm run lint' }, "Unit Tests": { sh 'npm test' } ) ``` Conversely, to make a Buildkite pipeline run its steps in a specific order, use the [`depends_on` attribute](/docs/pipelines/configure/depends-on#defining-explicit-dependencies) on the step you want to run after the others. For instance, in the following Buildkite pipeline example, the `Lint` and `Test` steps are run in parallel (by default) first, whereas the `Build` step is run after the `Lint` and `Test` steps have completed. ```yaml # Buildkite: Explicit sequencing is required to make steps run in sequence steps: - label: "Lint" id: lint commands: [npm run lint] - label: "Test" id: test commands: [npm test] - label: "Build" depends_on: [lint, test] # Explicit dependency commands: [npm run build] ``` ###### Workspace state In Jenkins, all stages and steps in a pipeline share the same workspace. This means that dependencies installed in one stage are automatically available in subsequent stages.
For instance, in the following Jenkins pipeline snippet, the `Test` stage's step can make use of the `node_modules` artifacts installed by the previously executed `Install` stage's step. ```groovy // Jenkins: All stages share the same workspace. stage('Install') { sh 'npm install' // Creates node_modules } stage('Test') { sh 'npm test' // Uses the node_modules installed in the 'Install' stage } ``` In Buildkite, each step is executed in a fresh workspace. Therefore, even if you implement a [`wait` step](/docs/pipelines/configure/depends-on#implicit-dependencies-with-wait-and-block), artifacts from previously processed steps won't be available in subsequent steps. ```yaml # This won't work in Buildkite steps: - label: Install dependencies command: npm install - wait - label: Run tests command: npm test # Fails because node_modules won't be there ``` However, there are several options for sharing state between steps: - **Reinstall per step**: Simple for fast-installing dependencies like `npm ci` (instead of `npm install`). For instance, from the example above: ```yaml steps: # Install dependencies step - label: Run tests commands: # Obtain the required version of Node.js (22.x) - curl -fsSL https://deb.nodesource.com/setup_22.x | bash - sudo apt install nodejs # Installs this version of nodejs on the agent - npm ci # (Re-)installs node_modules - npm test # node_modules will be available ``` - **Buildkite artifacts**: You can upload [build artifacts](/docs/pipelines/configure/artifacts) from one step, which can then be downloaded and used in a subsequently processed step. This works best with small files and build outputs. - **Cache plugin**: Similar to build artifacts, you can also use the [Buildkite cache plugin](https://buildkite.com/resources/plugins/buildkite-plugins/cache-buildkite-plugin/), which is ideal for larger dependencies using cloud storage (S3, GCS). - **External storage**: Custom solutions for complex state management.
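As a sketch of the **Buildkite artifacts** option above, one step can upload its outputs and a later step can download them into its own fresh workspace. The step labels and the `./deploy.sh` script here are illustrative, not part of the example project:

```yaml
steps:
  - label: "Build"
    commands:
      - npm ci
      - npm run build
      - buildkite-agent artifact upload "dist/**/*"  # upload build outputs as artifacts

  - wait

  - label: "Deploy"
    commands:
      - buildkite-agent artifact download "dist/**/*" .  # fetch artifacts into this step's fresh workspace
      - ./deploy.sh  # illustrative deploy script
```

The `wait` step here only enforces ordering; it's the explicit `artifact upload` and `artifact download` commands that move files between the otherwise isolated workspaces.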
###### Agent targeting Jenkins uses a push-based agent targeting model, where the controller assigns work to pre-registered agents based on labels. Conversely, Buildkite uses a pull-based agent targeting model, where agents poll queues for work. This pull-based approach provides better security (no incoming connections to agents), easier scaling (through [ephemeral agents](/docs/pipelines/glossary#ephemeral-agent)), and more resilient networking. However, this difference between Jenkins and Buildkite will likely require you to rethink your agent topology when [provisioning your agent infrastructure](#provision-agent-infrastructure). ###### Plugins Following on from [Plugin system](#understand-the-differences-plugin-system), many popular Jenkins plugins become unnecessary in Buildkite due to native features like artifact handling, build visualization, and emoji support. ##### Translate an example Jenkins pipeline This section guides you through the process of translating a [declarative Jenkins pipeline](https://www.jenkins.io/doc/book/pipeline/syntax/#declarative-pipeline) example (which builds a [Node.js](https://nodejs.org/) app) into a Buildkite pipeline. This pipeline demonstrates typical features found in many Jenkins pipelines, which include: - Multiple [`stage` directives](https://www.jenkins.io/doc/book/pipeline/syntax/#stage) to group [steps](https://www.jenkins.io/doc/book/pipeline/syntax/#steps) for install, test, and build stages of the pipeline (executed sequentially). - The [`matrix` directive](https://www.jenkins.io/doc/book/pipeline/syntax/#declarative-matrix) to process this set of sequential stages in parallel using different versions of a build tool (that is, [Node.js](https://nodejs.org/)).
- A [`post` section](https://www.jenkins.io/doc/book/pipeline/syntax/#post) that uses the Jenkins core [`archiveArtifacts` step](https://www.jenkins.io/doc/pipeline/steps/core/#archiveartifacts-archive-the-artifacts) to save the artifacts to storage. ###### Step 1: Copy or fork the jenkins-to-buildkite repository The declarative Jenkins pipeline example can be found in the [jenkins-to-buildkite](https://github.com/buildkite/jenkins-to-buildkite) repository. Make a copy of, or fork, this repository (within your own GitHub account) to examine it further. This repository has its own containerized version of Jenkins, which you can run locally to see how it builds the Jenkins pipeline and app included within this repository. In your Buildkite organization, which you would have created or begun working with when [trying out Buildkite](#understand-the-differences-try-out-buildkite), [create a new pipeline](/docs/pipelines/create-your-own#create-a-pipeline) for this jenkins-to-buildkite repository, so that you can see and compare the same Node.js project being built in both Jenkins and Buildkite. ###### Step 2: Examine the Jenkins pipeline 1. Open your [Jenkinsfile](https://github.com/buildkite/jenkins-to-buildkite/blob/main/app/Jenkinsfile) to examine its stages, steps, matrix, and `post` section. 1. Identify the typical features of this pipeline: * **Matrix builds**: The pipeline is built twice—once with Node.js version 20.x and once with version 22.x. * **Agent targeting**: Agents are targeted using label-based selection, where the labels are the Node.js versions defined within the [`axes` section](https://www.jenkins.io/doc/book/pipeline/syntax/#matrix-axes). * **Tool management**: Node.js capabilities within the pipeline steps are handled by Jenkins' own built-in `nodejs` tool. * **Sequential stages**: Each `stage` within the [`stages` section](https://www.jenkins.io/doc/book/pipeline/syntax/#stages) is executed sequentially, with one stage containing parallel sub-steps.
Also note that since the `stages` section is wrapped in a `matrix` directive, the entire stages section is run in parallel (that is, twice, once using each Node.js version). * **Plugin usage**: The [`options` directive](https://www.jenkins.io/doc/book/pipeline/syntax/#options) uses the [Jenkins AnsiColor plugin](https://plugins.jenkins.io/ansicolor/) for output colorization. * **Artifact archiving**: Artifacts from the test coverage and build process are saved in the pipeline's `post` section. The execution flow of this pipeline follows a typical pattern: install dependencies, run lint and tests in parallel, and then build and archive artifacts. ###### Step 3: Plan the pipeline translation Now that you understand the pipeline's overall structure, execution flow, and goals, you can plan the translation of the Jenkins pipeline into a Buildkite one: 1. Create a basic `.buildkite/pipeline.yml` structure for your Buildkite pipeline, along with the three main Jenkins stages as Buildkite steps (lint, test, build). 1. Configure step dependencies in your Buildkite pipeline to ensure that steps that depend on other steps passing are only run if those other steps do pass. 1. Add a Node.js installation command to each step (to address [Buildkite's steps executing in fresh workspaces](#pipeline-translation-fundamentals-workspace-state)). 1. Implement the required [build matrix](/docs/pipelines/configure/workflows/build-matrix) configuration into your Buildkite pipeline. 1. Implement [build artifact](/docs/pipelines/configure/artifacts) collection to retain test coverage and build outputs. 1. Review the verbose result and understand why it works. 1. Refactor using YAML aliases to follow the DRY principle. This approach maintains functional equivalence while taking advantage of Buildkite's strengths, such as parallel execution and native artifact support.
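Before building out the steps, note how the `.buildkite/pipeline.yml` file you're about to write typically gets wired up: following the common practice mentioned earlier in this guide, the pipeline defined in the Buildkite dashboard often consists of a single step that uploads the checked-in file at build time:

```yaml
steps:
  - label: ":pipeline: Upload"
    command: buildkite-agent pipeline upload  # reads .buildkite/pipeline.yml by default
```

Every build then starts by fetching the repository's current pipeline definition, so changes to `.buildkite/pipeline.yml` take effect on the next build without touching the dashboard.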
###### Step 4: Create a basic Buildkite pipeline structure Begin by creating your initial `.buildkite/pipeline.yml` file with the basic step structure for the three main stages of the Jenkins pipeline, using [`command` steps](/docs/pipelines/configure/step-types/command-step) with [`label`](/docs/pipelines/configure/step-types/command-step#label) and `id` attributes: ```yaml steps: - label: "\:eslint\: Lint" id: lint commands: - echo "Lint step placeholder" - label: "\:vitest\: Test" id: test commands: - echo "Test step placeholder" - label: "\:wrench\: Build" commands: - echo "Build step placeholder" ``` > 📘 > Be aware that `commands` is an alias for `command`. Notice the immediate differences in this pipeline syntax from Jenkins: - YAML format instead of Groovy DSL. Each stage in the Jenkins pipeline is replaced by a single [`command` step](/docs/pipelines/configure/step-types/command-step) in the Buildkite pipeline (which may consist of one or more shell commands, executable files or scripts). Each of these three steps will be dispatched as a single job to an available Buildkite agent. - Emoji support in labels without plugins. - ID assignment for dependency references. **You should now see** a clean YAML structure that's more readable than the Groovy DSL. If you save this file and commit it to your repository, Buildkite will detect it automatically. ###### Step 5: Configure the step dependencies The build step should run only after lint and test complete successfully. Otherwise, running the build step when either the lint or test steps fail is a waste of resources that could result in longer running builds. 
Therefore, you should configure explicit dependencies on the build step, which will prevent it from running if either the lint or test steps fail: ```yaml - label: "\:wrench\: Build" depends_on: [lint, test] # Explicit dependencies commands: - echo "Build step placeholder" ``` Without this [`depends_on` attribute](/docs/pipelines/configure/depends-on#defining-explicit-dependencies), all three steps would run simultaneously, due to [Buildkite's parallel-by-default behavior](#pipeline-translation-fundamentals-step-execution). **You should now see** that the build step will wait for both lint and test to complete. This is the key difference from Jenkins' sequential-by-default model. ###### Step 6: Install Node.js and dependencies Now replace the [three placeholder commands you began with earlier](#translate-an-example-jenkins-pipeline-step-4-create-a-basic-buildkite-pipeline-structure) with real commands that install Node.js and its dependencies. Since each step begins with a fresh workspace when it's dispatched as a job to run on a Buildkite agent, Node.js and its dependencies must be installed in every step: ```yaml - label: "\:eslint\: Lint" id: lint commands: - curl -fsSL https://deb.nodesource.com/setup_22.x | bash - sudo apt install nodejs - cd app && npm ci - npm run lint ``` This highlights a key difference: In Jenkins, you can install plugins like [NodeJS](https://plugins.jenkins.io/nodejs/) and then leverage them within pipelines, while Buildkite requires explicit installation of such tools as part of the pipeline's build. In this example, `npm ci` is being used instead of `npm install` for faster, reproducible builds. **You should now see** the pattern emerging: every step needs to set up its own environment. However, you can address this repetition later using YAML aliases.
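These installation commands assume a Linux agent where `apt` and passwordless `sudo` are available. If your agent fleet is mixed, you can pin such steps to a suitable queue using the `agents` attribute. This is a sketch only; the queue name `linux` is an assumption and must match a queue your agents actually register with:

```yaml
  - label: "\:eslint\: Lint"
    id: lint
    agents:
      queue: "linux"  # assumed queue name; match it to your agent configuration
    commands:
      - curl -fsSL https://deb.nodesource.com/setup_22.x | bash
      - sudo apt install nodejs
      - cd app && npm ci
      - npm run lint
```

This is the queue-based, pull-model equivalent of the label-based agent targeting in the original Jenkins pipeline.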
###### Step 7: Implement a build matrix configuration Now implement the [build matrix](/docs/pipelines/configure/workflows/build-matrix) for Node.js 20 and 22: ```yaml - label: "\:eslint\: Lint (Node {{matrix.node_version}})" id: lint matrix: setup: node_version: [20, 22] commands: - curl -fsSL https://deb.nodesource.com/setup_{{matrix.node_version}}.x | bash - sudo apt install nodejs - cd app && npm ci - npm run lint ``` Buildkite's build matrix syntax is simpler than Jenkins—just specify the values in an array. The `{{matrix.node_version}}` template variable gets replaced at runtime, creating separate jobs for each Node.js version. **You should now see** that this single step definition will create two separate jobs: one for Node.js version 20 and one for Node.js version 22. The label will show "Lint (Node 20)" and "Lint (Node 22)" respectively. ###### Step 8: Implement artifact collection Now implement [build artifact](/docs/pipelines/configure/artifacts) collection to capture test coverage and build outputs using the [`artifact_paths` attribute](/docs/pipelines/configure/artifacts#upload-artifacts-with-a-command-step): ```yaml - label: "\:vitest\: Test (Node {{matrix.node_version}})" id: test matrix: setup: node_version: [20, 22] commands: - curl -fsSL https://deb.nodesource.com/setup_{{matrix.node_version}}.x | bash - sudo apt install nodejs - cd app && npm ci - npm test artifact_paths: - app/coverage/**/* # Collect test coverage - label: "\:wrench\: Build (Node {{matrix.node_version}})" matrix: setup: node_version: [20, 22] commands: - curl -fsSL https://deb.nodesource.com/setup_{{matrix.node_version}}.x | bash - sudo apt install nodejs - cd app && npm ci - npm run build depends_on: [lint, test] artifact_paths: - app/dist/**/* # Collect build outputs ``` Buildkite provides native artifact support, which means that no plugins are required for this functionality—just specify the glob patterns for files you want to preserve. 
**You should now see** that test coverage files are automatically collected and made available for download after each test run. Unlike Jenkins, this requires no additional plugin configuration. ###### Step 9: Review the verbose result Now, look at the complete verbose version to understand exactly what's being built. This shows the full working pipeline you should now have created (along with some minor syntactical tweaks) before optimization—a crucial checkpoint to ensure everything functions correctly: ```yaml steps: - label: "\:eslint\: Lint (Node {{matrix.node_version}})" id: lint matrix: setup: node_version: [20, 22] commands: - | curl -fsSL https://deb.nodesource.com/setup_{{matrix.node_version}}.x | bash sudo apt install nodejs cd app && npm ci - npm run lint - label: "\:vitest\: Test (Node {{matrix.node_version}})" id: test matrix: setup: node_version: [20, 22] commands: - | curl -fsSL https://deb.nodesource.com/setup_{{matrix.node_version}}.x | bash sudo apt install nodejs cd app && npm ci - npm test artifact_paths: - app/coverage/**/* - label: "\:wrench\: Build (Node {{matrix.node_version}})" matrix: setup: node_version: [20, 22] commands: - | curl -fsSL https://deb.nodesource.com/setup_{{matrix.node_version}}.x | bash sudo apt install nodejs cd app && npm ci - npm run build depends_on: - lint - test artifact_paths: - app/dist/**/* ``` While this Buildkite pipeline YAML syntax is substantially shorter than the original Jenkins declarative pipeline's Groovy DSL syntax, there still remains clear duplication in the YAML pipeline. However, this verbose version demonstrates that the translation of this pipeline from Jenkins to Buildkite works correctly—each step properly installs Node.js onto its Buildkite agent, sets up dependencies, and executes its remaining required commands. **You should now see** a fully functional pipeline that will create a total of six jobs: lint, test, and build for each of the two Node.js versions. 
The build jobs will wait for their corresponding lint and test jobs to complete. ###### Step 10: Refactor with YAML aliases Now that you've verified that the pipeline works, you can eliminate the duplication using YAML aliases. This refactoring maintains the same functionality while dramatically improving the pipeline code's maintainability: ```yaml common: install: &install | curl -fsSL https://deb.nodesource.com/setup_{{matrix.node_version}}.x | bash sudo apt install nodejs cd app && npm ci matrix: &matrix setup: node_version: [20, 22] steps: - label: "\:eslint\: Lint (Node {{matrix.node_version}})" id: lint matrix: *matrix commands: - *install - npm run lint - label: "\:vitest\: Test (Node {{matrix.node_version}})" id: test matrix: *matrix commands: - *install - npm test artifact_paths: - app/coverage/**/* - label: "\:wrench\: Build (Node {{matrix.node_version}})" matrix: *matrix commands: - *install - npm run build depends_on: - lint - test artifact_paths: - app/dist/**/* ``` The final result is now dramatically shorter than the original Jenkins pipeline, with no duplication and a cleaner, more readable structure. **You should now see** a maintainable pipeline where changes to the Node.js installation process or matrix configuration only need to be made in one place. The `&install` creates an anchor, and `*install` is an alias that references it. ##### Key differences and benefits of migrating to Buildkite This [example pipeline translation](#translate-an-example-jenkins-pipeline) demonstrates several important advantages of Buildkite's approach: - **Simpler pipeline configuration**: The resulting Buildkite YAML syntax is much more concise than the equivalent Jenkins Groovy DSL. - **Execution model**: Buildkite's steps are parallel by default with explicit sequencing, whereas Jenkins' stages are sequential by default with explicit parallelization.
- **Plugin usage**: Buildkite required no plugins, whereas Jenkins required two plugins ([AnsiColor](https://plugins.jenkins.io/ansicolor/) and [Build Name and Description Setter](https://plugins.jenkins.io/build-name-setter/)). - **Tool management**: Buildkite requires explicit tool installation for each step, with full control, whereas Jenkins manages tools through the use of plugins. - **Artifact handling**: Buildkite provides native archiving with glob pattern support, whereas Jenkins relies on plugin-based archiving. For larger deployments, these differences become more significant: - The fresh workspace model avoids state leakage between builds. - The pull-based agent model simplifies scaling and security. - Pipeline-specific plugin versioning eliminates dependency conflicts. Be aware of common pipeline-translation mistakes, which might include: - Forgetting about fresh workspaces (leading to missing dependencies). - Over-parallelizing interdependent steps. - Misunderstanding the queue-based agent targeting model. These Buildkite-specific patterns, however, encourage better pipeline design that's more resilient and scalable. ##### Audit your Jenkins pipelines Now that you've run through the process of translating a declarative Jenkins pipeline over to Buildkite, take an inventory of your existing Jenkins pipelines, plugins, and integrations. Determine which parts of your Jenkins setup are essential and which can be replaced or removed. This will help you decide what needs to be migrated to Buildkite. ##### Next steps Explore these Buildkite resources to learn more about Buildkite's features and functionality, and how to enhance your Buildkite pipelines translated from Jenkins: - [Defining your pipeline steps](/docs/pipelines/defining-steps) for an advanced guide on how to configure Buildkite pipeline steps. - [Buildkite agent overview](/docs/agent/cli/reference/step) page for more information about the Buildkite agent and guidance on how to configure it.
- [Plugins directory](https://buildkite.com/resources/plugins/) for a catalog of Buildkite- as well as community-developed plugins to enhance your pipeline functionality. - [Dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) to learn more about how to generate pipeline definitions programmatically at build-time, and how to facilitate this feature with the [Buildkite SDK](/docs/pipelines/configure/dynamic-pipelines/sdk). - [Buildkite agent hooks](/docs/agent/hooks) to extend or override the default behavior of Buildkite agents at different stages of their lifecycle. - [Using conditions](/docs/pipelines/configure/conditionals) to run pipeline builds or steps only when specific conditions have been met. - [Annotations](/docs/agent/cli/reference/annotate) that allow you to add additional information to your build result pages using Markdown. - [Security](/docs/pipelines/security) and [Secrets](/docs/pipelines/security/secrets) overview pages, which lead to details on how to manage secrets within your Buildkite infrastructure, as well as how to manage [permissions](/docs/pipelines/security/permissions) for your teams and Buildkite pipelines themselves. - [Integrations](/docs/pipelines/integrations) to integrate Buildkite's functionality with other third-party tools, for example, notifications that automatically let your team know about the success of your pipeline builds. - After configuring Buildkite Pipelines for your team, learn how to obtain actionable insights from the tests running in pipelines using [Test Engine](/docs/test-engine). If you need further assistance with your Jenkins migration processes and plans, please don't hesitate to reach out to our Buildkite support team at support@buildkite.com. We're here to help you use Buildkite to build your dream CI/CD workflows.
--- ### From Bitbucket Pipelines URL: https://buildkite.com/docs/pipelines/migration/from-bitbucket-pipelines #### Migrate from Bitbucket Pipelines This guide helps [Bitbucket Pipelines](https://bitbucket.org/product/features/pipelines) users migrate to Buildkite Pipelines, and covers key differences between the platforms. Bitbucket Pipelines is a CI/CD service built into [Bitbucket Cloud](https://bitbucket.org/product/) that uses a `bitbucket-pipelines.yml` file in your repository to define your build configuration. Buildkite Pipelines uses a similar YAML-based approach with `pipeline.yml`, but differs in its [hybrid architecture offering](/docs/pipelines/architecture), execution model, and how it handles containers and caching. Follow the steps in this guide for a smooth migration from Bitbucket Pipelines to Buildkite Pipelines. ##### Understand the differences Most Bitbucket Pipelines concepts translate to Buildkite Pipelines directly, but there are key differences to understand before migrating. ###### System architecture Bitbucket Pipelines is a fully hosted CI/CD service that runs jobs on Atlassian-managed infrastructure using Docker containers. Buildkite Pipelines offers a hybrid model, consisting of the following components: - A SaaS platform (the _Buildkite dashboard_) for visualization and pipeline management. - [Buildkite agents](/docs/agent) for executing jobs—through [Buildkite hosted agents](/docs/agent/buildkite-hosted) as a fully-managed service, or [self-hosted](/docs/agent/self-hosted) agents (hybrid model architecture) that you manage in your own infrastructure. The [Buildkite agent](https://github.com/buildkite/agent) is open source and can run on local machines, cloud servers, or containers. The hybrid model gives you more control over your build environment, scaling, and security compared to Bitbucket Pipelines' fully hosted approach. See [Buildkite Pipelines architecture](/docs/pipelines/architecture) for more details. 
###### Security The hybrid architecture of Buildkite Pipelines provides a unique approach to security. Buildkite Pipelines takes care of the security of its SaaS platform, including user authentication, pipeline management, and the web interface. Self-hosted Buildkite agents, which run on your infrastructure, allow you to maintain control over the environment, security, and other build-related resources. Buildkite does not have or need access to your source code. Only the agents you host within your infrastructure need access to clone your repositories. Your secrets can be managed through the Buildkite Pipelines [secrets management](/docs/pipelines/security/secrets/buildkite-secrets) feature, or through secrets management tools hosted within your infrastructure. Learn more about [Security](/docs/pipelines/security) and [Secrets](/docs/pipelines/security/secrets) in Buildkite Pipelines. ###### Pipeline configuration concepts The following table maps key Bitbucket Pipelines concepts to their Buildkite Pipelines equivalents. These are covered in more detail in [Pipeline translation fundamentals](#pipeline-translation-fundamentals). 
| Bitbucket Pipelines | Buildkite Pipelines | |---------------------|---------------------| | `bitbucket-pipelines.yml` | `pipeline.yml` | | `name` | `label` | | `script` | `command` | | `image` (global or per step) | [Docker plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-buildkite-plugin/) per step | | `parallel` | Steps without `depends_on` (parallel by default) | | `caches` | [Cache plugin](https://buildkite.com/resources/plugins/buildkite-plugins/cache-buildkite-plugin) | | `artifacts` | `artifact_paths` / `buildkite-agent artifact` | | `deployment` | `concurrency_group` + `block` step | | `size` | `agents` with `queue` attribute | | `condition.changesets` | `if_changed` | | `max-time` | `timeout_in_minutes` | | `pipelines.pull-requests` | Pipeline settings + `if` conditionals | | `definitions.steps` | YAML anchors in `common:` section | | `pipelines.custom` | `block` step or triggered pipeline | | `variables` (custom pipelines) | [`input` step](/docs/pipelines/configure/step-types/input-step) with `fields` | | `clone` | `BUILDKITE_GIT_CLONE_FLAGS` or `skip_checkout` | A Buildkite pipeline contains different types of [steps](/docs/pipelines/configure/step-types): - [Command step](/docs/pipelines/configure/step-types/command-step): Runs one or more shell commands on one or more agents. - [Wait step](/docs/pipelines/configure/step-types/wait-step): Pauses a build until all previous jobs have completed. - [Block step](/docs/pipelines/configure/step-types/block-step): Pauses a build until unblocked. - [Input step](/docs/pipelines/configure/step-types/input-step): Collects information from a user. - [Trigger step](/docs/pipelines/configure/step-types/trigger-step): Creates a build on another pipeline. - [Group step](/docs/pipelines/configure/step-types/group-step): Displays a group of sub-steps as one parent step. 
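As a brief sketch, several of these step types can be combined in a single pipeline (the step labels, commands, and the trigger target `deploy-pipeline` are hypothetical):

```yaml
steps:
  # Command step: runs on an agent
  - label: "Test"
    command: "npm test"
  # Wait step: pauses until all previous jobs complete
  - wait
  # Block step: pauses until a user unblocks the build
  - block: "Release to production?"
  # Trigger step: creates a build on another pipeline
  - trigger: "deploy-pipeline"
```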
Triggering a Buildkite pipeline creates a [_build_](/docs/pipelines/glossary#build), and any command steps are dispatched as [_jobs_](/docs/pipelines/glossary#job) to run on agents. A common practice is to define a pipeline with a single step that uploads the `pipeline.yml` file in the code repository. The `pipeline.yml` contains the full pipeline definition and can be generated [dynamically](/docs/pipelines/configure/dynamic-pipelines). ###### Plugin system Bitbucket Pipelines extends its built-in functionality through [Pipes](https://bitbucket.org/product/features/pipelines/integrations)—pre-packaged Docker containers for common tasks. Buildkite Pipelines uses shell-based [plugins](/docs/pipelines/integrations/plugins) that hook into the agent's [job lifecycle](/docs/agent/hooks#job-lifecycle-hooks) and are versioned per step. Both are declared directly in pipeline YAML. For detailed comparisons and examples, see [Plugins](#pipeline-translation-fundamentals-plugins) in [Pipeline translation fundamentals](#pipeline-translation-fundamentals). ###### Try out Buildkite With a basic understanding of the differences between Buildkite and Bitbucket Pipelines, if you haven't already done so, run through the [Getting started with Pipelines](/docs/pipelines/getting-started) guide to get yourself set up to run pipelines in Buildkite, and [create your own pipeline](/docs/pipelines/create-your-own). ##### Provision agent infrastructure Buildkite agents run your builds, tests, and deployments. They can run as [Buildkite hosted agents](/docs/agent/buildkite-hosted) where the infrastructure is provided for you, or on your own infrastructure ([self-hosted](/docs/pipelines/architecture#self-hosted-hybrid-architecture)), similar to self-hosted runners in Bitbucket Pipelines. 
For self-hosted agents, consider: - **Infrastructure type:** On-premises, cloud ([AWS](/docs/agent/self-hosted/aws), [GCP](/docs/agent/self-hosted/gcp)), or container platforms ([Docker](/docs/agent/self-hosted/install/docker), [Kubernetes](/docs/agent/self-hosted/agent-stack-k8s)). - **Resource usage:** Evaluate CPU, memory, and disk requirements based on your current Bitbucket Pipelines runner usage. - **Platform dependencies:** Ensure agents have required tools and libraries. Unlike Bitbucket Pipelines, where Docker images provide pre-configured environments, Buildkite agents require explicit tool installation or pre-built agent images. - **Network:** Agents poll the Buildkite [agent API](/docs/apis/agent-api) over HTTPS so no incoming firewall access is needed. - **Scaling:** Scale agents independently based on concurrent job requirements. - **Build isolation:** Use [agent tags](/docs/agent/cli/reference/start#setting-tags) and [clusters](/docs/pipelines/security/clusters) to target specific agents. For Buildkite hosted agents, see the [Getting started](/docs/agent/buildkite-hosted#getting-started-with-buildkite-hosted-agents) guide. For self-hosted agents, see the [Installation](/docs/agent/self-hosted/install/) guides for your infrastructure type. ##### Pipeline translation fundamentals Before translating any pipeline from Bitbucket Pipelines to Buildkite Pipelines, be aware of the following fundamental differences. ###### Files and syntax | Pipeline aspect | Bitbucket Pipelines | Buildkite Pipelines | |-----------------|---------------------|---------------------| | **Configuration file** | `bitbucket-pipelines.yml` | `pipeline.yml` | | **Syntax** | YAML | YAML | | **Location** | Repository root | `.buildkite/` directory (by convention) | Both platforms use YAML, making the syntax transition straightforward. The main differences are in the attribute names and structure. 
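As noted earlier, a common convention is for the pipeline's only dashboard-configured step to upload the full definition from the repository. By default, `buildkite-agent pipeline upload` looks for `.buildkite/pipeline.yml`:

```yaml
# Initial step configured in the Buildkite dashboard
steps:
  - label: "\:pipeline\: Upload"
    command: buildkite-agent pipeline upload
```

This pattern keeps the pipeline definition in version control alongside your code, and is also the entry point for generating steps dynamically.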
Unlike Bitbucket Pipelines, where the pipeline configuration is static, Buildkite Pipelines also supports [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines)—the ability to generate pipeline definitions programmatically at build-time. ###### Step execution Bitbucket Pipelines runs steps sequentially by default, requiring an explicit `parallel` block for concurrent execution. Buildkite Pipelines runs steps in parallel by default, requiring explicit `depends_on` for sequential execution. **Bitbucket Pipelines:** ```yaml #### Bitbucket: Explicit parallel block required - parallel: - step: name: Unit tests script: - npm run test:unit - step: name: Integration tests script: - npm run test:integration - step: name: Deploy script: - ./deploy.sh ``` **Buildkite Pipelines:** ```yaml #### Buildkite: Steps run in parallel by default steps: # These run in parallel (no depends_on) - label: "\:test_tube\: Unit tests" key: "unit-tests" command: "npm run test:unit" - label: "\:test_tube\: Integration tests" key: "integration-tests" command: "npm run test:integration" # This waits for the parallel steps - label: "\:rocket\: Deploy" command: "./deploy.sh" depends_on: - "unit-tests" - "integration-tests" ``` **Buildkite Pipelines (with group for visual organization):** ```yaml steps: - group: "\:test_tube\: Tests" key: "tests" steps: - label: "Unit tests" command: "npm run test:unit" - label: "Integration tests" command: "npm run test:integration" - label: "\:rocket\: Deploy" command: "./deploy.sh" depends_on: "tests" ``` ###### Container images Bitbucket Pipelines supports a global `image` that applies to all steps, with the option to override it on individual steps. Buildkite Pipelines has no global image setting. Instead, use the [Docker plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-buildkite-plugin/) on each step. To reduce repetition, use a YAML anchor for your default image and override it on steps that need a different image. 
**Bitbucket Pipelines:** ```yaml image: node:20 pipelines: default: - step: name: Build script: - npm run build - step: name: Deploy image: amazon/aws-cli:latest script: - ./deploy.sh ``` **Buildkite Pipelines (with YAML anchor):** ```yaml common: - docker_plugin: &docker docker#v5.12.0: image: node:20 steps: - label: "Build" command: - npm run build plugins: - *docker - label: "Deploy" command: - ./deploy.sh plugins: - docker#v5.12.0: image: amazon/aws-cli:latest ``` ###### Workspace state In Bitbucket Pipelines, artifacts can be passed between steps automatically. In Buildkite Pipelines, each step runs in a fresh workspace, requiring explicit artifact upload and download. **Bitbucket Pipelines:** ```yaml - step: name: Build script: - npm run build artifacts: - dist/** - step: name: Deploy script: - ./deploy.sh ``` **Buildkite Pipelines:** ```yaml steps: - label: "Build" key: "build" command: - npm run build artifact_paths: - "dist/**" - label: "Deploy" depends_on: "build" command: - buildkite-agent artifact download "dist/**" . - ./deploy.sh ``` Unlike Bitbucket Pipelines, where artifacts from previous steps are automatically available, Buildkite Pipelines requires you to explicitly manage state between steps. There are several options for sharing state: - **Reinstall per step**: For fast-installing dependencies like `npm ci`, reinstall them in each step rather than sharing `node_modules` between steps. - **Buildkite artifacts**: Upload [build artifacts](/docs/pipelines/configure/artifacts) from one step using `artifact_paths`, then download them in a subsequent step with `buildkite-agent artifact download`. This works best with small files and build outputs. - **Cache plugin**: Use the [cache plugin](https://buildkite.com/resources/plugins/buildkite-plugins/cache-buildkite-plugin) for larger dependencies using cloud storage (S3, GCS). This is the closest equivalent to Bitbucket's built-in `caches` feature. 
- **External storage**: Custom solutions for complex state management. ###### Branch filtering Bitbucket Pipelines uses `pipelines.branches` to define different step lists per branch. In Buildkite Pipelines, use the `branches` attribute on individual steps. **Bitbucket Pipelines:** ```yaml pipelines: branches: main: - step: name: Deploy to production script: - ./deploy.sh prod develop: - step: name: Run tests script: - npm test ``` **Buildkite Pipelines:** ```yaml steps: - label: "Deploy to production" command: "./deploy.sh prod" branches: "main" - label: "Run tests" command: "npm test" branches: "develop" ``` ###### Deployment environments Bitbucket Pipelines uses `deployment` to tag steps for environment tracking, and `trigger: manual` for manual approval. In Buildkite Pipelines, use `concurrency_group` for deployment serialization and [`block` steps](/docs/pipelines/configure/step-types/block-step) for manual approval. **Bitbucket Pipelines:** ```yaml - step: name: Deploy to production deployment: production trigger: manual script: - ./deploy.sh production ``` **Buildkite Pipelines:** ```yaml steps: - block: "Deploy to production?" branches: "main" - label: "Deploy to production" command: "./deploy.sh production" branches: "main" concurrency: 1 concurrency_group: "deploy-production" ``` ###### Agent targeting Bitbucket Pipelines uses the `size` attribute to select larger runners (for example, `size: 2x` for double the resources). Buildkite Pipelines uses [queues](/docs/agent/queues) to route jobs to agents with the right resources. Use the `agents` attribute on a step to target a specific queue. Map Bitbucket runner sizes to queues with agents sized to match your workload requirements. 
**Bitbucket Pipelines:** ```yaml - step: name: Build size: 2x script: - npm run build ``` **Buildkite Pipelines:** ```yaml steps: - label: "Build" command: "npm run build" agents: queue: "large" ``` You can also use custom [agent tags](/docs/agent/cli/reference/start#setting-tags) beyond `queue` to [target agents](/docs/agent/cli/reference/start#agent-targeting) by capability, for example: ```yaml agents: os: "linux" arch: "arm64" ``` ###### Plugins Bitbucket [Pipes](https://bitbucket.org/product/features/pipelines/integrations) (see [Plugin system](#understand-the-differences-plugin-system) for context) may have an equivalent Buildkite Pipelines [plugin](/docs/pipelines/integrations/plugins), which are shell-based extensions that hook into the agent's [job lifecycle](/docs/agent/hooks#job-lifecycle-hooks). **Bitbucket Pipelines:** A Pipe that performs a common task like deploying to AWS or sending Slack notifications, is referenced directly in the pipeline YAML: ```yaml #### Bitbucket Pipelines: Using a Pipe - pipe: atlassian/aws-s3-deploy:1.1.0 variables: AWS_DEFAULT_REGION: "us-east-1" S3_BUCKET: "my-bucket" LOCAL_PATH: "dist" ``` **Buildkite Pipelines:** The equivalent plugin would be referenced directly in your pipeline YAML and versioned per step: ```yaml #### Buildkite Pipelines: Using a plugin steps: - label: "Deploy to S3" plugins: - aws-s3-deploy#v1.0.0: bucket: "my-bucket" local-path: "dist" ``` Key differences between the two approaches: - Bitbucket Pipes run as separate Docker containers within a step. Buildkite plugins are shell-based hooks that run directly on the agent, giving them more flexibility to modify the build environment. - Bitbucket bakes many capabilities into the platform natively (caching, artifacts, services, deployments). 
In Buildkite Pipelines, some of these capabilities are provided through plugins, such as the [Docker plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-buildkite-plugin/), [cache plugin](https://buildkite.com/resources/plugins/buildkite-plugins/cache-buildkite-plugin), and [Docker Compose plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/). - Buildkite plugin failures are isolated to individual builds, with no system-wide plugin management required. Browse available plugins in the [plugins directory](https://buildkite.com/resources/plugins/). ##### Translate an example Bitbucket Pipelines configuration This section walks through translating a typical Bitbucket Pipelines configuration for a Node.js application into a Buildkite pipeline. The example Bitbucket pipeline demonstrates common features, including: - A global `image` applied to all steps. - A `parallel` block for running tests concurrently. - Built-in `caches` for `node_modules`. - `artifacts` passed between steps. - A `deployment` step with `trigger: manual` for production releases. ###### The original Bitbucket pipeline Here is the complete `bitbucket-pipelines.yml`: ```yaml image: node:20 definitions: caches: app-node: app/node_modules pipelines: branches: main: - parallel: - step: name: Lint caches: - app-node script: - cd app && npm ci - npm run lint - step: name: Test caches: - app-node script: - cd app && npm ci - npm test artifacts: - app/coverage/** - step: name: Build caches: - app-node script: - cd app && npm ci - npm run build artifacts: - app/dist/** - step: name: Deploy to production deployment: production trigger: manual script: - ./deploy.sh production ``` This pipeline runs lint and test in parallel, builds the application after both pass, and then waits for manual approval before deploying to production. 
###### Step 1: Create a basic Buildkite pipeline structure Start by creating a `.buildkite/pipeline.yml` file with the basic step structure, translating `name` to `label` and `script` to `command`: ```yaml steps: - label: "Lint" command: - echo "Lint placeholder" - label: "Test" command: - echo "Test placeholder" - label: "Build" command: - echo "Build placeholder" ``` Notice that there is no `parallel` block. In Buildkite Pipelines, these three steps will run in parallel by default. ###### Step 2: Add step dependencies The build step should only run after lint and test complete. Add `key` attributes and a `depends_on` to the build step: ```yaml steps: - label: "Lint" key: "lint" command: - echo "Lint placeholder" - label: "Test" key: "test" command: - echo "Test placeholder" - label: "Build" depends_on: - "lint" - "test" command: - echo "Build placeholder" ``` Without `depends_on`, all three steps would run simultaneously. This gives you the same execution order as the Bitbucket pipeline: lint and test in parallel, then build. ###### Step 3: Add the Docker plugin for the container image Bitbucket Pipelines' global `image: node:20` must be applied per step in Buildkite Pipelines using the [Docker plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-buildkite-plugin/). Use a YAML anchor to avoid repetition: ```yaml common: - docker_plugin: &docker docker#v5.12.0: image: node:20 steps: - label: "Lint" key: "lint" command: - cd app && npm ci - npm run lint plugins: - *docker - label: "Test" key: "test" command: - cd app && npm ci - npm test plugins: - *docker - label: "Build" depends_on: - "lint" - "test" command: - cd app && npm ci - npm run build plugins: - *docker ``` The `common` section is ignored by Buildkite but allows you to define reusable YAML anchors. This replaces Bitbucket's global `image` with an equivalent per-step configuration. 
###### Step 4: Add artifact handling Bitbucket Pipelines makes artifacts automatically available to subsequent steps. In Buildkite Pipelines, use `artifact_paths` to upload and `buildkite-agent artifact download` to retrieve them: ```yaml - label: "Test" key: "test" command: - cd app && npm ci - npm test plugins: - *docker artifact_paths: - "app/coverage/**" - label: "Build" depends_on: - "lint" - "test" command: - cd app && npm ci - npm run build plugins: - *docker artifact_paths: - "app/dist/**" ``` The deploy step will need to explicitly download the build artifacts before deploying: ```yaml - label: "Deploy to production" command: - buildkite-agent artifact download "app/dist/**" . - ./deploy.sh production ``` ###### Step 5: Add the deployment gate Bitbucket Pipelines uses `trigger: manual` for manual approval. In Buildkite Pipelines, use a [`block` step](/docs/pipelines/configure/step-types/block-step) and `concurrency_group` to serialize deployments: ```yaml - block: "Deploy to production?" depends_on: "build" - label: "Deploy to production" command: - buildkite-agent artifact download "app/dist/**" . - ./deploy.sh production concurrency: 1 concurrency_group: "deploy-production" ``` ###### Step 6: Add branch filtering The Bitbucket pipeline runs only on the `main` branch. In Buildkite Pipelines, add `branches: "main"` to each step, or configure branch filtering in your pipeline settings. 
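The `branches` attribute also accepts space-separated lists, wildcard patterns, and negations, which allows more selective filtering than a single branch name. A brief sketch (step names and scripts are illustrative):

```yaml
steps:
  - label: "Run tests"
    command: "npm test"
    branches: "main release/*"   # main plus any release branch
  - label: "Notify"
    command: "./notify.sh"
    branches: "!production"      # every branch except production
```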
###### Step 7: Review the complete result Here is the complete translated Buildkite pipeline: ```yaml common: - docker_plugin: &docker docker#v5.12.0: image: node:20 steps: - label: "\:eslint\: Lint" key: "lint" branches: "main" command: - cd app && npm ci - npm run lint plugins: - *docker - label: "\:test_tube\: Test" key: "test" branches: "main" command: - cd app && npm ci - npm test plugins: - *docker artifact_paths: - "app/coverage/**" - label: "\:package\: Build" key: "build" branches: "main" depends_on: - "lint" - "test" command: - cd app && npm ci - npm run build plugins: - *docker artifact_paths: - "app/dist/**" - block: "\:rocket\: Deploy to production?" branches: "main" depends_on: "build" - label: "\:rocket\: Deploy to production" branches: "main" command: - buildkite-agent artifact download "app/dist/**" . - ./deploy.sh production concurrency: 1 concurrency_group: "deploy-production" ``` Compared to the original Bitbucket pipeline, this Buildkite pipeline: - Replaces the global `image` with a Docker plugin YAML anchor. - Removes the explicit `parallel` block, since Buildkite steps run in parallel by default. - Uses `depends_on` for sequential ordering instead of relying on step position. - Makes artifact passing explicit with `artifact_paths` and `buildkite-agent artifact download`. - Replaces `trigger: manual` with a `block` step for deployment approval. - Replaces `deployment: production` with `concurrency_group` for deployment serialization. > 📘 Caching > The Bitbucket pipeline used built-in `caches` for `node_modules`. For Buildkite hosted agents, use [cache volumes](/docs/agent/buildkite-hosted/cache-volumes) and [enable container caching](/docs/agent/buildkite-hosted/cache-volumes#container-cache-volumes-enabling-container-cache-volumes). For self-hosted agents, use the [cache plugin](https://buildkite.com/resources/plugins/buildkite-plugins/cache-buildkite-plugin). 
Since each step already runs `npm ci`, caching is an optimization you can add later. ##### Translating common patterns This section covers additional Bitbucket Pipelines features and patterns not demonstrated in the [example translation above](#translate-an-example-bitbucket-pipelines-configuration). ###### Pull request pipelines Bitbucket Pipelines uses `pipelines.pull-requests` to define steps that run only on pull requests. In Buildkite Pipelines, pull request (PR) triggers are configured in the pipeline settings rather than in YAML. **Bitbucket Pipelines:** ```yaml pipelines: pull-requests: "**": - step: name: Test script: - npm test ``` **Buildkite Pipelines:** Configure PR builds in your pipeline settings under the source control provider section (for example, in your pipeline's **Settings** > **GitHub** or **Bitbucket** page) by enabling **Build pull requests**. Steps defined in your pipeline YAML will run for both branch pushes and PRs unless filtered. For PR-only steps, use an `if` conditional: ```yaml steps: - label: "PR-only tests" command: "npm run test:pr" if: build.pull_request.id != null ``` ###### Reusable step definitions Bitbucket Pipelines defines reusable steps under `definitions.steps` using YAML anchors. Buildkite Pipelines supports the same pattern using a `common` section (a top-level key that Buildkite Pipelines ignores) to hold YAML anchors. **Bitbucket Pipelines:** ```yaml definitions: steps: - step: &build-step name: Build script: - npm run build - step: &test-step name: Test script: - npm test pipelines: branches: main: - step: *build-step - step: *test-step develop: - step: *build-step ``` **Buildkite Pipelines:** ```yaml common: - build_step: &build-step label: "Build" command: - npm run build - test_step: &test-step label: "Test" command: - npm test steps: - <<: *build-step branches: "main develop" - <<: *test-step branches: "main" ``` ###### Conditional steps based on changed files Bitbucket Pipelines uses `condition.changesets` to run a step only when specified files have changed. In Buildkite Pipelines, use the `if_changed` attribute on a step. > 📘 if_changed requires dynamic pipeline upload > The `if_changed` attribute is processed only by `buildkite-agent pipeline upload`.
Store your pipeline YAML in the repository (for example, `.buildkite/pipeline.yml`) and use a pipeline upload step. ###### Service containers Bitbucket Pipelines uses `definitions.services` and `services` to run sidecar containers. In Buildkite Pipelines, use the [Docker Compose plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) with a `docker-compose.yml` file. **Bitbucket Pipelines:** ```yaml definitions: services: mysql: image: mysql:5.7 environment: MYSQL_DATABASE: test_db MYSQL_ROOT_PASSWORD: password pipelines: default: - step: name: Integration tests services: - mysql script: - npm run test:integration ``` **Buildkite Pipelines:** The service definition moves out of the pipeline YAML and into a separate `docker-compose.test.yml` file: ```yaml services: app: build: . depends_on: [mysql] mysql: image: mysql:5.7 environment: MYSQL_DATABASE: test_db MYSQL_ROOT_PASSWORD: password ``` The pipeline step references this file through the Docker Compose plugin: ```yaml steps: - label: "Integration tests" plugins: - docker-compose#v5.5.0: run: app config: docker-compose.test.yml command: - npm run test:integration ``` ###### Timeouts Bitbucket Pipelines uses `options.max-time` for a global timeout and `max-time` per step. In Buildkite Pipelines, configure a default timeout in your pipeline's **Settings** > **Builds** > **Default command step timeout**, or use `timeout_in_minutes` on individual steps. 
**Bitbucket Pipelines:** ```yaml options: max-time: 40 pipelines: default: - step: name: Build script: - npm run build - step: name: Test max-time: 10 script: - npm test ``` **Buildkite Pipelines (pipeline settings default):** If your pipeline's default command step timeout is 40 minutes, for example, the `npm run build` step will time out after 40 minutes, while the `npm test` step will time out after 10 minutes because its `timeout_in_minutes` attribute overrides the default: ```yaml steps: - label: "Build" command: "npm run build" - label: "Test" command: "npm test" timeout_in_minutes: 10 ``` **Buildkite Pipelines (YAML anchor):** Alternatively, to keep the global timeout in version control, use a YAML anchor: ```yaml common: - timeout: &default-timeout timeout_in_minutes: 40 steps: - label: "Build" command: "npm run build" <<: *default-timeout - label: "Test" command: "npm test" <<: *default-timeout ``` ###### Variables Bitbucket Pipelines defines variables in the repository's **Pipelines** > **Variables** settings, and they are referenced in scripts as environment variables. **Buildkite Pipelines (static variables):** ```yaml env: ENVIRONMENT: staging steps: - label: "Build" command: "./build.sh" - label: "Test" env: ENVIRONMENT: test command: "./test.sh" ``` Top-level `env` applies to all steps. Per-step `env` overrides or extends the top-level values. You can also configure pipeline-level environment variables in **Pipeline Settings** > **Environment Variables**. **Bitbucket Pipelines (user-prompted variables):** ```yaml pipelines: custom: deploy: - variables: - name: Environment default: staging allowed-values: - staging - production - step: name: Deploy script: - ./deploy.sh $Environment ``` **Buildkite Pipelines (user-prompted variables):** ```yaml steps: - input: "Configure deployment" fields: - select: "Environment" key: "deploy-environment" default: "staging" options: - label: "Staging" value: "staging" - label: "Production" value: "production" - label: "Deploy" command: - ./deploy.sh $(buildkite-agent meta-data get "deploy-environment") ``` ###### Clone settings Bitbucket Pipelines uses `clone` to configure clone depth, Git LFS, or to skip cloning entirely.
In Buildkite Pipelines, configure clone behavior using environment variables, or skip the checkout entirely for individual steps. **Bitbucket Pipelines:** ```yaml clone: depth: 1 lfs: true ``` **Buildkite Pipelines:** For shallow clones, set the `BUILDKITE_GIT_CLONE_FLAGS` environment variable: ```yaml env: BUILDKITE_GIT_CLONE_FLAGS: "--depth=1" steps: - label: "Build" command: "npm run build" ``` To skip cloning entirely, use a checkout-skipping plugin on the step: ```yaml steps: - label: "Notify" command: "./send-notification.sh" plugins: - skip-checkout#v1.0.0: ~ ``` For Git LFS, ensure `git-lfs` is installed on your agents. ###### Fail-fast behavior Bitbucket Pipelines uses `fail-fast: true` on a `parallel` block to cancel remaining steps when one fails. In Buildkite Pipelines, use `cancel_on_build_failing: true` on each step that should be canceled when the build is failing. **Bitbucket Pipelines:** ```yaml - parallel: fail-fast: true steps: - step: name: Test 1 script: - npm run test:1 - step: name: Test 2 script: - npm run test:2 ``` **Buildkite Pipelines:** ```yaml steps: - label: "Test 1" command: "npm run test:1" cancel_on_build_failing: true - label: "Test 2" command: "npm run test:2" cancel_on_build_failing: true ``` ###### Cleanup commands Bitbucket Pipelines uses `after-script` for commands that run regardless of step success or failure. In Buildkite Pipelines, use a shell `trap` within your command or a [job lifecycle `post-command` hook](/docs/agent/hooks#job-lifecycle-hooks). **Bitbucket Pipelines:** ```yaml - step: name: Build script: - npm install - npm run build after-script: - echo "Cleaning up..."
- rm -rf temp/ ``` **Buildkite Pipelines (shell trap—per step):** ```yaml steps: - label: "Build" command: | cleanup() { echo "Cleaning up..."; rm -rf temp/; } trap cleanup EXIT npm install npm run build ``` **Buildkite Pipelines (job lifecycle hook):** For cleanup that applies to every step, create a `.buildkite/hooks/post-command` file in your repository: ```bash #!/bin/bash echo "Cleaning up..." rm -rf temp/ ``` ##### Next steps Explore these Buildkite resources to help you enhance your migrated pipelines: - [Defining your pipeline steps](/docs/pipelines/defining-steps) for an advanced guide on how to configure Buildkite pipeline steps. - [Dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) to learn how to generate pipeline definitions at build-time. - [Plugins directory](https://buildkite.com/resources/plugins/) for a catalog of Buildkite- and community-developed plugins to enhance your pipeline functionality. - [Buildkite agent hooks](/docs/agent/hooks) to extend or override the default behavior of Buildkite agents. - [Using conditionals](/docs/pipelines/configure/conditionals) to run pipeline builds or steps only when specific conditions are met. - [Security](/docs/pipelines/security) and [Secrets](/docs/pipelines/security/secrets) overview pages for managing security and secrets within your Buildkite infrastructure. - After configuring Buildkite Pipelines for your team, learn how to obtain actionable insights from the tests running in pipelines using [Test Engine](/docs/test-engine). If you need further assistance with your Bitbucket Pipelines migration, reach out to the Buildkite support team at support@buildkite.com. --- ### From Bamboo URL: https://buildkite.com/docs/pipelines/migration/from-bamboo #### Migrate from Bamboo Migrating continuous integration tools can be challenging, so we've put together a guide to help you transition your Bamboo skills to Buildkite Pipelines. 
##### Plans to pipelines You can easily map most Bamboo workflows to Buildkite Pipelines. _Projects and Plans_ in Bamboo are called [pipelines](/docs/pipelines) in Buildkite (and **Pipelines** in the Buildkite dashboard). Bamboo deployments also become Buildkite's pipelines. A Buildkite pipeline contains different types of [_steps_](/docs/pipelines/configure/step-types) for different tasks: - [Command step](/docs/pipelines/configure/step-types/command-step): Runs one or more shell commands on one or more agents. - [Wait step](/docs/pipelines/configure/step-types/wait-step): Pauses a build until all previous jobs have completed. - [Block step](/docs/pipelines/configure/step-types/block-step): Pauses a build until unblocked. - [Input step](/docs/pipelines/configure/step-types/input-step): Collects information from a user. - [Trigger step](/docs/pipelines/configure/step-types/trigger-step): Creates a build on another pipeline. - [Group step](/docs/pipelines/configure/step-types/group-step): Displays a group of sub-steps as one parent step. For example, a test and deploy pipeline might consist of the following steps: ```yaml steps: # First stage - command: test_1.sh - command: test_2.sh - wait # Second stage - command: deploy.sh ``` Instead of the `wait` step above, you could use a `block` step to stop the build and require a user to manually _unblock_ the pipeline by clicking the **Continue** button in the Buildkite dashboard, or use the [Unblock Job](/docs/apis/rest-api/jobs#unblock-a-job) REST API endpoint. This is the equivalent of a _Manual Stage_ in Bamboo. 
```yaml steps: - command: test_1.sh - command: test_2.sh - block: 'Deploy to Production' - command: deploy.sh ``` Consider an example Bamboo Plan that compiles an application, runs tests and linting, and then deploys. You can map such a plan to a Buildkite pipeline using a combination of `command`, `wait`, and `block` steps, defined in the following `pipeline.yml` file: ```yaml steps: # The first stage is to run the "make" command - which will compile # the application and store the binaries in a `build` folder. Upload the # contents of that folder as an Artifact to Buildkite. - command: "make" artifact_paths: "build/*" # To prevent the "make test" stage from running before "make" has finished, # separate the command with a "wait" step. - wait # Before running `make test`, download the artifacts created in # the previous step. To do this, use `buildkite-agent artifact # download` command. - command: | mkdir build buildkite-agent artifact download "build/*" "build/" make test # By putting commands next to each other, you can make them run in parallel. - command: | mkdir build buildkite-agent artifact download "build/*" "build/" make lint - block: "Deploy to production" - command: "scripts/deploy.sh" ``` Once your build pipelines are set up, you can update step labels to something more fun than plain text (see our [extensive list of supported emojis](https://github.com/buildkite/emojis)). :smiley: If you have many pipelines to migrate or manage at once, you can use the [Update pipeline](/docs/apis/rest-api/pipelines#update-a-pipeline) REST API. ##### Steps and tasks `command` steps are Buildkite's version of the _Command Task_ in Bamboo. They can run any commands you like on your build server, whether it's `rake test` or `make`. Buildkite doesn't have a general concept of _Tasks_. It's up to you to write scripts that perform the same work as your Bamboo Jobs.
For example, suppose you had a set of Bamboo Tasks that ran your specs and your cucumber tests. You can rewrite these as a single script and commit it to your repository. The Buildkite agent takes care of checking out the repository for you before each step, so the script would be as follows: ```bash #!/bin/bash # These commands are run within the context of your repository echo "--- Running specs" rake specs echo "--- Running cucumber tests" rake cucumber ``` If you'd like to learn more about how to write build scripts, see [Writing build scripts](/docs/pipelines/configure/writing-build-scripts). To trigger builds in other pipelines, you can use `trigger` steps. This way, you can create dependent pipelines. See the [trigger steps docs](/docs/pipelines/configure/step-types/trigger-step) for more information. ##### Remote and Elastic agents The [Buildkite agent](/docs/agent) replaces your Bamboo _Remote Agents_. You can install agents onto any server to run your builds. In Bamboo, you can target specific agents for your jobs using their _Capabilities_, and in Buildkite, you target them using [meta-data](/docs/agent/cli/reference/meta-data). Like _Elastic Bamboo_, Buildkite can also manage a fleet of agents for you on AWS using the [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack). Buildkite doesn't limit the number of agents you can run at any one time, so by using the AWS Stack, you can auto-scale your build infrastructure, going from 0 to 1000s of agents within moments. ##### Authentication and permissions Buildkite supports SSO with a variety of different providers, as well as custom SAML setups. See the [SSO support guide](/docs/platform/sso) for detailed information. For larger teams, it can be useful to control which users have access to which pipelines. Organization admins can enable Teams in the [organization's team settings](https://buildkite.com/organizations/~/teams).
--- ### Overview URL: https://buildkite.com/docs/pipelines/best-practices #### Best practices The _Best practices_ section outlines recommended practices and guidelines for getting set up, designing, operating, and scaling Buildkite Pipelines effectively. Implementing these practices and guidelines will help you get up to speed, help ensure reliability and maintainability, and help you avoid common pitfalls when using Buildkite Pipelines (Pipelines). These guidelines assume familiarity with Pipelines [terminology](/docs/pipelines/glossary) and an understanding of [Pipelines architecture](/docs/pipelines/architecture). --- ### Pipeline design and structure URL: https://buildkite.com/docs/pipelines/best-practices/pipeline-design-and-structure #### Pipeline design and structure This guide distills practical patterns for designing Buildkite pipelines that are maintainable and scalable as your codebase and teams grow. ##### Keep pipelines focused and modular - Start simple, then evolve: * Begin with [static pipelines](/docs/pipelines/create-your-own) for clarity and quick onboarding. * Move to [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) as your repositories and requirements grow to avoid YAML sprawl and enable conditional generation of steps at runtime. - Separate concerns: * Keep build, test, security, packaging, and deploy concerns in distinct steps or groups. * Use small, composable scripts called by steps rather than embedding complex logic inline. > 📘 > If you are coming to Buildkite Pipelines from a different CI/CD platform and would like to continue using matrix steps, know that [matrix steps](/docs/pipelines/configure/step-types/command-step#matrix-attributes) in Buildkite Pipelines don't work exactly the same way - not all steps in the matrix will always be executed. Instead, we recommend re-formatting your matrix steps as dynamic steps.
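As a minimal sketch of re-formatting a matrix as dynamic steps, the script below generates one explicit command step per target and prints pipeline YAML to stdout, ready to be piped into `buildkite-agent pipeline upload`. The script path, the target list, and `scripts/build.sh` are assumptions for illustration, not official tooling:

```shell
#!/bin/bash
# Hypothetical .buildkite/pipeline.sh: expands a "matrix" of targets into
# explicit steps at build time. Upload the result with:
#   .buildkite/pipeline.sh | buildkite-agent pipeline upload
set -euo pipefail

generate_steps() {
  # The target list is an assumption for this sketch; derive it however
  # suits your project (for example, from changed paths or a config file).
  local targets="linux macos windows"
  echo "steps:"
  for target in $targets; do
    printf '  - label: ":hammer: Build %s"\n' "$target"
    printf '    command: "scripts/build.sh %s"\n' "$target"
  done
}

generate_steps
```

Because the steps are generated by ordinary code, you can skip, reorder, or add targets conditionally, which is exactly what a static matrix cannot do.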
##### Optimize monorepo builds using change scoping - Use the agent's `if_changed` feature or the official [Monorepo diff plugin](https://buildkite.com/resources/plugins/buildkite-plugins/monorepo-diff-buildkite-plugin/) to selectively build and test affected components. Learn more in [Working with monorepos](/docs/pipelines/best-practices/working-with-monorepos). - Use the `skip` condition to programmatically bypass individual steps, or use conditional logic in dynamic pipeline uploads to selectively generate only the necessary steps. ##### Prioritize fast feedback loops - Maximize [parallelism](/docs/pipelines/configure/workflows/controlling-concurrency#concurrency-and-parallelism) - split independent jobs and shards. Use parallelism for test sharding and cache warmers. - Put quick, failure-prone checks first - for example, schema validations, `codegen`, linting, type checks, and the fastest unit tests. Use the `depends_on` attribute to run independent fast checks in parallel before slower dependent steps. Use the [fast-fail](/docs/pipelines/configure/step-types/command-step#fast-fail-running-jobs) feature to automatically cancel any remaining jobs as soon as any job in the build fails. - Use [branch filters](/docs/pipelines/configure/workflows/branch-configuration#pipeline-level-branch-filtering) and `if` conditions for conditional execution - to skip unnecessary work in forks, release branches, draft PRs, and so on. Minimize [wait steps](/docs/pipelines/configure/step-types/wait-step) as they serialize execution - only use them when dependencies truly require it. Consider whether `depends_on` can replace `wait` for more granular parallelism in your pipelines. - Use [annotations](/docs/agent/cli/reference/annotate) for build summaries that help with debugging - for example, link to logs, JUnit pass/fail overviews, and flake reports. - Customize exit codes in your auto-retry rules so that legitimately failed builds aren't automatically retried.
- Use [auto-retry](/docs/pipelines/configure/retry#retry-attributes-automatic-retry-attributes) strategically to identify _all_ kinds of flakiness - beyond just flaky tests (that can be identified using [Test Engine](/docs/test-engine)). Example retry configuration: ```yaml retry: automatic: - exit_status: -1 # agent lost limit: 2 - exit_status: 255 # infrastructure issue limit: 1 ``` ###### Structure YAML for clarity - Use short, clear, human-readable labels with consistent prefixes and emoji for quick scanning. - Group steps to collect related phases and present a clean top-level pipeline. - Use descriptive `key` attributes on steps wherever possible to enable clear dependency declarations with `depends_on` and make selective reruns easier. - Leave comments for non-obvious logic and custom exit codes; explain tricky `if` conditions, environment dependencies, and ordering constraints. Design steps to be independently runnable where possible. Here's an example group step for security tests that demonstrates clear labels and helpful comments: ```yaml steps: - group: ":lock: Security Tests" key: "security-tests" steps: - label: ":microscope: Dependency Scan · Snyk" key: "dependency-scan" command: | snyk test --json-file-output=snyk-results.json artifact_paths: - "snyk-results.json" - label: ":package: Container Scan · Trivy" key: "container-scan" command: | trivy image --format json --output trivy-results.json myapp:latest artifact_paths: - "trivy-results.json" - label: ":key: Secret Scan · Gitleaks" key: "secret-scan" command: | gitleaks detect --report-path gitleaks-report.json artifact_paths: - "gitleaks-report.json" - wait: ~ continue_on_failure: true # allows the pipeline to continue even if security checks fail - label: ":bar_chart: Aggregate Security Results" depends_on: - "security-tests" command: | echo "All security tests completed. Review results above."
``` ###### Ownership and deployment - Use [block steps](/docs/pipelines/configure/step-types/block-step) as explicit approvals between stages. Attach change summaries and release notes to the block. - Consider splitting large pipelines into smaller, purpose-specific pipelines using [trigger steps](/docs/pipelines/configure/step-types/trigger-step). This enables independent ownership, versioning, and evolution of different deployment stages or environments. - Define `CODEOWNERS` for pipeline files and generation code. Require reviews for changes to core templates. - Version your [pipeline templates](/docs/pipelines/governance/templates) and [custom plugins](/docs/pipelines/integrations/plugins/writing). Roll them out with a changelog for tracking changes. - Implement environment isolation - separate credentials and secrets per environment [using environment hooks](/docs/pipelines/security/secrets/managing#without-a-secrets-storage-service-exporting-secrets-with-environment-hooks) or secret managers. Never reuse production credentials in CI. You can learn more about handling of credentials and other secrets in [Secrets management](/docs/pipelines/best-practices/secrets-management). --- ### Agent management URL: https://buildkite.com/docs/pipelines/best-practices/agent-management #### Agent management best practices This page covers best practices for effective management of [Buildkite agents](/docs/agent). Buildkite agents execute your pipeline's jobs. The right infrastructure, queue layout, and lifecycle policies for your Buildkite agents determine the security, speed, and cost of your agent fleet. ##### Choosing the right architecture Buildkite agents can run on local machines, cloud compute, container schedulers, and serverless infrastructure. Choose based on your workload characteristics, cost constraints, and operational maturity. Many teams adopt a hybrid approach, combining different stacks for different workload types. 
| Stack | Best for | Key benefits | | ----- | -------- | ------------ | | **Cloud compute** | High utilization, disk-heavy jobs | Bin-pack multiple agents, warm images, large cache support | | **Containers (Kubernetes/ECS)** | Elastic isolation per job, burst isolation | Fast autoscaling, clean environments, strong isolation | | **Buildkite hosted agents** | Speed to value, zero ops, bursty workloads | Fully managed, isolated clusters, per-minute billing | | **Hybrid approach** | Cost optimization and accounting for different use cases for different teams | Provides the best agent infrastructure for your particular needs | The following sections provide a more detailed overview of each architecture type for Buildkite agents, to help you choose what's right for your Buildkite organization. ###### Cloud compute Run multiple agents per instance to maximize cost efficiency and enable heavy caching. **Pros:** - Strong isolation with predictable performance - Warm images reduce job startup time - Compatible with spot instances for cost savings - Support for large disk caches and GPU/TPU workloads **Cons:** - Additional operational overhead to patch and maintain instances - Cost inefficiency when agents are under-utilized - Slower agent spin-up times compared to other agent architectures Learn more in [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack). ###### Containers (Kubernetes, ECS) You can deploy ephemeral agents per job for maximum isolation and rapid scaling, or long-running agents that stay alive between jobs for improved performance through warm starts and persistent caching.
**Pros:** - Fast spin-up with fine-grained autoscaling - Clean environments reduce build flakiness - Native resource limits and multi-tenant isolation **Cons:** - Pulling large images can increase job startup latency - Requires cluster expertise and ongoing platform maintenance - Limited access to large persistent disk caches per job Learn more in [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s). ###### Buildkite hosted agents [Buildkite hosted agents](/docs/agent/buildkite-hosted) provide fully managed infrastructure with isolated clusters and minimal operational overhead. **Pros:** - Fully managed infrastructure with zero operational overhead - Built-in caching for [Git mirrors](/docs/agent/self-hosted/configure/git-mirrors) and containers, as well as attachable [Cache volumes](/docs/agent/buildkite-hosted/cache-volumes#container-cache-volumes) for temporary data storage - Isolated clusters that provide strong security boundaries - Per-minute billing with automatic scaling for bursty workloads - Ideal for highly parallel test suites **Cons:** - Hosted agents run outside your private network boundary, so they may not meet strict compliance or data-residency requirements - Less control over hardware configuration and OS versions than in self-managed compute - Higher cost for sustained high throughput compared to self-managed compute ##### Capacity strategy There is no need to settle on a single architecture within your Buildkite organization, since you can use different stacks based on the needs and expertise of your teams.
For example, a popular approach among Buildkite users is to have a self-managed agent fleet that is based on either [Kubernetes](/docs/agent/self-hosted/agent-stack-k8s) or cloud compute instances ([AWS](/docs/agent/aws) or [Google Cloud Platform](/docs/agent/self-hosted/gcp)), as well as on [Buildkite macOS hosted agents](/docs/agent/buildkite-hosted/macos) due to the ease of management, clean development environments, and [optimized caching](/docs/agent/buildkite-hosted/cache-volumes) that the latter provide. Different teams in those Buildkite organizations can utilize the stacks that are better suited to their needs. Similarly, in terms of agent fleet scaling, instead of choosing between static and autoscaling agents exclusively, you can: - Keep one or two small static instances in your default queue for pipeline uploads, as this speeds up pipeline starts and allows proper autoscaling. - Use dedicated autoscaling queues for the actual workload. ##### Structuring clusters and queues You should organize [clusters](/docs/pipelines/security/clusters) as security boundaries and [queues](/docs/agent/queues) for workload routing. Use separate queues and a small subset of agents to trial new architectures (for example, [Buildkite hosted agents](/docs/agent/buildkite-hosted)) before rolling them out broadly across your Buildkite organization. Learn more about using clusters and queues in [Managing clusters](/docs/pipelines/security/clusters/manage) and [Managing queues](/docs/agent/queues/managing). ##### Agent lifecycle - Long-running agents provide caching benefits ([Git mirrors](/docs/agent/self-hosted/configure/git-mirrors), [dependencies](/docs/pipelines/configure/depends-on)): * Retire oldest agents first during scale-down * Add telemetry to detect flaky agents - Ephemeral agents reduce attack surface and configuration drift. [Buildkite hosted agents](/docs/agent/buildkite-hosted/linux#agent-images) support repository caches and shared volumes.
##### Right-sizing your agent fleet - Monitor queue times with [cluster insights](/docs/pipelines/security/clusters#cluster-insights) and the [buildkite-agent-metrics](/docs/agent/self-hosted/monitoring-and-observability#buildkite-agent-metrics-cli) tool. - Use cloud-based autoscaling ([Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws), [Buildkite agent Scaler](https://github.com/buildkite/buildkite-agent-scaler), [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s)). - Maintain dedicated pools for CPU-intensive, GPU-enabled, or OS-specific workloads. - Configure [graceful termination](/docs/agent/lifecycle#signal-handling) to allow jobs to complete. - To make it easy to duplicate your agent fleet, favor agent images and configurations that can run in more than one environment. For example, you can have a single Docker image that contains the latest Buildkite agent binary, a selection of development and deployment tools, and a config that reads information such as queues or tags from environment variables. You could then run such an image as Kubernetes agents, ECS agents, or in a Docker setup on a virtual machine. ##### Resilience and redundancy Strive to have an architecture that allows you to run agents in multiple regions or on a secondary platform, so that critical queues keep running during outages. For example, instead of running all the agents for a critical queue in a single availability zone, spread them across multiple availability zones. This way, if one of the availability zones experiences issues, the agents in other zones will still be able to pick up the jobs. Build out your agent architecture in such a way that a single host or cluster problem will only affect a limited (preferably small) subset of queues or pipelines, and not your entire agent fleet. ##### Security Build security into agent infrastructure from the start.
Follow least privilege principles and integrate proper secret management. It's recommended that you: - Store secrets in hooks or cloud secret stores. You can find more on proper secrets management in Buildkite Pipelines in [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) and [Secrets management](/docs/pipelines/best-practices/secrets-management) - Use short-lived tokens and [ephemeral agents](/docs/agent/buildkite-hosted/linux#agent-images) - Enforce infrastructure-as-code ([Terraform](/docs/package-registries/ecosystems/terraform), CloudFormation) For more information on agent security, see [Buildkite agent security](/docs/pipelines/best-practices/security-controls#buildkite-agent-security). --- ### Docker-based builds URL: https://buildkite.com/docs/pipelines/best-practices/docker-containerized-builds #### Containerized builds with Docker Buildkite Pipelines has built-in support for running your builds in Docker containers. Running your builds with Docker allows each pipeline to define and document its testing environment, greatly simplifying your build servers, and provides build isolation when [parallelizing your build](parallel-builds). ##### Overview To run your steps using Docker, there are two official [Buildkite plugins](/docs/pipelines/integrations/plugins): the [Docker Compose plugin](https://github.com/buildkite-plugins/docker-compose-buildkite-plugin) and the [Docker plugin](https://github.com/buildkite-plugins/docker-buildkite-plugin). The [Docker Compose plugin](https://github.com/buildkite-plugins/docker-compose-buildkite-plugin) supports repositories with `docker-compose.yml` files, projects that use multiple containers or have dependent services, and building docker images inside pipeline steps. The [Docker plugin](https://github.com/buildkite-plugins/docker-buildkite-plugin) supports single container applications, and involves less setup than the Docker Compose plugin. 
It doesn't allow the use of `docker-compose.yml` files, and doesn't support advanced operations like building images. ##### Docker Hub rate limits If you're using Docker with Docker images hosted on Docker Hub, note that as of 2nd November 2020 there are [strict rate limits](/docs/pipelines/integrations/other/docker-hub) for image downloads. ##### Creating a Docker Compose configuration file For most projects, we recommend using [Docker Compose](https://docs.docker.com/compose/) as it allows each pipeline to define its own `docker-compose.yml` with dependent containers and environment variables to be passed through. Here's an example of a `docker-compose.yml` file for a Ruby on Rails application that depends on Postgres, Redis, and Memcache: ```yml version: '2' services: db: image: postgres redis: image: redis memcache: image: memcached app: build: . working_dir: /app volumes: - .:/app depends_on: - db - redis - memcache environment: PGHOST: db PGUSER: postgres REDIS_URL: redis://redis MEMCACHE_SERVERS: memcache ``` Mounting `.` (the directory of the current build) as a volume in the container allows any changes from inside the container to be visible to the agent. This is required if you want to upload artifacts that were created in the container. ##### Configuring the build step This example runs a test suite in the `app` service using the [Docker Compose plugin](https://github.com/buildkite-plugins/docker-compose-buildkite-plugin): ```yml - name: "Docker Test %n" command: test.sh plugins: - docker-compose#v5.11.0: run: app ``` This is the equivalent of running: ```bash docker-compose run app test.sh ``` For more examples and information on using this plugin, have a look at the [Docker Compose Plugin on GitHub](https://github.com/buildkite-plugins/docker-compose-buildkite-plugin). The Buildkite agent also has support for running build steps in an existing Docker image. To use the Docker plugin, add the `plugins` attribute to your command step.
In this example, the `yarn` commands will be run inside a Docker container using the `node:8` Docker image: ```yml steps: - command: yarn install && yarn run test plugins: - docker#v5.13.0: image: "node:8" workdir: /app ``` There are many configuration options available for the Docker plugin. For the complete list, see the readme for the [Docker Buildkite Plugin on GitHub](https://github.com/buildkite-plugins/docker-buildkite-plugin). > 📘 Pinning plugin versions > Specifying the version of your plugin using the `plugin-name#vx.x.x` format is recommended to ensure that no changes are introduced without your knowledge. ##### Pipeline templates using Docker To see more examples of how Docker is used in Buildkite pipelines, browse the [Docker templates](https://buildkite.com/pipelines/templates?platform=docker). ##### Creating your own Docker infrastructure If your team has significant Docker experience, you might find it worthwhile to invoke your own runner scripts rather than using the simpler built-in Docker support. To do this, see the [job lifecycle hooks](/docs/agent/hooks#job-lifecycle-hooks) and [parallel builds](parallel-builds) documentation. ##### Adding buildkite-agent to the Docker group On the agent machine, to allow `buildkite-agent` to use the Docker client, you'll need to ensure its user has the necessary permissions. For most platforms, this means adding the `buildkite-agent` user to your system's `docker` group, and then restarting the Buildkite agent to ensure it is running with the correct permissions. See your platform's [Docker installation instructions](https://docs.docker.com/installation/) for more details. ##### Adding a cleanup task Over time, your Docker host's file system can fill up with unused images. It's recommended to schedule Docker's [system prune](https://docs.docker.com/engine/reference/commandline/system_prune) command to run on a daily basis, which can remove all unused containers, networks, and images.
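One way to schedule this is with a system cron entry on each Docker host. This is a sketch to adapt to your machines: the file path, schedule, log location, and the 72-hour `--filter` window are assumptions; `docker system prune` and its `--force` and `--filter` flags are real:

```shell
# Example /etc/cron.d/docker-prune entry (assumed path): prune daily at 03:00.
# --force skips the confirmation prompt; the "until" filter keeps anything
# created in the last 72 hours, so recently used images stay cached between builds.
0 3 * * * root docker system prune --force --filter "until=72h" >> /var/log/docker-prune.log 2>&1
```

Tune the filter window to your build cadence: a shorter window reclaims disk sooner, at the cost of more image pulls on the next build.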
--- ### Parallelizing builds URL: https://buildkite.com/docs/pipelines/best-practices/parallel-builds #### Parallel builds Running a build's steps in parallel is a way to decrease your build's total running time. This guide will show you how to use multiple agents and job parallelism to increase the speed of your builds. [Command steps](/docs/pipelines/configure/step-types/command-step) run in parallel by default. If you define multiple steps and [run multiple agents](#running-multiple-agents), the steps will run at the same time across your agents. If you don't want your steps to run at the same time, you can add [wait steps](/docs/pipelines/configure/step-types/wait-step) or use [dependencies](/docs/pipelines/configure/depends-on). For example, you could have a test step and a deploy step, with a wait step in between. A single command step can also be broken up into many [parallel jobs](#parallel-jobs). For example, a long-running test suite can be split into many parallel pieces across multiple agents, reducing the total run time of your build. ##### Running multiple agents There are two ways to scale your build agents: horizontally across multiple machines, or vertically on a single machine. You can even run many agents per machine across many machines. The steps for running multiple agents are slightly different for each platform. Automated installers and detailed instructions can be found in the [installation](/docs/agent/self-hosted/install) section. But the simplest example is to use the [`spawn` option](/docs/agent/cli/reference/start#spawn) when starting the agent: ```bash # After running the standard install instructions... # Start five agents buildkite-agent start --spawn 5 ``` This starts a single agent process that can run up to five jobs at the same time. The start command can also be run multiple times with different configurations.
For example, to change the [queue](/docs/agent/queues):

```bash
buildkite-agent start --tags queue=test

# In another window, or tab
buildkite-agent start --tags queue=deploy
```

###### Coordinating multiple agents

> 🛠️ Experimental feature
> To use it, set `experiment="agent-api"` in your [agent configuration](/docs/agent/self-hosted/configure#experiment).
> This requires Agent v3.47.0 or later.

Multiple agents on a single host can sometimes interfere with one another. For example, a pipeline might contain commands like `docker system prune` or `apt upgrade`, but these commands fail if another job runs the same command at the same time. To coordinate access to shared resources on the same host, you can use agent locks. Locking is advisory (nothing prevents a buggy command from ignoring a lock), but it can help avoid multiple agents interfering with each other.

Here's how you could use locks in a script to make sure a command is run by only one agent at a time:

```bash
# Acquire the lock called "docker prune", and store the token.
token=$(buildkite-agent lock acquire "docker prune")

# Release the lock on exit, whether or not the command succeeds.
trap 'buildkite-agent lock release "docker prune" "${token}"' EXIT

# Once the lock is acquired, proceed to run the command - in this example, docker system prune
docker system prune
```

###### Multiple agents on many machines

The secret to fast builds is running as many build agents as you can. The best way to do that is to have many machines running build agents. These machines can be anything, from your laptop or a few spare computers in your office to a fleet of thousands of cloud compute instances. The Buildkite agent should run on any hardware and any cloud compute provider. It is built to be flexible, and can be composed in any way that suits the platform, infrastructure, or workload.
The [installation instructions](/docs/agent/self-hosted/install) demonstrate how to run the Buildkite agent across various platforms. For example, you could start several [Google Cloud Compute instances](/docs/agent/self-hosted/gcp), then install and start build agents. You can automate these instructions using infrastructure as code tools like [Terraform](https://www.terraform.io), and then [add auto-scaling rules](#auto-scaling-your-build-agents) so you always have enough capacity. The [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) provides a pre-built CloudFormation Stack for AWS that runs multiple auto-scaling agents.

##### Parallel jobs

`parallelism` is an attribute on a single [command step](/docs/pipelines/configure/step-types/command-step) which causes it to be split into many jobs. Those jobs will be the same except for having a parallel index and count. They share the same dependencies and agent tags.

To run the same step in parallel across all five agents, set the `parallelism` field for a single build step:

```yaml
steps:
  - command: "tests.sh"
    parallelism: 5
```

Update the name of the step to include `%n`, like the example below. This will include a number at runtime so that you can differentiate between the parallel build jobs.

```yaml
steps:
  - command: "tests.sh"
    label: "Test %n"
    parallelism: 5
```

You can choose from the following parallel job index label helpers:

- `%n` to display the parallel job index starting at `0`.
- `%N` to display the parallel job index starting at `1`.
- `%t` to display the total number of parallel jobs in the step.

Now that the pipeline is configured, create a new build:

[Image: build.png]

If you inspect the first job's environment variables you'll find:

```
BUILDKITE_PARALLEL_JOB=0
BUILDKITE_PARALLEL_JOB_COUNT=5
```

The `BUILDKITE_PARALLEL_JOB` environment variable stores the index of each parallel job created from a parallel build step, starting from 0.
For a build step with `parallelism: 5`, the value would be 0, 1, 2, 3, and 4 respectively. The `BUILDKITE_PARALLEL_JOB_COUNT` environment variable stores the total number of jobs created from this step for this build. You can use these two environment variables to divide your application's tests between the different jobs.

##### Libraries

For best results, Buildkite recommends using the Test Engine Client ([bktec](https://github.com/buildkite/test-engine-client)) tool, which supports parallel jobs. bktec uses your Test Engine test suite data to provide intelligent test splitting and automatic management of flaky tests. For more information, see [Speed up builds with the Test Engine Client](/docs/test-engine/speed-up-builds-with-bktec) and its [configuration options](/docs/test-engine/bktec/configuring).

Other libraries that have built-in support for the `BUILDKITE_PARALLEL_JOB` and `BUILDKITE_PARALLEL_JOB_COUNT` environment variables are:

- [Knapsack](https://github.com/ArturT/knapsack)
  Knapsack is a Ruby gem for automatically dividing your tests between parallel jobs, as well as making sure each job runs in comparable time. It supports RSpec, Cucumber, and minitest.
- [Knapsack Pro](https://knapsackpro.com/?utm_source=buildkite&utm_medium=docs&utm_campaign=buildkite-parallel-builds)
  A commercially supported version of Knapsack that provides a hosted service for test timing data and additional job distribution modes for Ruby, JavaScript, and more. See the [README](https://github.com/KnapsackPro/knapsack_pro-ruby?tab=readme-ov-file#knapsack_pro-ruby-gem) and [step-by-step tutorial](http://docs.knapsackpro.com/2017/auto-balancing-7-hours-tests-between-100-parallel-jobs-on-ci-buildkite-example) for Ruby setup instructions and example pipelines. For other programming languages, please check [integrations](https://docs.knapsackpro.com/integration/).
- [Shardy McShardFace](https://www.npmjs.com/package/shardy-mc-shard-face)
  Shardy McShardFace is an npm package for dividing your tests between parallel jobs. It shards as evenly as possible (uneven splits end up in the tail shards), supports sharding fewer items than the parallelism count, and distributes items into shards using a seeded random number generator, providing random but stable distribution. See their [README](https://github.com/joscha/ShardyMcShardFace?tab=readme-ov-file#shardymcshardface) for more information.

##### Isolated jobs

You can safely run multiple build jobs on a single machine, as the agent runs each build in its own checkout directory. You'll still need to ensure your application supports running in parallel on the same machine, and doesn't try to write to any shared resources at the same time (such as modifying the same database at the same time).

One convenient way of achieving build job isolation is to use the agent's built-in [Docker Compose support](docker-containerized-builds), which will run each job inside a set of completely isolated Docker containers.

##### Auto-scaling your build agents

In addition to the [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) (which has built-in support for auto-scaling), we provide a number of APIs and tools you can use to auto-scale your own build agents:

- The [GraphQL API](/docs/apis/graphql-api) allows you to efficiently fetch your organization's scheduled jobs count, agents count, and details about each agent.
- With the [Pipelines REST API](/docs/apis/rest-api/pipelines) and [Agents API](/docs/apis/rest-api/agents) you're able to fetch each pipeline's job count, and information about each agent.
- [Agent priorities](/docs/agent/self-hosted/prioritization) allow you to define which agents are assigned work first, such as high-performance ephemeral agents.
- [Agent queues](/docs/agent/queues) allow you to divide your agent pools into separate groups for scaling and performance purposes.
- The [buildkite-agent-metrics](/docs/agent/self-hosted/monitoring-and-observability#buildkite-agent-metrics-cli) tool allows you to collect your organization's Buildkite metrics and report them to a range of backends including AWS CloudWatch, StatsD, Prometheus, and OpenTelemetry.

Using these tools you can automate your build infrastructure, scale your agents based on demand, and massively reduce build times using job parallelism.

###### Overloaded single steps

Avoid cramming unrelated tasks into one step, for example:

```yaml
# ❌ Bad - Mixing unrelated concerns
- label: "Build and security scan and deploy"
  command: |
    docker build -t myapp .
    trivy image myapp
    docker push myapp:latest
    kubectl apply -f k8s/deployment.yaml

# ✅ Good - Separate logical concerns
- label: ":docker: Build application"
  key: "build"
  command: "docker build -t myapp ."

- label: ":shield: Security scan"
  key: "security-scan"
  command: "trivy image myapp"
  depends_on: "build"

- label: ":rocket: Deploy to production"
  command: |
    docker push myapp:latest
    kubectl apply -f k8s/deployment.yaml
  depends_on:
    - "build"
    - "security-scan"
```

The "bad" example crams together building, security scanning, and deployment, which are three different concerns that you'd want to handle separately, potentially with different permissions, agents, and failure handling strategies. Cramming more tasks into one step also reduces the ability of the pipeline to scale and take advantage of multiple agents. Splitting steps makes the pipeline logically easier to understand, takes advantage of Buildkite's scalable agents, and makes it easier to troubleshoot when something breaks.
###### Controlled parallelism and concurrency Balance parallel execution for speed while managing resource consumption and costs: **Step-level parallelism (`parallelism` attribute):** - Set reasonable limits on the `parallelism` attribute for individual steps based on your agent capacity. - Consider that each parallel job consumes an agent, so `parallelism: 50` requires 50 available agents. - Monitor queue wait times when using high parallelism values to ensure adequate agent availability. **Build-level concurrency:** - While running jobs in parallel across different steps speeds up builds, be mindful of your total agent pool capacity. - Buildkite has default limits on concurrent steps per build to prevent resource exhaustion. - Design pipeline dependencies (`wait` steps) to balance speed with resource availability. **Example of controlled parallelism:** ```yaml steps: - label: "Unit Tests" command: npm test parallelism: 10 # Reasonable for most agent pools - wait - label: "Integration Tests" command: npm run test:integration parallelism: 5 # Lower parallelism for resource-intensive tests ``` --- ### Working with monorepos URL: https://buildkite.com/docs/pipelines/best-practices/working-with-monorepos #### Working with monorepos A monorepo development strategy means that the code for multiple projects is stored in a single, centralized version-controlled repository. This strategy provides advantages like easier code sharing, unified versioning, and consistent tooling, but it also poses challenges such as longer build times and potential conflicts if not managed effectively. This page covers approaches and best practices for effectively managing and running monorepos. ##### Approaches to running monorepos All such approaches start with detecting changes in your monorepo, usually at the folder level. 
To detect these changes, you can pass the `--apply-if-changed` option to the Buildkite agent's [pipeline upload command](/docs/agent/cli/reference/pipeline) (see [`--apply-if-changed`](/docs/agent/cli/reference/pipeline#apply-if-changed) in the agent CLI reference). This option applies the [`if_changed` attribute](/docs/pipelines/configure/step-types/command-step#agent-applied-attributes) set on your [command](/docs/pipelines/configure/step-types/command-step#agent-applied-attributes), [group](/docs/pipelines/configure/step-types/group-step#agent-applied-attributes), or [trigger](/docs/pipelines/configure/step-types/trigger-step#agent-applied-attributes) steps.

> 📘
> In Buildkite Pipelines, you can structure your monorepo pipeline as a single pipeline that orchestrates other pipelines by triggering them, or as a single pipeline containing many steps. Both approaches have tradeoffs. Some users prefer the clean separation that one pipeline triggering another provides, while others prefer all their steps to run conditionally in a single pipeline.

There are two preferred approaches to running monorepos with Buildkite Pipelines:

- **Static**: a single diff pipeline triggers different static pipelines in your monorepo based on what parts of the monorepo were changed.
- **Dynamic**: [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) inject specific steps into a single pipeline based on the changes in the monorepo. You will need to run Bash scripts to inject steps according to the changes. You can also use the [Buildkite SDK](/docs/pipelines/configure/dynamic-pipelines/sdk) to inject steps dynamically in one of its supported programming languages.

Now, let's look into implementing these possible approaches to working with monorepos in more detail.
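As a sketch of the `if_changed` mechanism, the following pipeline (with illustrative paths, labels, and commands that are assumptions, not part of any real project) skips each step unless matching files changed, provided the pipeline is uploaded with `--apply-if-changed`:

```yaml
steps:
  - label: "Test app"
    command: "scripts/test-app.sh"
    if_changed: "app/**"

  - label: "Test docs"
    command: "scripts/test-docs.sh"
    if_changed: "docs/**"
```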
##### Static approach The static approach to working with monorepos involves creating a single orchestrating pipeline that triggers other pipelines (predefined for different scenarios) in your monorepo. A typical example of the static approach would be a single main pipeline that contains the [Monorepo diff plugin](https://buildkite.com/resources/plugins/buildkite-plugins/monorepo-diff-buildkite-plugin/) and, depending on what files get modified in the repository, this pipeline will trigger other pipelines. You can check out the [Monorepo example](https://buildkite.com/resources/examples/buildkite/monorepo-example/) pipeline to see a practical implementation. > 🚧 > In the static monorepo approach, the triggered pipelines must only be triggered by the dedicated triggering pipeline and _never_ directly via the Buildkite interface, API, or other means. Direct execution bypasses the change detection logic, causing the pipeline to run without awareness of the changes in the monorepo, or the necessary build context from the triggering pipeline. This might lead to a number of unwanted consequences, such as build artifacts being generated with incorrect library versions. ##### Dynamic approach The dynamic approach to working with monorepos involves having dynamic pipelines that inject specific steps in the programming language of your choice into a single pipeline in your monorepo based on the detected changes. When implementing the dynamic pipelines approach, you can use either: - [Direct scripting](/docs/pipelines/configure/dynamic-pipelines) - [The Buildkite SDK](/docs/pipelines/configure/dynamic-pipelines/sdk) A useful way to implement dynamic pipelines is to upload the generated YAML steps file as an artifact using the `buildkite-agent artifact upload` command. This allows you to download and review that YAML file later to see exactly what was generated. 
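The artifact pattern above can be sketched as follows. The generator script name and output file name are assumptions; the `buildkite-agent` commands are the real CLI calls:

```bash
# Generate the dynamic steps, keep a copy for later review, then upload.
./scripts/generate-steps.sh > steps.yml   # hypothetical generator script
buildkite-agent artifact upload steps.yml
buildkite-agent pipeline upload steps.yml
```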
> 📘 Dry-run preview
> If you want to preview the pipeline before it's uploaded, you can use the `buildkite-agent pipeline upload --dry-run` command to output the final YAML without running it.

Buildkite customers who use [Bazel](/docs/pipelines/tutorials/bazel) and [Gradle](https://gradle.org/) prefer the dynamic approach, since these build systems allow you to target certain steps once the diff that needs to be built is identified. These tools also allow you to map which tests to run on the paths that changed.

###### Implementation with dynamic pipelines

You can see a hands-on implementation of the dynamic pipelines-based approach in this [Bazel monorepo example](https://github.com/buildkite/bazel-monorepo-example). The example analyzes Git changes to determine which projects need to be built, then constructs a dependency graph to ensure that the projects build in the correct order.

How the example works:

1. Change detection stage - the pipeline analyzes the Git diff to identify changed files.
1. Dependency resolution stage - a dependency graph is built to determine which projects need building.
1. Pipeline generation stage - a dynamic pipeline with proper job dependencies is created.
1. Parallel execution - independent projects build in parallel, respecting dependencies.

Learn more about working through this example in [Creating dynamic pipelines and build annotations using Bazel](/docs/pipelines/tutorials/dynamic-pipelines-and-annotations-using-bazel). This implementation is also valid if you're using the Buildkite SDK.

###### Using the Buildkite SDK

The [Buildkite SDK](/docs/pipelines/configure/dynamic-pipelines/sdk) provides a library of methods for a number of supported languages (JavaScript/TypeScript, Python, Go, and Ruby), which you can use to dynamically generate Buildkite pipeline steps in YAML or JSON format, to upload to your Buildkite pipeline.
The Buildkite SDK acts as a translation layer, making it easier to generate Buildkite pipeline steps to re-upload to your pipeline, rather than having to manually script these dynamic pipeline steps yourself. For example, if you need to detect changes in a Bazel- or Gradle-based monorepo, you could use the Buildkite SDK to dynamically generate the required pipeline steps based on the execution outcomes from your Bazel or Gradle build scripts.

##### Combined approach

In your CI/CD process, you don't need to limit yourself to a single one of these approaches when working with a monorepo. Many customers, especially those with large Buildkite organizations, mix and combine the static and dynamic approaches based on their specific requirements.

##### Pipeline step count guidance

When designing monorepo pipelines, consider keeping the number of steps in a single pipeline build to 500 or fewer, to ensure that the UI and build processing perform well. If your use case requires a large number of steps in a build, consider consolidating some steps, splitting work across multiple pipelines, or using an orchestrator pattern. For builds that consistently need step counts well beyond this range, [contact us](mailto:sales@buildkite.com) to discuss your requirements.

> 📘
> Each step can contain multiple jobs when using the [`parallelism` attribute](/docs/pipelines/configure/step-types/command-step#label), so the step count doesn't necessarily reflect the total number of jobs running.

###### Tip for large monorepos

For monorepos that could generate hundreds or thousands of steps, use an orchestrator pipeline that [dynamically generates](/docs/pipelines/configure/dynamic-pipelines) only the steps needed for each build. Upload steps in batches or [trigger](/docs/pipelines/configure/step-types/trigger-step) child pipelines to avoid single-build step counts growing into the thousands.
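A minimal sketch of such an orchestrator script follows. The folder layout, base branch, and per-project script name are assumptions; it emits one step per changed top-level folder, and in a real pipeline the output would be piped to `buildkite-agent pipeline upload`:

```bash
#!/bin/bash
set -euo pipefail

# List changed top-level folders (empty when the diff can't be computed).
changed=$(git diff --name-only origin/main...HEAD 2>/dev/null | cut -d/ -f1 | sort -u || true)

# Build a pipeline with one step per changed project folder.
pipeline="steps:"
for project in $changed; do
  pipeline="$pipeline
  - label: \"Build $project\"
    command: \"$project/run-ci.sh\""
done

# In a real pipeline, pipe this output to: buildkite-agent pipeline upload
echo "$pipeline"
```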
---

### Environment and dependency management

URL: https://buildkite.com/docs/pipelines/best-practices/environment-and-dependency

#### Environment and dependency management

This page covers best practices for containerized builds, dependency management, handling of secrets, and environment configuration using [Buildkite agents](/docs/agent), [queues](/docs/agent/queues), [plugins](/docs/pipelines/integrations/plugins), and [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines).

##### Build containerization for consistency

Containerization provides isolation and repeatability, ensuring that your builds run the same way across all environments. Use Docker-based steps to eliminate issues where something works locally but fails elsewhere (the "works on my machine" kind of issue) and to maintain strict control over build dependencies.

It's further recommended to:

- Use the [Docker plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-buildkite-plugin) for single containers or the [Docker Compose plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin) for multi-service builds.
- Use [multi-stage Dockerfiles](https://docs.docker.com/build/building/multi-stage/) to keep images small and secure.
- Pin base images and tags, and avoid using `latest` to prevent upstream drift.
- Align development, CI, and production images to reduce environment drift.
- Manage image pull reliability:
  * Use a [private registry](/docs/package-registries) or [Amazon Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/)/[Google Artifact Registry (formerly Container Registry)](https://docs.cloud.google.com/artifact-registry/docs/transition/transition-from-gcr) with regional mirrors.
  * Authenticate pulls with [OIDC](/docs/package-registries/security/oidc) rather than static keys.
  * Account for [Docker Hub rate limits](https://docs.docker.com/docker-hub/usage/) and use local caching on agents.

You can learn more in [Containerized builds with Docker](/docs/pipelines/best-practices/docker-containerized-builds).

##### Dependency handling

Consistent dependency management prevents build failures and ensures reproducibility across environments. It's recommended that you lock all dependencies, cache intelligently, and verify integrity to maintain build stability.

- Lock versions:
  * Commit lockfiles (`package-lock.json`, `poetry.lock`, `Gemfile.lock`, `go.mod`, `Cargo.lock`).
  * Pin plugin versions in pipelines to avoid breaking changes.
- Cache packages appropriately:
  * Scope caches to repository and dependency hash.
  * Use separate cache keys for production vs development dependencies.
  * Invalidate caches on lockfile changes.
- Verify integrity:
  * Enable checksums or signatures for package managers.
  * Generate and keep a [software bill of materials (SBOM)](https://en.wikipedia.org/wiki/Software_supply_chain) for artifacts.
- Constrain [concurrency](/docs/pipelines/configure/workflows/controlling-concurrency) when necessary:
  * For non-thread-safe tools, prefer parallel fan-out across isolated steps.

##### Handling environment values

Don't hard-code environment values. Inject configuration at runtime rather than hard-coding values in scripts or Dockerfiles. This improves flexibility, security, and the reusability of configurations across environments.
For example, here is a sample configuration with a non-recommended and recommended approach:

```yaml
# ❌ Non-recommended
command: "deploy.sh https://api.myapp.com/prod"

# ✅ Recommended
command: "deploy.sh $API_ENDPOINT"
env:
  API_ENDPOINT: "https://api.myapp.com/prod"
```

- Use step-level `env`, pipeline `env`, or [hooks](/docs/agent/hooks) to set values.
- Keep secrets out of `pipeline.yml` and repositories - use a secrets manager or [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets).
- Be aware of the OS's limits on environment size; opt for files instead of variables for large payloads.

##### Optimizing agent hosts and queues for environment needs

- Match your agent infrastructure to your environment requirements by creating specialized [queues](/docs/agent/queues) and minimizing host-level dependencies.
- Create queues that map to specific environments, for example by OS, CPU/RAM, GPU, network access, trust boundary, and so on.
- Keep system dependencies in containers when possible.
- If host-level tooling is required, pin versions and manage it via an [infrastructure-as-code (IaC)](https://aws.amazon.com/what-is/iac/) approach.
- Use [ephemeral agents](/docs/pipelines/glossary#ephemeral-agent) for untrusted workloads.
- Persist only necessary caches within the correct trust boundary.

##### Build script hygiene

Proper script hygiene prevents silent failures and makes debugging easier. Write robust build scripts that [fail fast](/docs/pipelines/configure/step-types/command-step#fast-fail-running-jobs) and provide clear error messages.

- Use strict Bash flags in scripts to catch errors early:
  * `set -euo pipefail`
  * Use `set -x` only for debugging
- Don't assume shell init files; explicitly configure shell behavior in your [build scripts](/docs/pipelines/configure/writing-build-scripts).
- [Fail fast](/docs/pipelines/configure/step-types/command-step#fast-fail-running-jobs) with clear exit codes.
- Surface summaries via [Buildkite annotations](/docs/agent/cli/reference/annotate) for quick feedback. ##### Reproducible Docker builds in pipelines Ensure Docker builds are consistent and traceable by pinning dependencies and labeling images with build metadata. - Keep `RUN` steps idempotent and pinned. - Avoid copying host-specific files that can change uncontrollably. - Use build arguments only when necessary and pin their values in CI. - Label images with source commit, pipeline URL, and build timestamp for traceability. Example Docker Compose step: ```yaml steps: - label: "Docker :rocket:" plugins: - docker-compose#v5.11.0: build: app image-repository: "registry.local/your-team/app" push: true config: docker-compose.ci.yml env: APP_VERSION: "${BUILDKITE_COMMIT}" ``` For more best practices for using Docker, see [Containerized builds with Docker](/docs/pipelines/best-practices/docker-containerized-builds). ##### Environment configuration patterns Establish clear patterns for managing environment configuration across your pipelines. Centralized defaults with targeted overrides reduce complexity and improve maintainability. It's recommended to: - Centralize shared environment defaults at the pipeline or queue level. - Use metadata and inputs to thread environment choices through [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines). - Validate required variables at step start and fail with actionable messages. ###### Governance and compliance touch points Integrate security and compliance checks directly into your build process to ensure artifacts meet organizational standards before deployment. - Sign and verify artifacts as part of the build. - Generate SBOMs and attach to artifacts. - Gate promotions on policy checks and required reviews. See more on governance in [Governance overview](/docs/pipelines/governance). 
##### Observability for environments Monitor and measure your build environments to identify optimization opportunities and track performance over time. - Emit key build-time environment facts as [annotations](/docs/agent/cli/reference/annotate): * Image digest and source * Toolchain versions * Cache hit ratios - Track [queue metrics](/docs/pipelines/insights/queue-metrics), build time by step, and [flake rates](/docs/test-engine). - Use this data to adjust caching and [parallelism](/docs/pipelines/configure/workflows/controlling-concurrency#concurrency-and-parallelism). --- ### Secrets management URL: https://buildkite.com/docs/pipelines/best-practices/secrets-management #### Secrets management Proper secrets management is key to the overall security of your CI/CD infrastructure. The following are some recommendations on keeping your secrets safe in your Buildkite pipelines: - Use Buildkite's native secret management tools whenever possible. Start by using the built-in [Buildkite secrets and redaction](/docs/pipelines/security/secrets/buildkite-secrets) feature or explore the [secrets plugins](/docs/pipelines/integrations/plugins/directory) available for different secret stores. - Rotate your secrets regularly. Even if a secret hasn't been compromised, regular [automated rotation](/docs/apis/managing-api-tokens#api-token-security-rotation) limits the window of opportunity if something does go wrong. - Keep secrets scoped as tightly as possible. Only expose a secret to the specific pipeline steps that actually need it. For example, don't allow test steps to have access to production deployment credentials. You can configure granular access using [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets#use-a-buildkite-secret-in-a-job) or through plugins like the [vault secrets plugin](https://buildkite.com/resources/plugins/buildkite-plugins/vault-secrets-buildkite-plugin/). - Track how your secrets are being used. 
[Audit logs](/docs/platform/audit-log) showing which steps consume which secrets help you maintain visibility into your security posture and make compliance reporting much easier when needed (for example, during compliance audits).

> 📘
> For in-depth information on security best practices for Buildkite Pipelines, see [Enforcing security controls](/docs/pipelines/best-practices/security-controls).

---

### Infrastructure as code recommendations

URL: https://buildkite.com/docs/pipelines/best-practices/iac

#### Infrastructure as Code (IaC) in Buildkite Pipelines

This page provides recommendations on managing your Buildkite organizations, pipelines, agents, and security controls entirely as code using Terraform, GitOps, and least privilege access principles.

##### Core principles

- Treat the interface as read-only - use the [dashboard](/docs/pipelines/dashboard-walkthrough) for observability and approvals, never for configuration changes.
- Store all configurations in version control with PR reviews and automated validation.
- Use [OIDC](/docs/pipelines/security/oidc) over long-lived secrets - authenticate agents to cloud providers with federated identities and rotate API tokens regularly.
- Apply minimal required permissions at every layer: organization roles, team access, queue rules, agent tokens, cloud IAM, and secret scope.
- Consider managing roles and team access with an IaC-supporting [SSO provider](/docs/platform/sso#supported-providers).
- Design for change and scalability. Use [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines), modular Terraform, and progressive rollouts with canary queues and approval gates.

##### GitOps workflow

- Propose changes exclusively using pull requests in a dedicated repository.
- Apply automated checks to validate your Terraform plan, YAML schema, and policy rules (for example, with [Open Policy Agent (OPA)](https://www.openpolicyagent.org/) and/or [Sentinel](https://developer.hashicorp.com/sentinel)).
- Require multiple approvals for production queues, team permissions, or security settings. Consider using [block steps](/docs/pipelines/configure/step-types/block-step) to manage permission levels across your organization (for example, create [teams](/docs/platform/team-management/permissions#manage-teams-and-permissions) with or without deploy permissions). - Trigger `terraform apply` through a Buildkite pipeline with machine user identity on merge. - Split Terraform state by blast radius: `org/`, `clusters/`, `pipelines/`. - Use remote state with locking (for example, S3 + DynamoDB, GCS, Terraform Cloud). - Schedule drift detection jobs via [GraphQL API](/docs/apis/graphql-api). ##### Terraform provider Use the [Buildkite Terraform provider](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs) to manage teams, pipelines, clusters, queues, agent tokens, schedules, and templates. If something is created outside Terraform, treat it as drift and import it into the state. 
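Bringing such drift under management can be sketched as a one-off `terraform import` call. The resource address and the pipeline ID placeholder here are illustrative; the Buildkite provider's documentation lists the exact import ID format for each resource:

```bash
# Import a pipeline that was created in the UI into Terraform state.
terraform import buildkite_pipeline.svc_a "<pipeline-id>"
```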
The following example shows basic provider configuration and creates a cluster, queue, and pipeline:

```hcl
terraform {
  required_providers {
    buildkite = {
      source  = "buildkite/buildkite"
      version = "~> 1.0"
    }
  }
}

provider "buildkite" {
  organization = "buildkite"
  # Use the `BUILDKITE_API_TOKEN` environment variable so the token is not committed
  # api_token = ""
}

resource "buildkite_cluster" "shared" {
  name = "shared-ci"
}

resource "buildkite_cluster_queue" "terraform" {
  cluster_id  = buildkite_cluster.shared.id
  key         = "terraform"
  description = "IaC workloads with restricted cloud access"
}

resource "buildkite_pipeline" "svc_a" {
  name                       = "svc-a"
  repository                 = "git@github.com:org/svc-a.git"
  steps                      = file(".buildkite/pipeline.yml")
  cancel_intermediate_builds = true
  skip_intermediate_builds   = true
}
```

##### Role-based access control (RBAC)

- Create a dedicated service account for Terraform with scoped API tokens (for example, tokens scoped to pipeline write permissions or team management).
- Store tokens in a secrets manager (for example, [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/), [Vault](https://registry.terraform.io/providers/hashicorp/vault/latest/docs), and so on) and rotate them quarterly, or on a schedule that suits your security posture.
- Define teams and membership in Terraform (`buildkite_team`, `buildkite_team_member`).
- Grant pipeline access per team (`buildkite_team_pipeline`), not org-wide.
- Restrict UI write access to platform teams while providing most other engineers with read-only access.

##### Secrets management

- Fetch secrets at runtime via [agent hooks](/docs/agent/hooks) (`environment`, `pre-command`) from AWS Secrets Manager, GCP Secret Manager, or Vault.
- Use OIDC plugins: the [AWS Assume Role plugin](https://buildkite.com/resources/plugins/cultureamp/aws-assume-role-buildkite-plugin/) or the [GCP Workload Identity Federation Buildkite plugin](https://buildkite.com/resources/plugins/buildkite-plugins/gcp-workload-identity-federation-buildkite-plugin/).
- Scope secrets by environment and queue. Never give CI builds access to production credentials.
- Use different IAM roles per queue and enable audit logging (using AWS CloudTrail or GCP Audit Logs).
- Redact sensitive patterns in logs and automate secret rotation with zero-downtime rollover.

##### Agents, clusters, queues

- Separate clusters by security zone (CI, production deploy, compliance), and queues by trust level, workload type, architecture, and environment. For example:
  * `default` - general CI with ephemeral agents
  * `docker` - containerized builds with Docker-in-Docker (DinD)
  * `arm64` - ARM/macOS builds
  * `production-deploy` - restricted, long-lived, audit-logged
- Prefer ephemeral agents for hermetic builds, and autoscale on queue depth. Maintain purpose-built base images (`builder`, `security-scanner`, `mobile`) and rebuild them often (for example, weekly).
- Use [agent hooks](/docs/agent/hooks) to load credentials, validate requirements, and clean up.

##### Dynamic pipelines

- Generate pipeline YAML at runtime based on changed files, repository structure, or external state. For example:

```bash
#!/bin/bash
# .buildkite/generate-pipeline.sh
# Emit steps dynamically (the generator script here is a placeholder)
# and upload them to the current build.
./generate-steps.sh | buildkite-agent pipeline upload
```

---

### Git checkout optimization

URL: https://buildkite.com/docs/pipelines/best-practices/git-checkout-optimization

> 📘
> In addition to sparse checkout and Git mirrors, for checkout optimization you can also use the [Git Shallow Clone Buildkite Plugin](https://buildkite.com/resources/plugins/peakon/git-shallow-clone-buildkite-plugin/), which sets the `--depth` flag for `git clone` and `git fetch` commands.

##### Understanding checkout defaults across platforms

The default checkout behavior in Buildkite Pipelines prioritizes completeness and flexibility.
As a result, if you're migrating to Buildkite Pipelines from another CI/CD platform, especially if you're using LFS, you might notice differences in checkout speed or behavior.

To understand how Buildkite's checkout defaults differ from other platforms in a GitHub Actions-based example (including LFS handling, shallow clones, and customization options), see [Understanding the difference in default checkout behaviors](/docs/pipelines/migration/from-githubactions#understand-the-differences-the-difference-in-default-checkout-behaviors).

##### How to monitor Git operations

Understanding where time is spent during Git checkout helps you identify bottlenecks and measure the impact of optimizations. The following approaches can help you gain visibility into Git performance across your builds.

###### OpenTelemetry tracing

The Buildkite agent emits [OpenTelemetry](/docs/pipelines/integrations/observability/opentelemetry) trace spans for checkout behavior when [tracing is enabled](/docs/agent/self-hosted/monitoring-and-observability/tracing#using-opentelemetry-tracing). Two spans are relevant to Git operations:

- **`checkout`:** Covers the entire checkout phase, including `pre-checkout` and `post-checkout` [hooks](/docs/agent/hooks).
- **`repo-checkout`:** A child span of `checkout` that isolates the Git checkout itself, excluding hook execution time.

By comparing these two spans, you can determine whether slowdowns originate from Git operations or from custom [hook](/docs/agent/hooks) logic. If you are also using the [OpenTelemetry Tracing Notification Service](/docs/pipelines/integrations/observability/opentelemetry#opentelemetry-tracing-notification-service), you can propagate traces from the Buildkite control plane through to the agent spans for an end-to-end view of build performance.

###### Checkout hooks

You can use a [checkout hook](/docs/agent/hooks) on your agents to add custom timing or instrumentation around the Git checkout phase.
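A minimal sketch of such timing follows, shown as one script for brevity; in practice `pre-checkout` and `post-checkout` are separate executable files in the agent's hooks directory, and the temp-file path is illustrative:

```shell
#!/bin/bash
# Sketch of paired agent hooks that time the Git checkout phase.

# pre-checkout hook: record when checkout began.
start_file="/tmp/checkout-start-${BUILDKITE_JOB_ID:-local}"
date +%s > "$start_file"

# post-checkout hook: compute elapsed time and forward it to monitoring.
elapsed=$(( $(date +%s) - $(cat "$start_file") ))
echo "checkout_duration_seconds=${elapsed}"
```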
For example, a `pre-checkout` hook could record a start timestamp, and a `post-checkout` hook could calculate the elapsed time and send it to your monitoring system. This approach works with any observability platform and does not require OpenTelemetry.

###### Git caching proxies

A local or network-level Git caching proxy sits between your agents and the upstream Git server, caching repository data and serving repeated clones or fetches from a local cache. Because all Git traffic flows through the proxy, it provides a natural instrumentation point for collecting metrics such as cache hit rates, clone durations, and bandwidth usage.

Two open-source options that support Git caching with built-in observability are:

- [Cachew](https://github.com/block/cachew): A protocol-aware caching proxy that maintains compressed snapshots of repositories for faster restores. It supports OpenTelemetry metrics and Prometheus integration.
- [content-cache](https://github.com/wolfeidau/content-cache): A content-addressable caching proxy that supports the Git smart HTTP protocol with pack-level caching. It exports OpenTelemetry metrics and provides Prometheus endpoints for monitoring cache effectiveness.

---

### Plugin management

URL: https://buildkite.com/docs/pipelines/best-practices/plugin-management

#### Plugin management and standardization

Buildkite [plugins](https://buildkite.com/docs/pipelines/integrations/plugins) serve as reusable building blocks that help maintain consistency and reduce repetition across pipeline configurations in a Buildkite organization.

##### Common use cases

You can extract common pipeline functionality by [writing your own plugin](/docs/pipelines/integrations/plugins/writing). Common use cases include:

- Security and compliance integration - automatically integrate with security scanning tools, compliance frameworks, or audit logging systems.
- Deployment standardization - encapsulate deployment patterns, environment-specific configurations, and rollback procedures.
- Infrastructure automation - interact with internal APIs, infrastructure provisioning systems, or monitoring platforms.
- Quality gate enforcement - enforce code quality standards, testing requirements, or documentation completeness checks.
- Artifact management - standardize packaging and distribution processes across teams.
- Performance testing - implement consistent benchmarking and performance testing procedures.
- Container workflows - standardize container image building and security scanning.

##### Plugin sources

Buildkite supports three types of plugin sources, each suited to different security and distribution requirements:

- Buildkite-maintained plugins are available in the [Buildkite plugins directory](/docs/pipelines/integrations/plugins/directory) and provide standard functionality like Docker, Docker Compose, common testing frameworks, and so on.
- Third-party plugins from the community are also available in the plugins directory. You can [get your own plugin published](/docs/pipelines/integrations/plugins/writing#publish-to-the-buildkite-plugins-directory) there as well. Maintain an allowlist of vetted community plugins that meet your security and reliability standards.
- Private organizational plugins can be created and hosted in private repositories for sensitive or proprietary functionality. Write a [private plugin](/docs/pipelines/integrations/plugins/using#plugin-sources) when you need to implement organization-specific requirements or standardize complex workflows.
Use full Git URLs to reference these plugins, for example:

```yml
steps:
  - command: deploy to production
    plugins:
      - ssh://git@github.com/your-org/deployment-plugin.git#v1.0.0:
          environment: production
          approval_required: true
      - file:///internal/monitoring-plugin.git#v2.0.0:
          alert_channels: ["#ops", "#security"]
```

##### Version management

Implement strict version management practices to ensure plugin reliability and security:

- Always pin plugins to specific versions or commit SHA values to prevent unexpected changes, for example: `docker#v3.3.0` or `my-plugin#287293c4`.
- Regularly audit and update plugin versions as part of your maintenance cycle.
- Use [YAML anchors](/docs/pipelines/integrations/plugins/using#using-yaml-anchors-with-plugins) to centralize plugin configuration and ensure consistency across pipelines.
- Monitor plugin repositories for security vulnerabilities and updates.

##### Security and access control

To maintain a secure plugin ecosystem, implement these practices:

- Use the [agent's plugin restrictions](/docs/agent/self-hosted/security#restrict-access-by-the-buildkite-agent-controller-allow-a-list-of-plugins) to allowlist approved plugins.
- Set the [`no-plugins`](/docs/agent/self-hosted/configure#no-plugins) option to disable plugins entirely on sensitive agents.
- Implement different plugin policies for different [clusters](/docs/pipelines/security/clusters) based on security requirements.
- Use separate Git repositories for different security domains.
- Implement code review processes for all plugin changes.
- Regularly audit plugin permissions, access patterns, and usage across your organization to identify potential security risks or optimization opportunities.

---

### Caching

URL: https://buildkite.com/docs/pipelines/best-practices/caching

#### Caching

Proper caching makes your builds faster and cheaper by reusing data across jobs and builds. This page covers the caching capabilities and recommended patterns for Buildkite Pipelines.
##### What to cache

Cache the following for faster builds:

- Dependency directories for your language or build tool
- Large files repeatedly downloaded from the internet
- Git mirrors, by enabling [Git mirrors](/docs/agent/self-hosted/configure/git-mirrors) on your agents
- Docker build layers, using plugins like the [Docker ECR Cache Buildkite plugin](https://github.com/seek-oss/docker-ecr-cache-buildkite-plugin) for ECR/GCR

> 📘
> Git mirrors on [Buildkite hosted agents](/docs/agent/buildkite-hosted) can be enabled with the help of [cache volumes](/docs/agent/buildkite-hosted/cache-volumes). Additionally, you can enable [queue images](/docs/agent/buildkite-hosted/linux#agent-images).

Don't cache:

- Final build artifacts that will be published elsewhere
- Test outputs that depend on current code

##### Caching strategies

- For Git checkout caching, use Git mirrors or shallow clones on persistent workers to speed up fetches. Learn more in [Git checkout optimization](/docs/pipelines/best-practices/git-checkout-optimization).
- For caching dependencies:
  * Key off the lockfile hash and platform
  * Separate build from test caches if they diverge
- For Docker layer caching:
  * Order your Dockerfile so that immutable layers (OS packages and core dependencies) come first
  * Copy lockfiles before installation to maximize cache hits
- For artifact caching, store heavyweight build outputs as artifacts between steps instead of re-building. See more in the following section.

##### Using artifacts for caching

Buildkite [build artifacts](/docs/pipelines/configure/artifacts) are files uploaded by a job that you can download in later steps or later builds. Artifacts are durable and addressable, so you can reuse previously produced files to cache common data between steps instead of re-computing them.
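A minimal sketch of this pattern, assuming an npm project; the key derivation and archive naming are illustrative, not a Buildkite convention:

```shell
#!/bin/bash
# Sketch: cache a dependency directory as a build artifact keyed by
# a checksum of the lockfile, so unchanged inputs reuse the old archive.

cache_key() {
  # Derive a short, stable key from the lockfile contents.
  sha256sum "$1" | cut -c1-16
}

restore_or_build() {
  local archive="deps-$(cache_key "$1").tar.gz"
  if buildkite-agent artifact download "$archive" . 2>/dev/null; then
    tar -xzf "$archive"              # cache hit: unpack the previous install
  else
    npm ci                           # cache miss: install fresh
    tar -czf "$archive" node_modules
    buildkite-agent artifact upload "$archive"
  fi
}

# Call from your build step, for example:
# restore_or_build package-lock.json
```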
Unlike a purpose-built cache, artifacts are:

- Build outputs with metadata and a download URL
- Retained according to your artifact storage policy
- Retrieved by path patterns, job, build number, or using the API

> 📘
> Buildkite's dedicated cache features and hosted cache volumes serve different goals and trade-offs than artifacts. Cache volumes aim for speed, with different retention and locality guarantees. Artifacts are deterministic and durable.

To use artifacts for caching:

1. Produce dependencies into a directory.
1. Compress the dependencies into a single archive keyed by an identifier that represents the inputs, for example, a lockfile checksum.
1. Upload the result as an artifact.
1. In later steps or builds, resolve the correct key (same checksum), download, and unpack.

This way, you keep downloads small and avoid re-installing dependencies when the inputs haven't changed.

##### Using cached images

Operating at scale requires cached agent images. In those images, keep only the tooling needed for specific functions and avoid monolithic images. For example, a "security" image with ClamAV, Trivy, and Snyk, or a "frontend" image with Node.js, npm, and testing frameworks.

It's also recommended to:

- Build images nightly to include system, framework, and image updates.
- Store the images in [Buildkite Packages](https://buildkite.com/packages) or cloud provider registries.
- For hosted agents, use [agent images](/docs/agent/buildkite-hosted/linux#agent-images).

##### Bazel caching

Buildkite Pipelines runs your Bazel target commands as build steps, while Bazel handles distributed compilation through its remote execution framework.

There are two main cache layers in Bazel:

- The local cache exists on the agent machine and is great for iterative builds, but is not shared across agents.
- The remote cache is shared across machines, persists between builds, and is essential for CI and large monorepos.
###### Remote cache options for Bazel

You can use the following approaches for creating and keeping a remote cache with Bazel:

- Object stores as the backend - Google Cloud Storage or AWS S3 via Bazel's HTTP cache flags.
- Managed services - [BuildBuddy](https://www.buildbuddy.io/) is a common choice for remote cache and optional remote execution.
- Self-hosted cache - [bazel-remote](https://github.com/buchgr/bazel-remote) on AWS (for example, using ECS with an S3 backend).

###### Minimal setup for Bazel caching

In `.bazelrc`, set the following:

```bash
build --remote_cache=https://

# If using GCS:
build --google_credentials=/path/to/credentials.json

# If using S3:
build --remote_upload_local_results=true
```

You can also pass `--remote_cache` on the command line per build or test invocation.

###### Using Bazel caching with Buildkite

- Bazel caching works with both hosted and self-hosted agents, but you need to ensure network access to the cache and provide credentials via the environment or pre-command [hooks](/docs/agent/hooks).
- Teams commonly layer:
  * A local repository cache in a persistent volume to skip external dependency fetches
  * A remote cache (for example, BuildBuddy or bazel-remote) for cross-machine reuse

###### Best practices for Bazel caching

- Prefer a remote cache for CI. Keep the local repository cache in a persistent volume when possible to avoid re-downloading external dependencies on ephemeral agents.
- Co-locate cache and compute to reduce latency and cost, as cache proximity matters.
- Warm the cache with representative builds. Monitor hit/miss rates using Bazel's logs and remote-cache debugging guidance.
- Avoid cache poisoning:
  * Separate development and CI caches, or treat the CI cache as a read-mostly "first tier"
  * Use tags like `no-remote-cache` on sensitive targets if needed
- Make credentials available at build time via secure secret management and pre-step hooks.
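The credential wiring above might be sketched as a `pre-command` hook that emits a per-job CI bazelrc. The variable names, cache URL, and token are illustrative and assumed to come from your secret management (for example, an `environment` hook):

```shell
#!/bin/bash
# Sketch of a pre-command hook that writes a per-job bazelrc so that
# remote-cache credentials never land in the checked-in .bazelrc.
# BAZEL_CACHE_URL/BAZEL_CACHE_TOKEN are assumed to be provided by your
# secrets backend; the defaults below are placeholders.
set -euo pipefail
BAZEL_CACHE_URL="${BAZEL_CACHE_URL:-https://cache.example.com}"
BAZEL_CACHE_TOKEN="${BAZEL_CACHE_TOKEN:-placeholder-token}"

cat > ci.bazelrc <<EOF
build --remote_cache=${BAZEL_CACHE_URL}
build --remote_header=Authorization="Bearer ${BAZEL_CACHE_TOKEN}"
EOF
```

Build steps can then run `bazel --bazelrc=ci.bazelrc build //...` to pick up the generated configuration.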
> 📘
> Ephemeral agents without persistent volumes lose local caches between jobs. You can mitigate this by using [cache volumes](/docs/agent/buildkite-hosted/cache-volumes) and a robust remote cache.

##### Hosted agents caching

[Cache volumes](/docs/agent/buildkite-hosted/cache-volumes) on [Buildkite hosted agents](/docs/agent/buildkite-hosted) are:

- Best-effort attachments, shared across steps, scoped to a pipeline
- Well-suited for simple, fast, shared caching
- High-performance NVMe on Linux and sparse bundle images on macOS
- Updated only on successful job completion and forked per job for safe concurrency

> 📘 Non-deterministic behavior
> Cache volumes on Buildkite hosted agents are [non-deterministic by nature](/docs/agent/buildkite-hosted/cache-volumes#lifecycle-non-deterministic-nature) and allow for dependency caching and Git mirror caching.
> For deterministic caching in your pipeline, use Docker images with [remote Docker builders](/docs/agent/buildkite-hosted/linux/remote-docker-builders), which allow for fast Docker builds, and the [internal container registry](/docs/agent/buildkite-hosted/internal-container-registry).

- What to cache:
  * Use cache volumes for local tool data that's expensive to refetch between ephemeral jobs, for example, the Bazel repository cache and custom CLIs.
  * Prefer a remote cache (for example, BuildBuddy or bazel-remote on AWS) for cross-machine reuse. Treat local volumes as best-effort accelerators.
- Recommended caching patterns:
  * Use Buildkite hosted agents with cache volumes mounted at Bazel's repository cache path to avoid fetching external dependencies twice.
  * Standardize cache config via a CI `bazelrc` emitted per job, injected alongside secrets in pre-commands.
  * Use the [official Buildkite plugins](/docs/pipelines/integrations/plugins/directory) for caching (for example, the [Cache Buildkite plugin](https://buildkite.com/resources/plugins/buildkite-plugins/cache-buildkite-plugin/)) when you need to persist directories by key to object storage (for example, S3).

> 📘
> Field reports show ~30% faster test times on hosted agents when cache volumes are used in combination with a remote cache.

###### Practical tips

- Expect some non-determinism with ephemeral volumes; Bazel will re-download missing pieces. Keep the remote cache as the source of truth.
- Co-locate compute and cache to reduce latency.
- Keep images lean; preinstall `Bazelisk` and critical toolchains.
- Manage credentials via [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) or your KMS - do not hard-code them into `.bazelrc`.

##### Git Large File Storage (LFS) caching

Git LFS stores large files outside your repository in a separate storage location to keep clone sizes manageable, but downloading these objects during checkout can slow builds significantly. The strategies below help you minimize LFS download times:

- Skip LFS on checkout - set `GIT_LFS_SKIP_SMUDGE=1` during checkout, then run targeted `git lfs fetch` and `git lfs checkout` only for required paths.
- Mirror and prefetch - use [Git mirrors](/docs/agent/self-hosted/configure/git-mirrors) for base clones, then prefetch LFS objects with `git lfs fetch --recent` in a pre-command hook.
- Cache volumes - mount `.git/lfs/objects` (and optionally `.git/lfs/tmp`) in a cache volume to reuse blobs between jobs. Expect occasional cache misses; the remote LFS server remains authoritative.

> 📘
> Use Git mirrors to speed up clones and cache volumes to avoid re-downloading large objects.

###### Practical tips

- Preinstall git-lfs in your agent image to avoid per-job setup overhead.
- Cache volumes are scoped per pipeline, shared across steps, and retained for 14 days since last use.
Design for cache misses after inactivity.
- Cache volumes are locality-aware and non-deterministic. Always fetch from the LFS remote when you need guaranteed up-to-date objects.

To find out more about optimizing Buildkite Pipelines for handling Git LFS, see [Understanding the difference in default checkout behaviors](/docs/pipelines/migration/from-githubactions#understand-the-differences-the-difference-in-default-checkout-behaviors).

---

### Monitoring and observability

URL: https://buildkite.com/docs/pipelines/best-practices/monitoring-and-observability

#### Monitoring and observability

This page covers best practices for monitoring, observability, and logging in Buildkite Pipelines.

##### Telemetry operational tips

- When implementing [telemetry](/docs/agent/self-hosted/monitoring-and-observability/tracing#using-opentelemetry-tracing), start by profiling wait and checkout times for your queues - these are usually the biggest, cheapest wins.
- Include pipeline, queue, repo path, and commit metadata in spans and events to make troubleshooting easier.
- Stream Buildkite Pipelines telemetry data to your standard observability stack so that platform-level SLOs and alerts exist alongside the app telemetry, keeping one source of truth.

###### Quick checklist for using telemetry

Choose integrations based on your existing [observability](/docs/pipelines/integrations/observability/overview) tooling and needs:

- Enable [Amazon EventBridge](/docs/pipelines/integrations/observability/amazon-eventbridge) for real-time alerting when you need to integrate with AWS-native tooling. Start by setting up notifications and subscribing your alerting pipeline.
- Turn on [OpenTelemetry (OTel)](/docs/pipelines/integrations/observability/opentelemetry) export when you need vendor-neutral observability that works with your existing OTel collector. Start with job spans and queue metrics.
- If you are using [Datadog](/docs/pipelines/integrations/observability/datadog), enable agent APM tracing.
- If you are using [Backstage](/docs/pipelines/integrations/other/backstage), integrate the [Buildkite Backstage plugin](https://github.com/buildkite/backstage-plugin) to surface pipeline health and build status directly in your developer portal.
- If you are using [Honeycomb](/docs/pipelines/integrations/observability/honeycomb), send build events and traces to enable high-cardinality analysis of pipeline performance and failures.

###### Core pipeline telemetry recommendations

Establish standardized metrics collection across all pipelines to enable consistent [monitoring](/docs/agent/self-hosted/monitoring-and-observability) and analysis:

- Track build times by pipeline, step, and queue to identify performance bottlenecks with build duration metrics.
- Monitor agent availability and scaling efficiency across different workload types by tracking queue wait times.
- Measure success rates by pipeline, branch, and time period to identify reliability trends through failure rate analysis.
- Standardize retry counts for flaky tests and assign custom exit statuses that you can report on with your telemetry provider.
- Track retry success rates by exit code to differentiate between transient failures worth retrying and permanent failures that need fixing.
- Use the [OTel integration](/docs/pipelines/integrations/observability/opentelemetry#opentelemetry-tracing-notification-service) to gain deep visibility into pipeline execution flows.

###### Using analytics for performance improvement

- Monitor build duration, throughput, and success rate as key metrics. Use the [OTel integration](/docs/pipelines/integrations/observability/opentelemetry) and [queue metrics](/docs/pipelines/insights/queue-metrics).
- You can also use the [OTel integration](/docs/pipelines/integrations/observability/opentelemetry) to identify the slowest steps and optimize them through bottleneck analysis.
- Look for repeated error types with failure clustering.
##### Logging and monitoring

- Favor JSON or other parsable formats for structured logs, as such formats can be easily queried when debugging. Use [log groups](/docs/pipelines/configure/managing-log-output#grouping-log-output) to visually delineate relevant sections in the logs.
- Differentiate between info, warnings, and errors by using appropriate log levels.
- Store logs, reports, and binaries as [artifacts](/docs/pipelines/configure/artifacts) for debugging and compliance.
- Use [cluster insights](/docs/pipelines/insights/clusters) or external tools to analyze durations and failure patterns and track trends.
- Avoid creating log files that are too large. Large log files make it harder to troubleshoot issues and are harder to manage in the Buildkite Pipelines interface.
  * To avoid overly large log files, avoid verbose output from apps and tools unless needed. See also [Managing log output](/docs/pipelines/configure/managing-log-output#log-output-limits).
  * If you are using Bazel, note that Bazel's log output is extremely verbose. Consider using the [Bazel BEP Failure Analyzer Buildkite Plugin](https://buildkite.com/resources/plugins/buildkite-plugins/bazel-annotate-buildkite-plugin/) to get a simplified view of the errors.

###### Set relevant alerts

- Notify responsible teams about failing builds with [failure alerts](/docs/pipelines/configure/notify#slack-channel-and-direct-messages-conditional-slack-notifications).
- Detect bottlenecks when builds queue for too long by monitoring queue depth. You can use [queue metrics (insights)](/docs/pipelines/insights/queue-metrics) for this.
- Trigger alerts when agents go offline or degrade to monitor agent health. If individual agent health is less of a concern, terminate an unhealthy instance and spin up a new one.
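The log groups mentioned above are driven by plain `echo` markers at the start of a line; a minimal sketch (group names are illustrative):

```shell
#!/bin/bash
# Sketch: lines starting with "--- " or "+++ " become groups in the
# Buildkite log viewer ("--- " collapsed by default, "+++ " expanded).
echo "--- Installing dependencies"
echo "install output goes here"
echo "+++ Running tests"
echo "test output goes here"
```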
##### Getting metrics out of Buildkite Pipelines

Buildkite Pipelines provides multiple ways to export CI/CD metrics depending on your needs (agent fleet health, build performance, trace correlation, [test quality](/docs/test-engine), and so on) and where you want the data (Datadog, Prometheus, Grafana, CloudWatch, your own OpenTelemetry collector, or Buildkite's [built-in dashboards](/docs/pipelines/insights/clusters)). Most teams need two or three of these approaches working together, as they are complementary rather than competing. The following sections introduce each approach, explain when to use it, and link to detailed setup documentation.

###### Decision matrix

What you want to measure | Best approach | Plan tier | Push or pull | Key destinations
--- | --- | --- | --- | ---
Agent fleet health (agents online, busy, idle per queue) | [buildkite-agent-metrics](/docs/agent/self-hosted/monitoring-and-observability#buildkite-agent-metrics-cli) | All | Pull (polls Buildkite API) | Prometheus, StatsD/DogStatsD to Datadog, CloudWatch
Agent process metrics (goroutines, memory, GC) | [Agent health check service](/docs/agent/self-hosted/monitoring-and-observability#health-checking-metrics-and-status-page) | All | Pull (Prometheus scrape) | Prometheus
Build and job lifecycle traces¹ (spans, durations, wait times) | [OpenTelemetry notification service](/docs/pipelines/integrations/observability/opentelemetry#opentelemetry-tracing-notification-service) | Enterprise | Push (OTel) | Any OTel-compatible collector ([Honeycomb](/docs/pipelines/integrations/observability/honeycomb), Grafana Tempo, [Datadog](/docs/pipelines/integrations/observability/datadog), and others)
Agent-side job execution traces | [OpenTelemetry agent tracing](/docs/agent/self-hosted/monitoring-and-observability/tracing) | All | Push (OTel) | Any OTel-compatible collector
Queue depth, wait times, concurrency² | [Cluster insights](/docs/pipelines/insights/clusters) and [GraphQL API](/docs/apis/graphql-api) | Varies | Pull or UI | Built-in UI; GraphQL or REST for custom dashboards
Build events for alerting and dashboards | [Webhooks](/docs/apis/webhooks) and [Amazon EventBridge](/docs/pipelines/integrations/observability/amazon-eventbridge) | All | Push | PagerDuty, Datadog, custom endpoints
Test performance and flaky tests | [Test Engine](/docs/test-engine) | Add-on | UI and API | Built-in UI; API for export

¹ The `buildkite.job` span includes the pipeline slug, build number, and a `wait_time_ms` attribute. You can also use a [Signals to Metrics Connector](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/10f63383121cea32bcbc32ecc76fe9e431332816/connector/signaltometricsconnector/README.md) to produce metrics from spans.

² The GraphQL `ClusterQueue` node exposes a `metrics` field with `connectedAgentsCount`, `runningJobsCount`, `waitingJobsCount`, and `waitTimeSec` (min/p50/p95/max). The same data is available through the REST API at `/v2/organizations/{org}/clusters/{cluster_uuid}/queues/{queue_uuid}/metrics`.

> 📘 buildkite-agent-metrics and the agent health check service are different tools
> The [buildkite-agent-metrics](/docs/agent/self-hosted/monitoring-and-observability#buildkite-agent-metrics-cli) tool gives you fleet-level queue and agent counts by polling the Buildkite API. The agent's [health check service](/docs/agent/self-hosted/monitoring-and-observability#health-checking-metrics-and-status-page) exposes per-agent process health through a Prometheus endpoint on the agent binary itself. You likely want both.

##### Metrics approaches in detail

Each approach below covers a different aspect of CI/CD observability available in Buildkite Pipelines. Choose a combination of these to get full coverage across fleet health, build performance, and test quality.
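As a quick way to exercise the queue-metrics REST endpoint from footnote ², a sketch follows; the organization slug and UUIDs are placeholders, and a real API token would be supplied via the environment:

```shell
#!/bin/bash
# Sketch: fetch cluster queue metrics via the Buildkite REST API.
# All identifiers below are placeholders.
ORG="my-org"
CLUSTER_UUID="11111111-1111-1111-1111-111111111111"
QUEUE_UUID="22222222-2222-2222-2222-222222222222"
URL="https://api.buildkite.com/v2/organizations/${ORG}/clusters/${CLUSTER_UUID}/queues/${QUEUE_UUID}/metrics"

# Uncomment to call the API for real with a token from your environment:
# curl -sf -H "Authorization: Bearer ${BUILDKITE_API_TOKEN}" "$URL"
echo "$URL"
```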
###### Fleet health dashboard

[buildkite-agent-metrics](/docs/agent/self-hosted/monitoring-and-observability#buildkite-agent-metrics-cli) is a standalone binary (separate from the agent) that polls the Buildkite API and exports agent and queue metrics.

**Metrics provided:**

- Agents: total, busy, and idle counts per queue
- Jobs: running, scheduled, and waiting counts
- Queue depth and wait times

**Supported destinations:**

- **Prometheus** — exposes a `/metrics` endpoint for scraping
- **StatsD** — emits StatsD-format metrics, which is also the path to get metrics into [Datadog](/docs/pipelines/integrations/observability/datadog) (configure DogStatsD as the StatsD receiver)
- **CloudWatch** — publishes directly to AWS CloudWatch Metrics

Use this approach when you want a fleet-level view of agent capacity and [queue](/docs/agent/queues) health in your external monitoring tool. This is the primary path for getting agent metrics into Datadog, Prometheus, or CloudWatch.

> 📘 Getting agent metrics into Datadog
> To get Buildkite agent metrics into Datadog, configure `buildkite-agent-metrics` with the StatsD backend pointed at a DogStatsD receiver (the Datadog Agent's built-in StatsD server). See the [buildkite-agent-metrics CLI documentation](/docs/agent/self-hosted/monitoring-and-observability#buildkite-agent-metrics-cli) for setup details.

This tool polls the Buildkite API, so it shows point-in-time snapshots rather than event-level granularity. It does not cover build lifecycle events or trace data.

###### Per-agent process health

The Buildkite agent's [health check service](/docs/agent/self-hosted/monitoring-and-observability#health-checking-metrics-and-status-page) includes a native Prometheus-compatible `/metrics` endpoint served by the agent process itself (available since agent version 3.113.0).
**Metrics provided:**

- Go runtime metrics: goroutines, memory allocation, GC pause times
- Agent process health: uptime, version info

Use this approach when you run Prometheus and want to monitor agent process health alongside your other infrastructure. This is useful for detecting agent crashes, memory leaks, or degraded agents.

This endpoint shows individual agent process health, not fleet-level queue or capacity data. For fleet-level metrics, use [buildkite-agent-metrics](/docs/agent/self-hosted/monitoring-and-observability#buildkite-agent-metrics-cli) alongside it.

###### Build lifecycle traces with OpenTelemetry

> 🚧 Enterprise only feature
> The OpenTelemetry tracing notification service requires an Enterprise plan. It provides traces (spans), not traditional metrics (gauges or counters). If you need time-series metrics, you need to derive them from spans in your backend (for example, using span-to-metrics features in Datadog or Grafana).

The [OpenTelemetry tracing notification service](/docs/pipelines/integrations/observability/opentelemetry#opentelemetry-tracing-notification-service) pushes build and job lifecycle events as OpenTelemetry (OTel) traces to your collector.

**Data provided (as trace spans):**

- Build lifecycle: created, scheduled, running, finished
- Job lifecycle with durations, wait times, and queue information
- Pipeline and organization metadata as span attributes

**Supported destinations:** Any OTel-compatible backend, including [Honeycomb](/docs/pipelines/integrations/observability/honeycomb), Grafana, [Datadog](/docs/pipelines/integrations/observability/datadog) APM, Jaeger, or your own OpenTelemetry collector.

Use this approach when you have an existing distributed tracing setup and want CI/CD events to appear as spans alongside your application traces. This is best for correlating build activity with deployments and service health.
###### Agent-side execution traces The Buildkite agent can emit [OpenTelemetry spans](/docs/agent/self-hosted/monitoring-and-observability/tracing) for job execution, providing execution-side trace context. **Data provided (as trace spans):** - Job checkout, plugin, command, and artifact upload phases as individual spans - Execution timing for each phase **Supported destinations:** Any OTel-compatible backend. Use this approach when you want end-to-end trace context flowing from your application code through CI and back. This works alongside the [notification service](/docs/pipelines/best-practices/monitoring-and-observability#metrics-approaches-in-detail-build-lifecycle-traces-with-opentelemetry), as they are complementary: - **Notification service** provides control-plane lifecycle (build created, scheduled, running) - **Agent tracing** provides execution-side detail (checkout, plugins, command, artifacts) ###### Built-in cluster insights dashboards Buildkite's built-in [cluster insights](/docs/pipelines/insights/clusters) dashboards show queue health, wait times, agent utilization, and concurrency. **Metrics provided:** - Queue depth and wait times over time - Agent utilization and concurrency - Job throughput Use this approach for quick visual checks of CI health without any external tooling. This is useful for debugging queue backups or capacity issues in real time. For queue-specific data, see [queue metrics](/docs/pipelines/insights/queue-metrics). Note that some of the data shown in cluster insights is not yet available through an external export path (API, OpenTelemetry, or otherwise). ###### Custom dashboards with the GraphQL API Buildkite's [GraphQL API](/docs/apis/graphql-api) exposes build, job, agent, pipeline, and queue data for programmatic access. 
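For example, a query along these lines pulls recent build timings for a pipeline. This is a sketch — verify the field names against the current schema in the GraphQL explorer before relying on them:

```graphql
query RecentBuildTimings {
  pipeline(slug: "org-slug/pipeline-slug") {
    builds(first: 25) {
      edges {
        node {
          number
          state
          startedAt
          finishedAt
        }
      }
    }
  }
}
```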
**Data available:** - Build and job metadata, statuses, timings - Agent and queue information - Pipeline configuration and metrics Use this approach when building custom dashboards (for example, in Retool or Grafana using a JSON API datasource), automation scripts, or when feeding data into your own data warehouse. This is a polling-based approach, so you need to build your own scheduling to keep data fresh. [Rate limits](/docs/apis/graphql/graphql-resource-limits#rate-limits) apply. ###### Real-time events with webhooks and EventBridge Buildkite pushes build, job, and agent lifecycle events to your HTTP endpoints ([webhooks](/docs/apis/webhooks)) or [Amazon EventBridge](/docs/pipelines/integrations/observability/amazon-eventbridge). **Events available:** - Build created, started, finished, blocked - Job scheduled, started, finished, activated - Agent connected, disconnected, stopped **Supported destinations:** Any HTTP endpoint (PagerDuty, Datadog webhook intake, custom services), or Amazon EventBridge to Lambda, SQS, or SNS. Use this approach for event-driven alerting (for example, notifying a team when a build fails), feeding CI events into incident management systems, or building custom integrations. You can also configure [pipeline-level notifications](/docs/pipelines/configure/notify) directly in your pipeline YAML. ###### Test-level performance metrics [Buildkite Test Engine](/docs/test-engine) ingests test results and provides test-level metrics. **Metrics provided:** - Test duration trends - Flaky test detection and rates - Pass and fail rates over time - Slowest tests Use this approach when you care about test health independently from build infrastructure health. This is best for engineering teams focused on test suite reliability and performance. > 🚧 > Test Engine is a separate product from build and agent metrics. It covers test execution quality, not CI infrastructure health. 
##### Common metrics recipes The following recipes show how to connect Buildkite Pipelines' metrics to popular destinations. Each one maps a common goal to the right approach and configuration. ###### Agent metrics in Datadog Configure [buildkite-agent-metrics](/docs/agent/self-hosted/monitoring-and-observability#buildkite-agent-metrics-cli) to emit StatsD metrics and point it at your Datadog Agent's DogStatsD listener (default: `localhost:8125`). This gives you agent counts, queue depth, and job counts as Datadog metrics that you can graph and alert on.

```bash
buildkite-agent-metrics -backend statsd \
  -statsd-host localhost:8125 \
  -statsd-tags \
  -token $BUILDKITE_AGENT_TOKEN
```

The `-statsd-tags` flag enables Datadog-compatible tagging, so metrics are tagged by `queue` rather than including the queue name in the metric name. This makes it easier to filter and group metrics in Datadog dashboards. ###### Build traces in Honeycomb or Grafana Tempo Set up the [OpenTelemetry tracing notification service](/docs/pipelines/integrations/observability/opentelemetry#opentelemetry-tracing-notification-service) to push to your OTel endpoint. For deeper execution-phase spans, also enable [agent-level OpenTelemetry tracing](/docs/agent/self-hosted/monitoring-and-observability/tracing). Together they provide control-plane lifecycle and execution detail. ###### Queue wait times in Prometheus Run [buildkite-agent-metrics](/docs/agent/self-hosted/monitoring-and-observability#buildkite-agent-metrics-cli) with the Prometheus backend and scrape its `/metrics` endpoint. You get queue-level wait time metrics. For more granular per-job wait times, use OpenTelemetry traces, which provide span durations rather than traditional gauges. ###### Build failure alerts in PagerDuty Configure a [webhook notification service](/docs/apis/webhooks) to send `build.finished` events to PagerDuty's Events API. Filter on `build.state == "failed"` in PagerDuty's event rules.
You can also use [conditional notifications](/docs/pipelines/configure/notify#conditional-notifications) in your pipeline YAML to send alerts to specific channels. ###### Pipeline performance data collection Poll the [GraphQL API](/docs/apis/graphql-api) for build and job data on a schedule and store it in your own data warehouse. The API has time window limits on queryable data, so start collecting early. For built-in historical views, [cluster insights](/docs/pipelines/insights/clusters) provides some data with limited time ranges. ###### Per-agent process health in Prometheus Enable the agent's [health check service](/docs/agent/self-hosted/monitoring-and-observability#health-checking-metrics-and-status-page) and add the `/metrics` endpoint to your Prometheus scrape config. This gives you Go runtime metrics for each agent process, which is useful for detecting degraded or unhealthy agents. ##### Current limitations The current metrics capabilities have known limitations in the following areas: - **Metrics export parity:** [Cluster insights](/docs/pipelines/insights/clusters) shows data that can't be fully replicated through any external export path today. If you are building external dashboards, some metrics might not currently be available for export. - **OpenTelemetry enrichment:** Additional span attributes such as build metadata, trigger context, and span links for triggered builds are being actively improved. - **Historical data:** Current cluster insights and [queue metrics](/docs/pipelines/insights/queue-metrics) have limited lookback periods. If you need longer time windows for capacity planning, consider using the [GraphQL API](/docs/apis/graphql-api) to collect and store data in your own warehouse. - **Traces and metrics gap:** OpenTelemetry exports are trace-based (spans), but some workflows require traditional time-series metrics (gauges, counters).
Converting spans to metrics requires backend-side processing that not all observability stacks handle well. - **Event payload coverage:** [Webhooks](/docs/apis/webhooks) and [Amazon EventBridge](/docs/pipelines/integrations/observability/amazon-eventbridge) event payloads don't include all metadata, such as retry context and manual-versus-automatic action flags. --- ### Platform controls URL: https://buildkite.com/docs/pipelines/best-practices/platform-controls #### Platform controls This guide covers how platform (infrastructure) teams can maintain centralized control while giving developer (engineering) teams the flexibility they need to run and observe pipelines in your Buildkite organization. > 📘 > If you're looking for in-depth information on security controls, see [Enforcing security controls](/docs/pipelines/best-practices/security-controls). ##### Concept of platform management The key to successful administration of the Buildkite Pipelines platform lies in finding the right balance between centralized control and developer autonomy. Platform teams need to manage shared resources and enforce company-wide standards while avoiding becoming a bottleneck for feature teams. The distinction between platform and developer teams is that the platform team specifies settings such as infrastructure size, machine capacity, maximum retry attempts, and timeouts in the [YAML pipeline configurations included in the codebase](/docs/pipelines/create-your-own#create-a-pipeline), which developer teams leave unchanged. The platform team also owns the scripts that read these YAML configuration files and generate the correct pipeline(s), and allocates agents (with the appropriate underlying capacity) to run the jobs in those pipelines.
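As a sketch of this division of responsibility, a platform-owned generator script can hold the enforced settings and emit the pipeline at build time. The script name, variable names, and default values below are illustrative assumptions, not Buildkite conventions:

```shell
#!/usr/bin/env bash
# Sketch of a platform-owned pipeline generator. The defaults below are
# illustrative; in practice they would come from the platform team's
# checked-in YAML configuration, which developer teams do not modify.
set -euo pipefail

QUEUE="${QUEUE:-default}"                  # which agent queue runs the jobs
TIMEOUT_MINUTES="${TIMEOUT_MINUTES:-30}"   # platform-enforced job timeout
MAX_RETRIES="${MAX_RETRIES:-2}"            # platform-enforced retry limit

# Emit the generated pipeline on stdout.
cat <<EOF
steps:
  - label: ":hammer: Build and test"
    command: "make test"
    timeout_in_minutes: ${TIMEOUT_MINUTES}
    retry:
      automatic:
        - limit: ${MAX_RETRIES}
    agents:
      queue: "${QUEUE}"
EOF
```

A pipeline's only checked-in step then runs something like `./generate-pipeline.sh | buildkite-agent pipeline upload`, so developer teams consume the generated steps without editing the platform-owned settings.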
##### Agent infrastructure administration Platform teams with [organization administrator permissions](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions) decide on agent resource allocation (CPU, RAM, etc.) before agents start picking up jobs. This applies whether you use [hosted agents](/docs/agent/buildkite-hosted), [self-hosted agents](/docs/pipelines/architecture#self-hosted-hybrid-architecture), or cloud deployments ([AWS](/docs/agent/aws), [GCP](/docs/agent/self-hosted/gcp), [Kubernetes](/docs/agent/self-hosted/agent-stack-k8s)). ##### Pipeline templates as a platform control tool Platform teams should create, and remain responsible for, the pipeline YAML and [pipeline templates](/docs/pipelines/governance/templates). > 📘 Enterprise feature > Pipeline templates are only available on an [Enterprise](https://buildkite.com/pricing) plan. Pipeline templates provide platform teams with a powerful mechanism to enforce standardization and security across all CI/CD pipelines in your organization. By creating centrally-managed templates that define approved step configurations, security scanning requirements, deployment patterns, and compliance checks, platform teams can ensure that all developer teams follow established best practices without needing to manually review every pipeline. The ability to update templates centrally means that policy changes or security improvements can be rolled out instantly across all pipelines using that template, eliminating the need to coordinate updates across multiple developer teams. Additionally, platform teams can create different template variants for different environments or application types (microservices, frontend applications, data pipelines) while maintaining consistent underlying security and infrastructure patterns. This provides both flexibility and control over your organization's build and deployment processes.
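For example, a template might bake a mandatory scan in ahead of the team-owned build step. A sketch, in which the script path and queue name are placeholders:

```yaml
steps:
  # Platform-mandated security scan; runs before any team-defined steps.
  - label: ":lock: Dependency scan"
    command: "./scripts/run-security-scan.sh"   # placeholder script path
    agents:
      queue: "security"                         # placeholder queue name
  - wait
  # Team-owned build step; teams customize the command, not the scan above.
  - label: ":hammer: Build"
    command: "make build"
```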
##### Implementing least privilege access The [teams feature](/docs/platform/team-management/permissions#manage-teams-and-permissions) in Buildkite Pipelines provides platform teams with granular access controls and functionality management across pipelines, test suites, and registries throughout your organization. These controls help standardize operations while providing teams with necessary flexibility to manage their own resources within defined boundaries. Team permissions offer three distinct levels for different resources: - **Full Access**: Complete control over pipelines, test suites, and registries. - **Build & Read** (pipelines): Ability to trigger builds and view pipeline details. - **Read Only**: View-only access for monitoring and reporting purposes. Base your permission-granting policies on the principle of least privilege. ###### Automated team management Leverage programmatic controls to maintain consistency: - Use the [GraphQL API](/docs/apis/graphql-api) for automated [team provisioning](/docs/apis/graphql/cookbooks/teams) and [user management](/docs/apis/graphql/cookbooks/organizations). - Implement [SSO integration](/docs/platform/sso) to automatically assign new users to appropriate teams. - Configure agent restrictions using the `BUILDKITE_BUILD_CREATOR_TEAMS` environment variable. - Set up automatic team membership for new organization members. - Use the [Buildkite Terraform provider](/docs/pipelines/best-practices/iac#terraform-provider) to manage users and teams programmatically. Learn more in the [Manage your CI/CD resources as Code with Terraform](https://buildkite.com/resources/blog/manage-your-ci-cd-resources-as-code-with-terraform/) blog post. > 📘 Security incident response > Platform teams can quickly respond to security incidents by immediately removing compromised users from the organization, which instantly revokes all their access to organizational resources.
For organizations with SSO enabled, coordinate user removal both in Buildkite and your SSO provider to prevent re-authentication. Enterprise customers using SCIM deprovisioning can automate this by deactivating users directly in their identity provider. ##### Enforcement of access controls Access controls determine who can view or modify your pipeline configurations. Getting this right means your sensitive pipelines stay in the right hands. - Set up team-based access controls that match how your organization actually works. Give teams the permissions they need, whether that's read-only access for visibility or write permissions for teams managing their own pipelines. Check out [Teams permissions](/docs/platform/team-management/permissions) for details on configuring these settings. - Protect your critical branches. If you're using branch-based workflows, use branch protections to prevent unauthorized changes to sensitive pipelines. This adds a layer of review before changes go live. - Review permissions regularly. As people join, leave, or change roles, and as projects evolve, permissions that made sense six months ago might not make sense today. Schedule periodic access reviews to keep things tidy. - Integrate SSO or SAML if your organization uses an identity provider. This centralizes authentication, makes onboarding and offboarding smoother, and often helps with compliance requirements. It's also one less set of credentials for people to manage. ##### Telemetry reporting Platform teams should implement comprehensive telemetry and observability solutions to monitor pipeline performance, identify reliability issues, and optimize CI/CD infrastructure. Effective telemetry provides actionable insights into build patterns, failure rates, resource utilization, and team productivity while enabling data-driven infrastructure decisions. 
You can turn Buildkite into a first-class source of operational truth for your CI fleet by combining in-product metrics with OpenTelemetry streams, your preferred observability backend, and Buildkite's real-time event feeds. Ensure all pipelines report metrics to your centralized [monitoring](/docs/agent/self-hosted/monitoring-and-observability) system for: - Build success/failure rates - Queue wait times - Agent utilization - Cost per pipeline/team See more in [Monitoring and observability](/docs/pipelines/best-practices/monitoring-and-observability). ##### Setting up notifications for platform teams Timely [notifications](/docs/pipelines/configure/notify) help platform teams keep builds healthy without manually watching dashboards. In Buildkite Pipelines, you can enable the following notification types: - [Basecamp](/docs/pipelines/configure/notify#basecamp-campfire-message) - [Email](/docs/pipelines/configure/notify#email) - [GitHub commit status](/docs/pipelines/configure/notify#github-commit-status) - [GitHub check](/docs/pipelines/configure/notify#github-check) - [PagerDuty](/docs/pipelines/configure/notify#pagerduty-change-events) - [Slack](/docs/pipelines/configure/notify#slack-channel-and-direct-messages) - [Webhooks](/docs/pipelines/configure/notify#webhooks) Setting up notification service(s) allows platform teams to: - Send a success message only when a pipeline that usually fails passes, or when a critical deploy completes. - Route failed builds to a dedicated channel (for example, `#ci-alerts`) so on-call engineers can react quickly. - Tag on-call rotations with `@here` or `@platform-oncall` to avoid alert fatigue for the wider team. - Use thread replies for follow-up logs or links to build pages, keeping the main channel concise. - Configure different channels for routine and critical events. See more in [Notifications](/docs/pipelines/configure/notify#slack-channel-and-direct-messages).
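For example, routing failed builds to a dedicated alerts channel can be expressed with pipeline-level `notify` and a conditional (`#ci-alerts` is a placeholder channel name):

```yaml
notify:
  # Notify the on-call channel only when a build fails.
  - slack:
      channels:
        - "#ci-alerts"
    if: build.state == "failed"
```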
##### Custom checkout scripts Platform teams can standardize code checkout processes across all pipelines by implementing custom checkout hooks that gather consistent metadata, enforce security policies, and prepare the build environment according to organizational standards. Custom checkout scripts ensure that every job starts with the same foundation while accommodating different repository and project requirements. See more in [Git checkout optimization](/docs/pipelines/best-practices/git-checkout-optimization). ##### Cost and billing controls Platform teams can implement various controls and optimization methods to manage Buildkite infrastructure costs effectively. These approaches help balance performance requirements with budget constraints while maintaining visibility into resource utilization across your organization. ###### Implement cost allocation Implement comprehensive cost allocation mechanisms to understand and optimize spending: - Tag builds with team, project, or department identifiers to enable cost attribution. - Generate regular usage reports that break down compute hours by team, project, and queue type. - Track peak usage periods to optimize scaling schedules and resource allocation. - Monitor artifact storage costs and implement retention policies for large or frequently uploaded artifacts. ###### Proactive cost management Set up monitoring and alerting systems to prevent unexpected cost overruns: - Configure alerts for unusual usage spikes that could indicate runaway builds or security incidents. - Implement build timeout policies to prevent stuck or infinite-loop jobs from consuming resources. - Set up automated reporting that provides cost visibility to team leads and budget owners. - Create dashboards that show real-time cost trends and projections for proactive budget management. ###### Cost optimization workflows - Establish regular review cycles to assess queue utilization and right-size resources. 
- Implement automated policies that pause or scale down underutilized queues. - Create cost-aware pipeline design guidelines that help teams optimize their build configurations. - Use build duration and queue wait time metrics to identify opportunities for parallelization or resource optimization. By implementing these cost controls, platform teams can maintain predictable infrastructure spending while ensuring that developer teams have the resources they need for efficient CI/CD operations. ###### User and license management Since the cost of using Buildkite Pipelines (depending on your tier) is partially based on the number of active users, platform administrators can track the number of users in an organization with the help of the following GraphQL query:

```graphql
query getOrgMembersCount {
  organization(slug: "org-slug") {
    members(first: 1) {
      count
    }
  }
}
```

Alternatively, Buildkite organization administrators can view the number of users in a Buildkite organization at https://buildkite.com/organizations/~/users. It's also recommended to: - Monitor user activity and remove inactive accounts to optimize license costs. - Implement automated user provisioning and deprovisioning workflows integrated with your identity management system. - Track user activity patterns using [GraphQL organization queries](/docs/apis/graphql/cookbooks/organizations) to identify optimization opportunities. - Set up alerts when user counts approach license limits to prevent overage charges. ##### Plugin management best practices Platform teams can leverage [Buildkite plugins](/docs/pipelines/integrations/plugins) to standardize tooling, enforce best practices, and reduce configuration duplication across pipelines, for instance, when the same pieces of code are reused in many places. By creating and managing a curated set of plugins, platform teams can provide developer teams with approved, secure, and well-maintained tools while maintaining control over the CI/CD environment.
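One concrete control is requiring teams to pin exact plugin versions rather than tracking a branch, so plugin upgrades are deliberate and auditable. A sketch using the public `docker` plugin (the version tag shown is illustrative):

```yaml
steps:
  - label: ":docker: Test in container"
    command: "npm test"
    plugins:
      # Pin an exact plugin version; never reference a moving branch.
      - docker#v5.9.0:
          image: "node:20"
```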
By establishing secure [Plugin management](/docs/pipelines/best-practices/plugin-management) practices, platform teams can provide developer teams with powerful, standardized tools while maintaining security, compliance, and operational consistency across the entire CI/CD ecosystem. ##### Release and deployment processes Platform teams need to balance deployment velocity with safety and compliance requirements. This means implementing controls that prevent unauthorized production changes while avoiding processes that slow down legitimate deployments. The key is building guardrails into your pipelines that enforce approval workflows, enable gradual rollouts, and maintain visibility into deployment activities across your organization. ###### Deployment approvals and gates Use block steps to require human confirmation before critical deployments. This gives teams a final checkpoint to verify that the right code is going to the right environment:

```yaml
steps:
  - block: ":rocket: Deploy to production?"
    branches: "main"
    fields:
      - select: "Environment"
        key: "environment"
        options:
          - label: "Staging"
            value: "staging"
          - label: "Production"
            value: "production"
```

Block steps work particularly well for production deployments, infrastructure changes, or any operation where you want a human in the loop. Learn more in [Block step](/docs/pipelines/configure/step-types/block-step). For more sophisticated deployment patterns, implement canary releases and staged rollouts directly in your pipelines. This lets you gradually increase traffic to new versions while monitoring for issues. See [Deployments](/docs/pipelines/deployments) for implementation details, or use the [Buildkite deployment plugins](https://buildkite.com/docs/pipelines/deployments/deployment-plugins) to standardize these patterns across your organization. ###### Reliability and resilience practices Build resilience testing into your platform operations.
Periodically inject failure scenarios (failing agents, flaky dependencies, network issues, etc.) to validate that your pipelines handle problems gracefully. This chaos testing approach helps you identify weak points before they cause real incidents. Never ignore failing steps without a clear follow-up plan. Silent failures erode trust in your CI/CD platform and hide problems that will eventually cause larger issues. Configure your pipelines to surface failures immediately and ensure someone is responsible for addressing them. ##### Build context and visibility with annotations Use [annotations](/docs/agent/cli/reference/annotate) to provide build context and link to relevant documentation or monitoring systems. Annotations help developer teams quickly understand build failures, access troubleshooting resources, and find related operational data without leaving the Buildkite interface. Platform teams can standardize annotation patterns across pipelines to include: - Links to internal FAQs or runbooks for common build issues - Direct links to monitoring dashboards showing real-time infrastructure health - Pointers to relevant documentation for pipeline-specific processes - Contact information for on-call teams or subject matter experts By embedding these contextual links directly in build output, you reduce the time teams spend hunting for information when builds fail or when they need to understand pipeline behavior. 
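As a sketch of the pattern, a hook or failure handler can compose a standard annotation body and attach it to the build with `buildkite-agent annotate`. The URLs below are hypothetical placeholders for your internal tooling:

```shell
# Compose a standard annotation body with troubleshooting links.
# The URLs are hypothetical placeholders for your internal tooling.
body='### :books: Build resources
- [CI troubleshooting runbook](https://wiki.example.com/ci-runbook)
- [Agent health dashboard](https://grafana.example.com/d/agent-health)'

# Inside a job (for example, from a pre-exit hook), attach it to the build:
#   echo "$body" | buildkite-agent annotate --style "info" --context "resources"
echo "$body"
```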
##### Next steps The following are the key areas we recommend focusing on next: - [Security controls](/docs/pipelines/security/enforcing-security-controls) - [Monitoring and observability](/docs/pipelines/best-practices/monitoring-and-observability) strategies - [Integration](/docs/pipelines/integrations) with your existing infrastructure --- ### Enforcing security controls URL: https://buildkite.com/docs/pipelines/best-practices/security-controls #### Enforcing security controls This guide helps security engineers identify common risks and implement proven prevention and mitigation strategies across key areas of the Buildkite ecosystem. The guide covers secrets management, supply chain security, artifact storage reliability, and platform hardening. Use this guide as a reference for building a defensible, auditable, and resilient CI/CD foundation with Buildkite. ##### Authentication and session security in the Buildkite interface, APIs, and CLI **Risk:** Unauthorized access through credential compromise, user impersonation, session hijacking, or overprivileged API keys. **Controls:** - Enforce either [Single sign-on (SSO)](/docs/platform/sso) or [Two-factor authentication (2FA/MFA)](/docs/platform/team-management/enforce-2fa) for all access to the Buildkite interface. - Use time-scoped API tokens with [automated rotation](/docs/apis/managing-api-tokens#api-token-security-rotation). - Apply the least privilege principle when [scoping API keys](/docs/apis/managing-api-tokens#token-scopes). - [Restrict API tokens to specific IP ranges](/docs/apis/managing-api-tokens#restricting-api-access-by-ip-address) where possible. ##### Source code security and version control integrity **Risk:** Compromised repository access, unsigned commits, unauthorized branch access. **Controls:** - Use the [Buildkite GitHub App integration](/docs/pipelines/source-control/github#connecting-buildkite-and-github) for secure repository connections.
- Enforce [SCM signed commits](https://buildkite.com/resources/blog/securing-your-software-supply-chain-signed-git-commits-with-oidc-and-sigstore/) and branch protection rules on [GitHub](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/managing-protected-branches/managing-a-branch-protection-rule) and [GitLab](https://docs.gitlab.com/user/project/repository/branches/protected/) with Buildkite Pipelines [conditionals](/docs/pipelines/configure/conditionals). - Map Buildkite users to SCM identities with [team-based permissions](/docs/platform/team-management/permissions#manage-teams-and-permissions-team-level-permissions). Use [agent hooks](/docs/agent/hooks) to ensure only authorized team members can trigger builds. You can also see a [live example](https://buildkite.com/resources/examples/buildkite/agent-hooks-example/) to discover how agent hooks operate in builds. - Utilize [programmatic team management](/docs/platform/team-management/permissions#manage-teams-and-permissions-programmatically-managing-teams) alongside pre-merge hooks to verify that commit authors have appropriate permissions before allowing build execution. - [Disable triggering builds on forks](/docs/pipelines/source-control/github#running-builds-on-pull-requests) for public pipelines and repositories to ensure open source contributors are unable to substantially alter a pipeline to extract secrets. ##### Dependencies and package management **Risk:** Malicious or [typosquatted](https://en.wikipedia.org/wiki/Typosquatting) packages that can execute arbitrary code during builds, vulnerable dependencies that persist in packaged images and production deployments. **Controls:** - Integrate with a container scanning tool to keep track of a [software bill of materials (SBOM)](https://en.wikipedia.org/wiki/Software_supply_chain) for your packages. 
For example, see the following list of community-maintained [SBOM generation tools](https://github.com/cybeats/sbomgen?tab=readme-ov-file#list-of-sbom-generation-tools). - Use Buildkite's official [security and compliance plugins](/docs/pipelines/integrations/security-and-compliance/plugins) (or [write your own plugin](/docs/pipelines/integrations/plugins/writing)) to integrate with your existing security scanning infrastructure for source code, container testing, and vulnerability assessment. - Run automated dependency and malware scanning on every merge using established tools such as [GuardDog](https://github.com/DataDog/guarddog), [Snyk](https://snyk.io/), [Aqua Trivy](https://www.aquasec.com/products/trivy/) (also available as a [Trivy Buildkite plugin](https://buildkite.com/resources/plugins/equinixmetal-buildkite/trivy-buildkite-plugin/)), or cloud security services across your software supply chain. - Use [pipeline templates](/docs/pipelines/governance/templates) (a Buildkite [Enterprise](https://buildkite.com/pricing/) plan-only feature), [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines), and [agent hooks](/docs/agent/hooks) to ensure security scans cannot be bypassed by modifying `pipeline.yml` files, and to standardize security testing across all the pipelines in a Buildkite organization. - Track dependencies using [Buildkite Annotations](/docs/agent/cli/reference/annotate) to document exact package versions in each build. This creates an auditable record enabling targeted remediation when vulnerabilities are discovered. - Establish automated response workflows that trigger [notifications](/docs/pipelines/configure/notify) and remediation processes when [critical CVEs](https://www.cve.org/) are identified.
**Controls:** - Leverage [Buildkite's secrets plugins](/docs/pipelines/integrations/secrets/plugins) for secrets management and the [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) feature to ensure that secrets are only available where explicitly required. Note that Buildkite [automatically redacts secrets](/docs/pipelines/security/secrets/buildkite-secrets#redaction) in logs. - Integrate external secrets management using dedicated [secrets storage services](/docs/pipelines/security/secrets/managing#using-a-secrets-storage-service) such as [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/) or [HashiCorp Vault](https://www.vaultproject.io). - [Export secrets with environment hooks](/docs/pipelines/security/secrets/managing#without-a-secrets-storage-service-exporting-secrets-with-environment-hooks) for agent-level secrets, rather than injecting them at build runtime. If you absolutely have to inject your secrets at runtime, avoid storing them as static environment variables. - Establish environment-specific [cluster](/docs/pipelines/security/clusters/manage) and [queue](/docs/agent/queues/managing) segmentation of your builds to restrict access so that builds in a queue can only access the secrets they require to run. - Monitor how secrets are accessed within your CI/CD environment by reviewing the [Audit Log](/docs/platform/audit-log). - Use additional secret scanning tools such as [git-secrets](https://github.com/awslabs/git-secrets) to prevent accidental commits of secrets to repositories before they enter the build process. - Consider using strict pipeline upload guards, such as the [reject-secrets](/docs/agent/cli/reference/pipeline#reject-secrets) option for `buildkite-agent pipeline upload` commands. - Have incident response procedures for secret compromise, including automated revocation and rotation processes. 
Note that cluster maintainers can [revoke tokens](/docs/agent/self-hosted/tokens#revoke-a-token) using the REST API for rapid containment.

##### Buildkite agent security

**Risk:** Buildkite agent compromise leading to privilege escalation, lateral movement, data access, and malicious code execution.

**Controls:**

- Set [granular command authorization controls](/docs/agent/self-hosted/security#restrict-access-by-the-buildkite-agent-controller) for what the `buildkite-agent` user is allowed to run, restricting executable operations to predefined security parameters.
- Configure regular automated credential rotation, such as setting automatic [expiration dates](/docs/agent/self-hosted/tokens#agent-token-lifetime) on your agent tokens, to limit the window in which a compromised token can be abused.
- [Upgrade your Buildkite agents](/docs/agent/self-hosted/install#upgrade-agents) on a regular basis.
- Deploy ephemeral build environments using isolated virtual machines or containers. Ensure that your deployment environment is secure by installing minimal operating systems, disabling inbound SSH access, and enforcing strict network egress controls.
- Isolate pipelines with sensitive builds to run on dedicated agent pools within their own [cluster](/docs/pipelines/security/clusters). This ensures that critical workloads cannot be affected by the compromise of less secure environments—for example, open-source repositories with unverified code.
- Enable [pipeline signing](/docs/agent/self-hosted/security/signed-pipelines) and verification mechanisms.
- Set appropriate [job time limits](/docs/pipelines/configure/build-timeouts#command-timeouts) to limit the potential duration of malicious code execution on compromised agents.
- Use [OIDC-based authentication](/docs/pipelines/security/oidc) to establish secure, short-lived credential exchange between agents and cloud infrastructure, leveraging session tags to add strong unique claims.
- [Disable command evaluation](/docs/agent/self-hosted/security#restrict-access-by-the-buildkite-agent-controller-disable-command-evaluation) where appropriate and enforce script-only execution instead.
- Consider using the [`--no-plugins` buildkite-agent start option](/docs/agent/cli/reference/start#no-plugins) to prevent the agent from loading any plugins.
- Learn more about making your virtual machine or container running the `buildkite-agent` process more secure in [Securing your Buildkite agent](/docs/agent/self-hosted/security).

> 📘 On better Buildkite agent security
> For small teams with limited experience in hosting and hardening infrastructure, [Buildkite hosted agents](/docs/agent/buildkite-hosted) provide a secure, fully managed solution that reduces operational overhead. However, organizations with stringent governance, risk, and compliance (GRC) requirements that mandate enhanced security postures should deploy [self-hosted agents](/docs/pipelines/architecture#self-hosted-hybrid-architecture) for their most sensitive workloads, as this approach offers greater control over security configurations and compliance controls.

##### API access token compromise

**Risk:** Compromised or overprivileged Buildkite API access tokens enabling unauthorized pipeline access, code execution, and data theft.

**Controls:**

- Create API access tokens with only the minimal [required scopes](/docs/apis/managing-api-tokens#token-scopes). Use [portals](/docs/apis/portals) to limit GraphQL query scope. Review permissions regularly to match current needs.
- Establish [rotation of access tokens](/docs/apis/managing-api-tokens#api-token-security-rotation) with defined expiration periods. Automate rotation where possible to limit exposure windows.
- Bind access tokens to [specific IP addresses or network segments](/docs/apis/managing-api-tokens#restricting-api-access-by-ip-address).
- Use network address translation (NAT) with centralized egress routing for enhanced monitoring and rapid compromise detection.
- Deploy access tokens within dedicated virtual private clouds (VPCs) using [Buildkite’s Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/security#network-configuration) for network isolation.
- Monitor access token usage patterns through the [Audit Log](/docs/platform/audit-log). Set up alerts on unusual patterns: unexpected locations, excessive API calls, unauthorized resource access.
- When using the [Buildkite Model Context Protocol (MCP) server](/docs/apis/mcp-server), prefer using the [remote MCP server](/docs/apis/mcp-server#types-of-mcp-servers-remote-mcp-server), as this MCP server type issues short-lived OAuth access tokens, compared to the local MCP server, which requires you to configure an API access token that can pose a security risk if leaked.

##### Network and transport security

**Risk:** Interception of traffic between agents, the Buildkite API, and artifact storage, as well as data tampering, exposure, and unauthorized external communications potentially allowing malicious code injection.

**Controls:**

Buildkite enforces TLS encryption by default for all platform communications, ensuring traffic to and from Buildkite services is encrypted in transit. To further tighten your network security, you can take these additional steps:

- For [self-hosted agents](/docs/pipelines/architecture#self-hosted-hybrid-architecture), implement a [zero trust architecture (ZTA)](https://www.ibm.com/think/topics/zero-trust) with least-privilege egress rules.
- Monitor network traffic for anomalies or suspicious connection attempts from build agents.
- Consider taking your infrastructure fully into the cloud with the help of [Buildkite hosted agents](/docs/agent/buildkite-hosted) or by running your agents in [AWS](/docs/agent/aws) or in [Google Cloud](/docs/agent/self-hosted/gcp).
- Harden your cloud infrastructure perimeter using [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) or [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html) for the AWS services, or [Private Google Access](https://cloud.google.com/vpc/docs/private-google-access) for Google Cloud. ##### Artifact storage and integrity **Risk:** Artifact tampering, data exfiltration, compromised deployments. **Controls:** - Enforce encryption at rest and in transit when storing and transferring build artifacts. - Use cloud storage for storing build artifacts. You can use [Buildkite Package Registries](/docs/package-registries) or other supported private cloud storage options: * [AWS S3 buckets](/docs/agent/cli/reference/artifact#using-your-private-aws-s3-bucket) * [Google Cloud Storage buckets](/docs/agent/cli/reference/artifact#using-your-private-google-cloud-bucket) * [Azure Blob containers](/docs/agent/cli/reference/artifact#using-your-private-azure-blob-container) - Implement artifact signing using Buildkite's [SLSA provenance](/docs/package-registries/security/slsa-provenance) feature, or alternatively using [in-toto](https://in-toto.io/) or [cosign](https://github.com/sigstore/cosign), and establish verification processes before deployment to document artifact provenance and detect tampering. - Enforce [KMS signing](/docs/agent/self-hosted/security/signed-pipelines#aws-kms-managed-key-setup) of your pipelines. ##### Consistent pipeline-as-code approach **Risk:** Inconsistent security implementations across teams and projects within your Buildkite organization, creating undetected security blind spots and gaps. 
**Controls:** - Adopt an [infrastructure-as-code (IaC)](https://aws.amazon.com/what-is/iac/) approach and mandate the exclusive use of the [Buildkite Terraform provider](https://buildkite.com/resources/blog/manage-your-ci-cd-resources-as-code-with-terraform/) for all pipeline configuration management, implementing a mandatory two-reviewer approval process for infrastructure changes. **Note:** Organizations without proper governance and peer review protocols may have gaps in their security posture. The suggested approach is to create a service account for Terraform that is not tied to any specific user identity using your identity provider. Use this account's API key to make changes (in the pipelines, tokens, etc.) in Terraform through the Buildkite Terraform provider, while enforcing Buildkite's role-based access control capabilities and [GitOps](https://www.redhat.com/en/topics/devops/what-is-gitops) workflows. - Restrict pipeline configuration access to [Buildkite organization administrators](/docs/pipelines/security/permissions#manage-teams-and-permissions-organization-level-permissions) only by [making your pipelines **Read Only** to your teams](/docs/pipelines/security/permissions#manage-teams-and-permissions-pipeline-level-permissions). - Set zero-tolerance policies for manual pipeline overrides, with any unauthorized modifications triggering immediate alerts within your [security information and event management (SIEM)](https://en.wikipedia.org/wiki/Security_information_and_event_management) system to ensure rapid incident response and maintain configuration integrity. - Establish a "break glass" protocol that is tied to SIEM alerts in case someone has to make manual modifications to Buildkite's systems outside of the automated IaC workflow. - Deploy agent-level [lifecycle hooks](/docs/agent/hooks#agent-lifecycle-hooks) as these cannot be bypassed or avoided through a `pipeline.yml` modification or other developer-level code change. 
You can also customize the hooks to scan your `pipeline.yml` files to validate their structure and contents, and ensure that those files conform to your Buildkite organization's security requirements.
- Use [ephemeral Buildkite agents](/docs/pipelines/glossary#ephemeral-agent) (like the Buildkite [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s)) or tools such as [Ansible](https://docs.ansible.com/) or [Puppet](https://www.puppet.com/blog/puppet-cicd) to enforce configuration changes on persistent hosts.
- Mandate comprehensive security scanning (including container vulnerability and static code analysis scanning) and [SBOM](https://en.wikipedia.org/wiki/Software_supply_chain) generation for all builds. For instance, use [pipeline templates](/docs/pipelines/governance/templates) to ensure every pipeline in your Buildkite organization inherits predetermined configurations and maintains consistent baseline protections.
- Restrict plugin usage to [private](/docs/pipelines/integrations/plugins/using#plugin-sources) or [version-pinned](/docs/pipelines/integrations/plugins/using#pinning-plugin-versions) plugins to prevent supply chain attacks and ensure reproducible builds with known, vetted components.
- Use only [verified Docker images](https://docs.docker.com/docker-hub/repos/manage/trusted-content/dvp-program/).
- Scope pipelines to specific [agent queues](/docs/agent/queues#assigning-a-self-hosted-agent-to-a-queue) to maintain separation between environments and prevent unauthorized access across build processes.
- Use permission models to [target appropriate agents](/docs/pipelines/configure/defining-steps#targeting-specific-agents) for builds, ensuring sensitive workloads run only on designated, secured infrastructure.
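Several of these controls can be expressed directly in a pipeline definition. The following is a minimal sketch only — the step label, script path, queue name, plugin version, and image are all illustrative assumptions, not values from the example above:

```yaml
steps:
  - label: ":lock: Security scan"
    command: "./scripts/scan.sh"   # illustrative scan entry point
    agents:
      # Scope the step to a dedicated, hardened queue (example queue key).
      queue: "security"
    plugins:
      # Pin the plugin to an exact version tag (illustrative version) rather
      # than a floating reference, for reproducible, vetted builds.
      - docker#v5.0.0:
          image: "registry.example.com/scanner:1.4.2"  # example pinned image
```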
##### Monitoring, anomaly detection, logging **Risk:** Insufficient monitoring and logging resulting in undetected malicious activity, delayed incident response, and prolonged exposure to security threats within a CI/CD environment. **Controls:** - Export or stream all of your Buildkite Pipelines metrics to your preferred monitoring and observability platform to maintain visibility across CI/CD pipeline activities. Learn more about Buildkite Pipelines' observability platform integrations from the **Observability** section of the [Integrations](/docs/pipelines/integrations) page (for example, see the [OpenTelemetry integration capabilities in Buildkite](/docs/pipelines/integrations/observability/opentelemetry)). - Set up [Amazon EventBridge](/docs/pipelines/integrations/observability/amazon-eventbridge) to consume Buildkite's [Audit Log](/docs/platform/audit-log) and integrate that information with your [SIEM](https://en.wikipedia.org/wiki/Security_information_and_event_management) system. - Monitor logs for anomalies (unusual IPs, secret access patterns, build frequency spikes) and configure automated alerts. ##### Incident response and recovery **Risk:** Security incidents involving leaked secrets, compromised credentials, or unauthorized access to build environments. **Controls:** - Contact support@buildkite.com immediately upon discovering any security incident. [Enterprise Premium Support](https://buildkite.com/pricing/#premium-support) customers can report an incident through their priority support channel. Early notification allows Buildkite to assist with immediate remediation steps. - Buildkite's incident response team can [audit access logs](/docs/platform/audit-log) to identify which users and IP addresses accessed builds containing leaked information. For [Enterprise](https://buildkite.com/pricing/) plan customers, older logs can be rehydrated for in-depth forensic analysis. 
##### Further questions If you didn't find coverage of a security-related question that interests you here, feel free to contact support@buildkite.com. --- ### Using Bazel on Buildkite URL: https://buildkite.com/docs/pipelines/tutorials/bazel #### Using Bazel on Buildkite [Bazel](https://www.bazel.build/) is an open-source build and test tool similar to Make, Maven, and Gradle. Bazel supports large codebases across multiple repositories, and large numbers of users. ##### Using Bazel on Buildkite 1. [Install Bazel](https://docs.bazel.build/install.html) on one or more Buildkite agents. 2. Add an empty [`WORKSPACE` file](https://bazel.build/start/cpp#getting-started) to your project to mark it as a Bazel workspace. 3. Add a [`BUILD` file](https://bazel.build/start/cpp#understand-build) to your project to tell Bazel how to build it. 4. Add the Bazel build target(s) to your Buildkite [Pipeline](/docs/pipelines/configure/defining-steps). ##### Buildkite Bazel example The [Building with Bazel](https://buildkite.com/pipelines/templates/ci/bazel-ci?queryID=2e432af39a35aeac99901b275534243c) example pipeline template demonstrates how a continuous integration pipeline might run on a Bazel project. The visualization below shows the steps in its example pipeline. ##### Buildkite C++ Bazel example The following repository is a simple Bazel example which you can run and customize. Make sure you're signed into your [Buildkite account](https://buildkite.com) and have access to a Buildkite agent [running Bazel](https://docs.bazel.build/install.html), then click through to the example: [:memo: Buildkite Bazel Example A Buildkite Bazel Example you can run and customize. github.com/buildkite/bazel-example](https://github.com/buildkite/bazel-example) ##### Further reading - The [Bazel Tutorial: Build a C++ Project](https://bazel.build/start/cpp) goes into more detail about how to configure more complex Bazel builds, covering multiple build targets and multiple packages. 
- The Bazel [external dependencies docs](https://bazel.build/external/overview) show you how to build other local and remote repositories.

##### Next steps

Now that you've built a simple Bazel example, you can also [use Bazel to create dynamic pipelines and build annotations](/docs/pipelines/tutorials/dynamic-pipelines-and-annotations-using-bazel).

---

### Dynamic pipelines and annotations using Bazel

URL: https://buildkite.com/docs/pipelines/tutorials/dynamic-pipelines-and-annotations-using-bazel

#### Creating dynamic pipelines and build annotations using Bazel

This tutorial takes you through the process of creating dynamic pipelines and build annotations in Buildkite Pipelines, using [Bazel](https://www.bazel.build/) as the build tool. If you are not already familiar with:

- How the Bazel build tool can integrate with Buildkite, learn more about this in the [Using Bazel with Buildkite tutorial](/docs/pipelines/tutorials/bazel), which uses a Buildkite pipeline to build a simple Bazel example.
- The basics of Buildkite Pipelines, run through the [Pipelines getting started tutorial](/docs/pipelines/getting-started) first.

The tutorial uses a [Bazel Monorepo Example](https://github.com/buildkite/bazel-monorepo-example) project, whose `pipeline.py` program within the `.buildkite/` directory is one of the first things run by Buildkite Pipelines when the pipeline commences its build. This Python program creates additional Buildkite pipeline steps (in JSON format) that are then uploaded to the same pipeline, which Buildkite continues to run as part of the same pipeline build. Buildkite pipelines that generate new pipeline steps dynamically like this are known as [_dynamic pipelines_](/docs/pipelines/configure/dynamic-pipelines).
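The general shape of such a generator can be sketched in a few lines of Python. This is a simplified illustration only — the path checks and step labels are assumptions for this sketch, not the example repository's actual `pipeline.py` (which also runs Bazel queries); in a real build, the JSON output would be piped to `buildkite-agent pipeline upload`:

```python
import json

def build_steps(changed_paths):
    """Map changed paths to the Bazel packages that need building."""
    steps = []
    if any(path.startswith("app/") for path in changed_paths):
        steps.append({
            "label": "Build and test //app/...",
            "command": "bazel test //app/...",
        })
    if any(path.startswith("library/") for path in changed_paths):
        steps.append({
            "label": "Build and test //library/...",
            "command": "bazel test //library/...",
        })
    return {"steps": steps}

if __name__ == "__main__":
    # Hard-coded for illustration; a real generator would inspect `git diff`.
    changed = ["app/main.py"]
    # This JSON is what `buildkite-agent pipeline upload` would consume.
    print(json.dumps(build_steps(changed)))
```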
This `pipeline.py` Python program:

- Determines which initial Bazel packages need to be built, based on changes that have been committed to either the `app/` or `library/` files, and then proceeds to upload the relevant steps that build these packages as part of the same pipeline build.
- Also runs [Bazel queries](https://bazel.build/query/guide) to determine which additional Bazel packages (defined within the packages' `BUILD.bazel` files) depend on these initial Bazel packages (for example, `app/`, which depends on `library/`), and then builds those additional packages too.

##### Before you start

To complete this tutorial, you'll need to have done the following:

- Run through the [Getting started with Pipelines](/docs/pipelines/getting-started) tutorial, to familiarize yourself with the basics of Buildkite Pipelines.
- Make your own copy or fork of the [bazel-monorepo-example](https://github.com/buildkite/bazel-monorepo-example) repository within your own GitHub account, or Git-based setup.

##### Set up an agent

Buildkite Pipelines requires an [agent](/docs/agent) running Bazel to build this pipeline. You can [set up your own self-hosted agent](#set-up-an-agent-set-up-a-self-hosted-agent) to do this. However, you can get up and running more rapidly by [creating a Buildkite hosted agent for macOS](#set-up-an-agent-create-a-buildkite-hosted-agent-for-macos) instead.

> 📘 Already running an agent
> If you're already running an agent and its operating system environment is already running [Bazel](https://bazel.build/install), skip to the [next step on creating a pipeline](#create-a-pipeline).
###### Create a Buildkite hosted agent for macOS Unlike [Linux hosted agents](/docs/agent/buildkite-hosted/linux), which would require you to install Bazel or Bazelisk on the agent (for example, using an [agent image](/docs/agent/buildkite-hosted/linux#agent-images)), and implement other configurations to ensure that Bazel runs successfully on the agent (for example, ensuring Bazel runs as a non-root user), [macOS hosted agents](/docs/agent/buildkite-hosted/macos) already come pre-installed with Bazelisk and ready to run Bazel. You can create the first [Buildkite hosted agent](/docs/agent/buildkite-hosted) for [macOS](/docs/agent/buildkite-hosted/macos) within a Buildkite organization for a two-week free trial, after which a usage cost (based on the agent's capacity) is charged per minute. > 📘 > If you're unable to access the Buildkite hosted agent feature or create one in your cluster, please contact support at support@buildkite.com to request access to this feature. Otherwise, you can set yourself up with a [self-hosted agent](#set-up-an-agent-set-up-a-self-hosted-agent) instead. To create your macOS hosted agent: 1. Follow the [Create a Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue) > [Using the Buildkite interface](/docs/agent/queues/managing#create-a-buildkite-hosted-queue-using-the-buildkite-interface) instructions to begin creating your hosted agent within its own queue. As part of this process: * Give this queue an intuitive **key** and **description**, for example, **macos** and **Buildkite macOS hosted queue**, respectively. * In the **Select your agent infrastructure** section, select **Hosted**. * Select **macOS** as the **Machine type** and **Medium** for the **Capacity**. 1. Make your pipelines use your new macOS hosted agent by default, by ensuring its queue is the _default queue_. This should be indicated by **(default)** after the queue's key on the cluster's **Queues** page. 
If this is not the case and another queue is marked **(default)**: 1. On the cluster's **Queues** page, select the queue with the hosted agent you just created. 1. On the queue's **Overview** page, select the **Settings** tab to open this page. 1. In the **Queue Management** section, select **Set as Default Queue**. Your Buildkite macOS hosted agent, as the new default queue, is now ready to use. ###### Set up a self-hosted agent Setting up a self-hosted agent for this tutorial requires you to first install a Buildkite agent in a self-hosted environment, and then install [Bazel](https://www.bazel.build/) to the same environment. To set up a self-hosted agent for this tutorial: 1. Ensure you have followed the [Create a self-hosted queue](/docs/agent/queues/managing#create-a-self-hosted-queue) and relevant [Buildkite agent installation instructions](/docs/agent/self-hosted/install) to get set up with your self-hosted agent. 1. Install Bazel, by following the relevant instructions to install [Bazelisk (recommended)](https://bazel.build/install/bazelisk) or the relevant [Bazel package](https://bazel.build/install) to the same operating system environment that your self-hosted agent was installed to. ##### Create a pipeline Next, you'll create a new pipeline that builds an [example Python project with Bazel](https://github.com/buildkite/bazel-monorepo-example), which in turn creates additional dynamically-generated steps in JSON format that Buildkite runs to build and test a hello-world library. To create this pipeline: 1. [Add a new pipeline](https://buildkite.com/new) in your Buildkite organization, select your GitHub account from the **Any account** dropdown, and specify [your copy or fork of the 'bazel-monorepo-example' repository](#before-you-start) for the **Git Repository** value. 1. On the **New Pipeline** page, select the cluster associated with the [agent you had set up with Bazel](#set-up-an-agent). 1. If necessary, provide a **Name** for your new pipeline. 
1. Select the **Cluster** of the [agent you had previously set up](#set-up-an-agent).
1. If your Buildkite organization already has the [teams feature enabled](/docs/platform/team-management/permissions#manage-teams-and-permissions), choose the **Team** that will have access to this pipeline.
1. Leave all other fields with their pre-filled default values, and select **Create Pipeline**. This associates the example repository with your new pipeline, and adds a step to upload the full pipeline definition from the repository.

##### Build the pipeline

Now that your pipeline has been set up and [created](#create-a-pipeline) in Buildkite Pipelines, it is ready to be built, and you can start making commits to different areas of this project to see how these affect your dynamic pipeline builds.

###### Step 1: Create the first build

1. On the next page after [creating](#create-a-pipeline) your pipeline, which shows its name, select **New Build**. In the resulting dialog, create a build using the pre-filled details.
1. In the **Message** field, enter a short description for the build. For example, **My first build**.
1. Select **Create Build**.
1. Once the build has completed, visit [your pipeline's build summary page](https://buildkite.com/~/bazel-monorepo-example), and verify that only the initial **Compute the pipeline with Python** step has been run.

###### Step 2: Make changes to both an app and library file

1. Edit one of the files within both the `./app` and `./library` directories, and commit and push this change to its `main` branch, with an appropriate message (for example, **A change to both an app and a library file**).
1. On [your pipeline's build summary page](https://buildkite.com/~/bazel-monorepo-example), notice that both the dynamically generated **Build and test //library/...** _and_ **Build and test //app/...** Bazel package build steps have also been run.
1.
Note also the **Bazel Results** build annotation on this pipeline build's results, which is generated from Bazel builds using the [Bazel BEP Annotate Buildkite Plugin](https://github.com/buildkite-plugins/bazel-annotate-buildkite-plugin). This plugin is defined in the example Python project's `utils.py` file, which, in turn, is used by the `pipeline.py` file.

###### Step 3: Make changes to only an app file

1. Edit one of the files within the `./app` directory only, and commit and push this change to its `main` branch, with an appropriate message (for example, **A change to only an app file**).
1. On [your pipeline's build summary page](https://buildkite.com/~/bazel-monorepo-example) again, notice that only the dynamically generated **Build and test //app/...** Bazel package build step is built.

###### Step 4: Make changes to only a library file

1. Edit one of the files within the `./library` directory only, and commit and push this change to its `main` branch, with an appropriate message (for example, **A change to only a library file**).
1. On [your pipeline's build summary page](https://buildkite.com/~/bazel-monorepo-example), notice that both the dynamically generated **Build and test //library/...** _and_ **Build and test //app/...** Bazel package build steps have been built.

**Why?** According to each Bazel package's respective `BUILD.bazel` file in this project, `//app` has a dependency on `//library`. Therefore, if any change is made to a file in `./library`, then `./app` needs to be re-built to determine if the changes in `./library` also affect `./app`.

##### Next steps

That's it! You've successfully configured a Buildkite agent and built a Buildkite pipeline with an example Python program that:

- Builds pipeline steps dynamically.
- Uses Bazel to define Bazel package dependencies, and runs [Bazel queries](https://bazel.build/query/guide) to determine which Bazel packages need to be built (based on their dependencies).
- Generates pipeline build annotations using the [Bazel BEP Annotate Buildkite Plugin](https://github.com/buildkite-plugins/bazel-annotate-buildkite-plugin). 🎉

Learn more about dynamic pipelines from the [Dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) page.

---

### Migrating to YAML steps

URL: https://buildkite.com/docs/pipelines/tutorials/pipeline-upgrade

#### Migrating to YAML steps

This guide explains the differences between the web-based editor and the YAML Steps editor, and shows how to migrate your organization's pipelines to YAML Steps. There are two parts to migrating:

1. Opt in to using YAML Steps for any new pipelines created by your organization.
1. Migrate all existing pipelines, either one at a time or using bulk migration.

After migrating to YAML Steps, you will no longer be able to access the web-based step editor.

##### What is the YAML Steps editor?

Instead of creating and managing pipeline steps with the GUI-like step editor, pipelines will be managed in YAML in the YAML Steps editor. A new reference sidebar is available alongside the YAML editor. Each step type lists the available top-level attributes with short descriptions; for more detailed information on each, you can click through to the relevant documentation page. If you've been using the YAML Steps editor during its beta, then your pipelines have already been migrated and will not require any changes.

###### Compatibility issues

Using YAML Steps changes the order in which environment variables are interpolated. There are two precedence changes:

1. Step-level environment variables will now take precedence over build-level variables (variables set when creating a new build).
1.
Top-level `env` blocks in YAML no longer override step-level environment variables.

For example, in the `pipeline.yml` below, the command will echo `step` after you have migrated to using YAML Steps:

```yaml
env:
  LEVEL: "pipeline"

steps:
  - command: "echo $LEVEL"
    env:
      LEVEL: "step"
```

Prior to migrating to YAML Steps, the command will echo `pipeline`.

##### Using YAML Steps for new pipelines

To use the YAML Steps editor for new pipelines created in your organization, you'll need to opt in on the Pipeline YAML Migration page in Organization Settings. Clicking the button to start using YAML pipelines won't change anything for any existing pipelines in your organization. Migration of existing pipelines will need to be completed either individually or using the bulk migration tool. See [Steps to migrate existing pipelines](#migrating-existing-pipelines) below for further information.

##### Migrating existing pipelines

The migration page can be accessed by organization administrators from the organization **Settings** page. On this page you can see which pipelines have already been migrated and which are yet to migrate.

###### Individual migration

You can migrate each pipeline individually from the Pipeline Settings page. The new YAML steps version of your pipeline will be auto-saved during migration, and will replace the web-based steps. Under the web-based Steps editor is a button to **Convert to YAML Steps**. Click this button to convert your pipeline steps into YAML. If you expect to have compatibility issues with your environment variables, individual migration is recommended.

###### Bulk migration

You can migrate all of the pipelines in your organization at the same time using the **Replace web steps for all pipelines with YAML** button on the migration page. The new YAML steps version of your pipeline will be auto-saved during migration, and will replace the web-based steps.
> 📘
> If you expect to have [compatibility issues](#what-is-the-yaml-steps-editor-compatibility-issues) with your environment variables, migrate your pipelines [individually](#migrating-existing-pipelines-individual-migration).

Click the migrate button to convert the steps of every pipeline in your organization into YAML. After migrating, manually check each pipeline to ensure that the migration has completed successfully.

---

### Using GitHub merge queues

URL: https://buildkite.com/docs/pipelines/tutorials/github-merge-queue

#### Using GitHub merge queues

Merge queues are a GitHub feature that improves development velocity on busy branches. They automate merging for pull requests while protecting the branch from failure due to incompatibilities introduced by different pull requests. Buildkite supports creating builds for pull requests in a GitHub merge queue, and can automatically cancel redundant builds when the composition of the merge queue changes. These builds are uniquely identified in the Buildkite UI, and the behavior of the pipeline can be manipulated based on conditionals and environment variables that identify it as a merge queue build.

##### Before you start

Familiarize yourself with [managing a merge queue in GitHub](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/managing-a-merge-queue).

##### Enable merge queue builds for a pipeline

To enable merge queue builds for a pipeline:

1. From your Buildkite dashboard, select your pipeline.
1. Select **Pipeline Settings** > **GitHub**.
1. In the **GitHub Settings** section, select the **Build merge queues** checkbox.

> 🚧 Ensure GitHub webhook has _Merge groups_ events enabled
> Buildkite relies on receiving `merge_group` webhook events from GitHub to create builds for merge groups in the merge queue.
Ensure your pipeline's [webhook](/docs/pipelines/source-control/github#set-up-a-new-pipeline-for-a-github-repository) has the _Merge groups_ event enabled before enabling merge queue builds. That's it! Your pipeline now supports merge queues in GitHub. 🎉 ##### Understanding merge queue behavior When a GitHub Pull Request (PR) is added to the merge queue, a "merge group" is created. A merge group contains the changes for that PR, along with changes belonging to any PR ahead of it in the merge queue. Each merge group is based on the HEAD commit of the merge group ahead of it in the queue, and the merge group at the front of the queue is based on the HEAD commit of the target branch. The HEAD commit of a merge group is a speculative commit constructed based on the _Merge method_ setting of the merge queue in GitHub. This commit is the exact commit that will end up on the target branch _if_ the merge group is successfully merged into the target branch. ###### Builds created for merge groups Every time GitHub creates a merge group, two webhook events are sent that Buildkite might respond to: 1. A `push` webhook event for the temporary `gh-readonly-queue/*` branch that was created. 1. A `merge_group` webhook event for the merge group. If **Build branches** is enabled for the pipeline, then Buildkite will by default respond to the `push` event by creating a build for the temporary branch. These builds will be no different to a build created for any other branch. 
However, if **Build merge queues** is enabled, the `push` event will be ignored and instead Buildkite will respond to the `merge_group` event by creating a "merge queue build" that captures additional properties about that merge group. These properties are exposed as [conditionals](/docs/pipelines/configure/conditionals#variable-and-syntax-reference) and [environment variables](/docs/pipelines/configure/environment-variables#buildkite-environment-variables) in the build:

| Property   | Conditional                     | Environment variable                |
| ---------- | ------------------------------- | ----------------------------------- |
| `base_sha` | `build.merge_queue.base_commit` | `BUILDKITE_MERGE_QUEUE_BASE_COMMIT` |
| `base_ref` | `build.merge_queue.base_branch` | `BUILDKITE_MERGE_QUEUE_BASE_BRANCH` |
| `head_sha` | `build.commit`                  | `BUILDKITE_COMMIT`                  |
| `head_ref` | `build.branch`                  | `BUILDKITE_BRANCH`                  |

> 📘 Skipping builds
> [Skipping a build](/docs/pipelines/configure/skipping) is not supported for merge queue builds, as GitHub expects every merge group commit to receive a commit status update.
> However, you can still use [conditionals](/docs/pipelines/configure/conditionals#conditionals-in-steps) to prevent steps from running inside of a merge queue build.

###### Listing merge queue builds

Merge queue builds are listed separately at the top of the pipeline page. This listing reflects all builds created for the merge queue; it is not representative of the current state of the merge queue in GitHub. For example, if a pull request is removed from the merge queue, the corresponding build will remain visible in Buildkite.

###### Failing builds in a merge queue

Builds for merge groups can post [commit status updates](/docs/pipelines/source-control/github#customizing-commit-statuses) like any other build. If that commit status is a required check for the merge queue, then a "failing" (or "failed") update will cause GitHub to remove the corresponding pull request from the merge queue.
> 🚧 Behavior may differ based on GitHub merge queue settings
> If you've disabled the _Require all queue entries to pass required checks_ setting on the merge queue in GitHub, then a failing build will not always cause the pull request to be removed from the merge queue immediately.
> Instead, GitHub will first wait to see if the build for any merge group behind it in the queue succeeds. This option is intended to prevent flaky test failures from causing pull requests to be removed from the merge queue unnecessarily.

When a pull request is removed from the queue, any merge groups behind it are "invalidated" and replaced with new merge groups that exclude the removed pull request. This results in a new build being created for each newly created merge group:

###### Automatic cancellation of redundant builds

When a merge group is invalidated, GitHub sends a `merge_group` webhook event that Buildkite can respond to by cancelling any running build for that merge group. Select **Cancel builds for destroyed merge groups** in the pipeline's GitHub settings to enable this behavior.

###### Interaction with if_changed agent behavior

The agent [supports an `if_changed` attribute](/docs/agent/cli/reference/pipeline#apply-if-changed) that allows steps to be conditionally included in a build based on the files changed in the commit range for that build. By default, for merge queue builds, this commit range is the range of commits between the HEAD of the target branch and the HEAD of the merge group the build is for. That means it will also consider file changes from merge groups ahead of the build's merge group in the queue. If your merge queue has the _Require all queue entries to pass required checks_ setting enabled, it is safe for `if_changed` to consider only the file changes belonging to the PR the merge group is for. Select **Use base commit when making `if_changed` comparisons** in the pipeline's GitHub settings to enable this behavior.
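The conditionals and `if_changed` behavior described above can both be expressed in a pipeline definition. A minimal sketch, in which the step labels, commands, and glob pattern are illustrative, and which assumes `build.merge_queue.base_branch` is unset outside merge queue builds:

```yaml
steps:
  # Only include this step when files under frontend/ changed in the
  # build's commit range (glob pattern is illustrative).
  - label: "Frontend tests"
    command: "./scripts/test-frontend.sh"
    if_changed: "frontend/**"

  # Only run this step for merge queue builds (assumes the conditional
  # variable is null for other builds).
  - label: "Merge queue checks"
    command: "./scripts/merge-queue-checks.sh"
    if: build.merge_queue.base_branch != null
```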
###### After merge groups are merged

When a series of merge groups are successfully merged, GitHub fast-forwards the target branch of the queue to the HEAD of the last merge group being merged. GitHub sends a `push` webhook event for the updated target branch, and a build will be created if **Build branches** is enabled in the pipeline's GitHub settings. The creation of this build can be avoided by [skipping existing commits](/docs/pipelines/configure/skipping#skip-builds-with-existing-commits) or applying [branch filtering](/docs/pipelines/configure/workflows/branch-configuration#pipeline-level-branch-filtering).

---

### Triggering Pipelines Using GitHub Actions

URL: https://buildkite.com/docs/pipelines/tutorials/github-actions

#### GitHub Actions

##### Migrating from GitHub Actions to Buildkite

Buildkite Pipelines now supports many of the same GitHub webhook events that GitHub Actions uses as workflow triggers, making incremental migration easier.

###### Webhook event triggers

The following GitHub webhook events can trigger Buildkite Pipelines builds:

- Pull request reviews (`pull_request_review`)
- Pull request review comments (`pull_request_review_comment`)—inline diff comments
- Check runs (`check_run`)
- Releases (`release`)
- Issue comments (`issue_comment`)
- Deployment statuses (`deployment_status`)
- Branch/tag creation (`create`)

###### Expanded pull request actions

Beyond `opened` and `synchronize`, Buildkite Pipelines now supports these pull request actions: `edited`, `reopened`, `labeled`, `unlabeled`, `ready_for_review`, `converted_to_draft`, `review_requested`, and `dequeued`.

###### Conditional variables

Use the following variables to write fine-grained build filters similar to the GitHub Actions `on.<event_name>.types` filtering:

- `build.source_event`: the GitHub webhook event that triggered the build.
- `build.source_action`: the specific action within that event.
- `build.pull_request.label`: the specific label that was just added or removed, so you can filter on exactly which label changed. For full configuration details, see the [GitHub integration docs](/docs/pipelines/source-control/github#running-builds-on-additional-github-events). ##### Triggering pipelines using GitHub Actions [GitHub Actions](https://github.com/features/actions) is a GitHub-based workflow automation platform. You can use the GitHub actions [Trigger Buildkite Pipeline](https://github.com/marketplace/actions/trigger-buildkite-pipeline) to trigger a build on a Buildkite pipeline. The Trigger Buildkite Pipeline GitHub Action allows you to: - Create builds in Buildkite pipelines and set `commit`, `branch`, `message`. - Save the build JSON response to `${HOME}/${GITHUB_ACTION}.json` for downstream actions. Find the Trigger Buildkite Pipeline on [GitHub Marketplace](https://github.com/marketplace) or follow [this link](https://github.com/marketplace/actions/trigger-buildkite-pipeline) directly. ##### Before you start This tutorial assumes some familiarity with GitHub and using GitHub Actions. Learn more about GitHub Actions from their [documentation](https://docs.github.com/en/actions/learn-github-actions). ##### Creating the workflow for Buildkite GitHub Actions 1. If a workflow directory does not exist yet, create the `.github/workflows` directory in your repo to store the workflow files for Buildkite Pipeline Action. 1. Create a [Buildkite API access token](/docs/apis/rest-api#authentication) with `write_builds` [scope](/docs/apis/managing-api-tokens#token-scopes), and save it to your GitHub repository's Settings in Secrets. Learn more about this in [Creating secrets for a repository](https://docs.github.com/en/actions/how-tos/write-workflows/choose-what-workflows-do/use-secrets#creating-secrets-for-a-repository) in GitHub's documentation. 1. Define your GitHub Actions workflow with the details of the pipeline to be triggered. 
To ensure that the latest version is always used, click **Use latest version** on the [Trigger Buildkite Pipeline](https://github.com/marketplace/actions/trigger-buildkite-pipeline) page, then copy and paste the code snippet provided.
1. Configure the workflow by setting the applicable configuration options.

##### Example workflow

The following workflow creates a new Buildkite build on every commit (change `my-org/my-deploy-pipeline` to the slug of your own org and pipeline, and `TRIGGER_BK_BUILD_TOKEN` to the name of the secret you defined):

```yml
on: [push]

jobs:
  trigger-buildkite:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger a Buildkite Build on Push using v2.0.0
        uses: buildkite/trigger-pipeline-action@v2.0.0
        with:
          buildkite_api_access_token: ${{ secrets.TRIGGER_BK_BUILD_TOKEN }}
          pipeline: "my-org/my-deploy-pipeline"
          branch: master
          commit: HEAD
          message: ":buildkite::github: 🚀🚀🚀 Triggered from a GitHub Action"
```

##### Configuring the workflow

Refer to the [action.yml](https://github.com/buildkite/trigger-pipeline-action/blob/master/action.yml) for the required input parameters. See [Trigger-pipeline-action](https://github.com/buildkite/trigger-pipeline-action) for more details, code, or to contribute to or raise an issue for the Buildkite GitHub Action.

---

### Attributing AWS agent costs

URL: https://buildkite.com/docs/pipelines/tutorials/attributing-aws-agent-costs

#### Attributing AWS agent costs using Amazon EventBridge

Buildkite organizations running monorepos often need to attribute agent compute costs to specific teams. However, this is not straightforward—a single build can fan out across multiple self-hosted [queues](/docs/agent/queues), each mapping to a separate [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) instance that may use different EC2 instance types.
This tutorial walks through setting up a data pipeline that ingests Buildkite [Amazon EventBridge](/docs/pipelines/integrations/observability/amazon-eventbridge) events into Amazon S3, making them queryable with Amazon Athena. The result lets you correlate queues to agents, agents to EC2 instances, and job duration to hourly AWS pricing. > 📘 Proof of concept > This tutorial demonstrates feasibility. A production implementation would normalize the EventBridge fields into a proper schema rather than working around them with inline SQL. The Athena queries here are illustrative and would need real hourly pricing data from AWS. ##### Before you start To complete this tutorial, you need: - A Buildkite organization using the [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) with EC2 instances. - An [Amazon EventBridge](/docs/pipelines/integrations/observability/amazon-eventbridge) notification service configured in your Buildkite organization settings. - [Terraform](https://www.terraform.io/) installed locally, and familiarity with using Terraform. - An AWS account with permissions to create [S3 buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html), [Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html), [Amazon Data Firehose delivery streams](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html), [EventBridge rules](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rules.html), and [Glue catalog resources](https://docs.aws.amazon.com/glue/latest/dg/catalog-and-crawler.html). ##### How it works The pipeline streams Buildkite events from EventBridge to S3 through Amazon Data Firehose, making the raw data available to any analytics backend. If you already use ClickHouse, Redshift, or Snowflake data warehouse tools, you can configure these tools to access the same S3 bucket. The data flow is: 1. 
Buildkite publishes `Job Finished`, `Agent Connected`, and `Agent Disconnected` [events](/docs/pipelines/integrations/observability/amazon-eventbridge#events) to the [partner event bus](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-saas.html). 1. EventBridge rules route matching events to an Amazon Data Firehose delivery stream (indicated by its former name Kinesis Firehose in the following diagram). 1. Firehose invokes a transformer Lambda function to append newline delimiters for Athena compatibility. 1. Firehose delivers the transformed records to S3. 1. Glue catalog tables define the schema, letting Athena query the data with SQL. This tutorial captures two event types: - [**Job Finished** events](/docs/pipelines/integrations/observability/amazon-eventbridge#events-job-finished): contain job duration, pipeline slug, and queue (from `agent_query_rules`). These are the primary input for cost attribution. - **Agent [Connected](/docs/pipelines/integrations/observability/amazon-eventbridge#example-event-payloads-agent-connected)/[Disconnected](/docs/pipelines/integrations/observability/amazon-eventbridge#example-event-payloads-agent-disconnected)** events: contain agent `meta_data` such as `aws:instance-id` and `aws:instance-type`. Join these with job events on `agent.uuid` to map jobs to specific EC2 instances and their hourly rates. > 📘 Cost attribution granularity > This approach attributes compute cost at the queue level. Mapping queues to teams requires a lookup table based on your own Elastic CI Stack and queue naming conventions. Individual-level attribution (who triggered the build) is not available from EventBridge events, as the creator field is not included in EventBridge payloads. For individual attribution, the [OpenTelemetry integration](/docs/pipelines/integrations/observability/opentelemetry) may be a better fit, since its events contain the build author. 
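Joining on `agent.uuid` gives each job access to the agent's `meta_data` entries, which arrive as `key=value` strings. A small parsing sketch (the sample values are illustrative):

```python
def parse_meta_data(entries):
    """Split agent meta-data entries like 'queue=default' into a dict."""
    parsed = {}
    for entry in entries:
        # partition splits on the first '=', so values may contain '='
        key, _, value = entry.partition("=")
        parsed[key] = value
    return parsed

meta = ["queue=default", "aws:instance-id=i-0abc123", "aws:instance-type=m5.large"]
print(parse_meta_data(meta)["aws:instance-type"])  # → m5.large
```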
##### Set up the infrastructure with Terraform

Create a new Terraform project directory with the following files.

###### Define variables

Create a `variables.tf` file to define the configurable inputs:

```hcl
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "buildkite_event_bus_arn" {
  description = "ARN of the Buildkite partner event bus"
  type        = string
}

variable "buildkite_event_source" {
  description = "Buildkite partner event source identifier"
  type        = string
}

variable "s3_bucket_name" {
  description = "S3 bucket name for Firehose destination"
  type        = string
  default     = "buildkite-cost-attribution"
}

variable "firehose_stream_name" {
  description = "Amazon Data Firehose delivery stream name"
  type        = string
  default     = "buildkite-cost-attribution"
}

variable "lambda_function_name" {
  description = "Lambda function name for Firehose record transformation"
  type        = string
  default     = "buildkite-firehose-transformer"
}
```

###### Set variable values

Create a `terraform.tfvars` file with your resource names. Replace the placeholder values with your own:

```hcl
buildkite_event_bus_arn = "arn:aws:events:us-east-1:012345678901:event-bus/aws.partner/buildkite.com/your-org/your-uuid"
buildkite_event_source  = "aws.partner/buildkite.com/your-org/your-uuid"
s3_bucket_name          = "your-buildkite-cost-attribution-bucket"
```

> 📘 Finding your event bus ARN
> The partner event bus ARN and event source are created automatically when you [configure the Amazon EventBridge integration](/docs/pipelines/integrations/observability/amazon-eventbridge#configuring) in your Buildkite organization settings.

###### Create the Lambda transformer

Create a `lambda/transformer.py` file. Amazon Data Firehose batches multiple records together, but Athena expects newline-delimited JSON.
This Lambda function appends a newline to each record:

```python
import base64

def lambda_handler(event, context):
    output = []
    for record in event['records']:
        payload = base64.b64decode(record['data'])
        output.append({
            'recordId': record['recordId'],
            'result': 'Ok',
            'data': base64.b64encode(payload + b'\n').decode('utf-8')
        })
    return {'records': output}
```

###### Define the main configuration

Create a `main.tf` file containing the S3 bucket, Lambda function, Amazon Data Firehose delivery stream, EventBridge rules, and Glue catalog resources:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

data "aws_caller_identity" "current" {}

# ── S3 ──────────────────────────────────────────────────────

resource "aws_s3_bucket" "events" {
  bucket = var.s3_bucket_name
}

resource "aws_s3_bucket_public_access_block" "events" {
  bucket                  = aws_s3_bucket.events.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# ── Lambda (Firehose record transformer) ────────────────────

data "archive_file" "transformer" {
  type        = "zip"
  source_file = "${path.module}/lambda/transformer.py"
  output_path = "${path.module}/lambda/transformer.zip"
}

resource "aws_lambda_function" "firehose_transformer" {
  function_name    = var.lambda_function_name
  filename         = data.archive_file.transformer.output_path
  source_code_hash = data.archive_file.transformer.output_base64sha256
  role             = aws_iam_role.lambda.arn
  handler          = "transformer.lambda_handler"
  runtime          = "python3.13"
  timeout          = 60
}

resource "aws_iam_role" "lambda" {
  name = "${var.lambda_function_name}-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "lambda_logs" {
  name = "cloudwatch-logs"
  role = aws_iam_role.lambda.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = "logs:CreateLogGroup"
        Resource = "arn:aws:logs:${var.aws_region}:${data.aws_caller_identity.current.account_id}:*"
      },
      {
        Effect   = "Allow"
        Action   = ["logs:CreateLogStream", "logs:PutLogEvents"]
        Resource = "arn:aws:logs:${var.aws_region}:${data.aws_caller_identity.current.account_id}:log-group:/aws/lambda/${var.lambda_function_name}:*"
      }
    ]
  })
}

# ── Amazon Data Firehose ────────────────────────────────────

resource "aws_kinesis_firehose_delivery_stream" "events" {
  name        = var.firehose_stream_name
  destination = "extended_s3"

  extended_s3_configuration {
    role_arn           = aws_iam_role.firehose.arn
    bucket_arn         = aws_s3_bucket.events.arn
    prefix             = "eventbridge/"
    buffering_size     = 5
    buffering_interval = 60

    processing_configuration {
      enabled = true
      processors {
        type = "Lambda"
        parameters {
          parameter_name  = "LambdaArn"
          parameter_value = "${aws_lambda_function.firehose_transformer.arn}:$LATEST"
        }
        parameters {
          parameter_name  = "BufferSizeInMBs"
          parameter_value = "1"
        }
        parameters {
          parameter_name  = "BufferIntervalInSeconds"
          parameter_value = "60"
        }
      }
    }

    cloudwatch_logging_options {
      enabled         = true
      log_group_name  = "/aws/kinesisfirehose/${var.firehose_stream_name}"
      log_stream_name = "DestinationDelivery"
    }
  }
}

resource "aws_iam_role" "firehose" {
  name = "${var.firehose_stream_name}-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "firehose.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "firehose_s3" {
  name = "s3-delivery"
  role = aws_iam_role.firehose.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:AbortMultipartUpload",
          "s3:GetBucketLocation",
          "s3:GetObject",
          "s3:ListBucket",
          "s3:ListBucketMultipartUploads",
          "s3:PutObject"
        ]
        Resource = [
          aws_s3_bucket.events.arn,
          "${aws_s3_bucket.events.arn}/*"
        ]
      },
      {
        Effect   = "Allow"
        Action   = ["lambda:InvokeFunction", "lambda:GetFunctionConfiguration"]
        Resource = "${aws_lambda_function.firehose_transformer.arn}:*"
      },
      {
        Effect   = "Allow"
        Action   = ["logs:PutLogEvents"]
        Resource = "arn:aws:logs:${var.aws_region}:${data.aws_caller_identity.current.account_id}:log-group:/aws/kinesisfirehose/${var.firehose_stream_name}:*"
      }
    ]
  })
}

# ── EventBridge ─────────────────────────────────────────────
#
# The Buildkite partner event bus is created automatically when
# you configure the AWS EventBridge integration in your
# Buildkite organization settings. Reference it by ARN here.

data "aws_cloudwatch_event_bus" "buildkite" {
  name = var.buildkite_event_bus_arn
}

resource "aws_cloudwatch_event_rule" "agent_connected_disconnected" {
  name           = "AgentConnectedAndDisconnected"
  event_bus_name = data.aws_cloudwatch_event_bus.buildkite.name
  event_pattern = jsonencode({
    source      = [var.buildkite_event_source]
    detail-type = ["Agent Connected", "Agent Disconnected"]
  })
}

resource "aws_cloudwatch_event_rule" "job_finished" {
  name           = "JobFinished"
  event_bus_name = data.aws_cloudwatch_event_bus.buildkite.name
  event_pattern = jsonencode({
    source      = [var.buildkite_event_source]
    detail-type = ["Job Finished"]
    detail = {
      job = { type = ["script"] }
    }
  })
}

resource "aws_cloudwatch_event_target" "agent_to_firehose" {
  rule           = aws_cloudwatch_event_rule.agent_connected_disconnected.name
  event_bus_name = data.aws_cloudwatch_event_bus.buildkite.name
  target_id      = "firehose"
  arn            = aws_kinesis_firehose_delivery_stream.events.arn
  role_arn       = aws_iam_role.eventbridge_to_firehose.arn
}

resource "aws_cloudwatch_event_target" "job_to_firehose" {
  rule           = aws_cloudwatch_event_rule.job_finished.name
  event_bus_name = data.aws_cloudwatch_event_bus.buildkite.name
  target_id      = "firehose"
  arn            = aws_kinesis_firehose_delivery_stream.events.arn
  role_arn       = aws_iam_role.eventbridge_to_firehose.arn
}

resource "aws_iam_role" "eventbridge_to_firehose" {
  name = "eventbridge-to-firehose-buildkite"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "events.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "eventbridge_to_firehose" {
  name = "put-records"
  role = aws_iam_role.eventbridge_to_firehose.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid      = "ActionsForFirehose"
      Effect   = "Allow"
      Action   = ["firehose:PutRecord", "firehose:PutRecordBatch"]
      Resource = [aws_kinesis_firehose_delivery_stream.events.arn]
    }]
  })
}

# ── Athena / Glue ───────────────────────────────────────────

resource "aws_glue_catalog_database" "buildkite" {
  name = "buildkite"
}

resource "aws_glue_catalog_table" "job_finished" {
  name          = "buildkite_job_finished"
  database_name = aws_glue_catalog_database.buildkite.name
  table_type    = "EXTERNAL_TABLE"
  parameters = {
    "classification"  = "json"
    "compressionType" = "none"
  }

  storage_descriptor {
    location      = "s3://${var.s3_bucket_name}/eventbridge/"
    input_format  = "org.apache.hadoop.mapred.TextInputFormat"
    output_format = "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"

    ser_de_info {
      serialization_library = "org.openx.data.jsonserde.JsonSerDe"
      parameters = {
        "mapping.detailtype" = "detail-type"
        "mapping.eventtime"  = "time"
      }
    }

    columns {
      name = "version"
      type = "string"
    }
    columns {
      name = "id"
      type = "string"
    }
    columns {
      name = "detailtype"
      type = "string"
    }
    columns {
      name = "source"
      type = "string"
    }
    columns {
      name = "account"
      type = "string"
    }
    columns {
      name = "eventtime"
      type = "string"
    }
    columns {
      name = "region"
      type = "string"
    }
    columns {
      name = "detail"
      type = "struct,exit_status:int,signal_reason:string,passed:boolean,soft_failed:boolean,state:string,runnable_at:string,started_at:string,finished_at:string>,build:struct,pipeline:struct,organization:struct,agent:struct>"
    }
  }
}

resource "aws_glue_catalog_table" "agent_lifecycle" {
  name          = "buildkite_agent_lifecycle"
  database_name = aws_glue_catalog_database.buildkite.name
  table_type    = "EXTERNAL_TABLE"
  parameters = {
    "classification"  = "json"
    "compressionType" = "none"
  }

  storage_descriptor {
    location      = "s3://${var.s3_bucket_name}/eventbridge/"
    input_format  = "org.apache.hadoop.mapred.TextInputFormat"
    output_format = "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"

    ser_de_info {
      serialization_library = "org.openx.data.jsonserde.JsonSerDe"
      parameters = {
        "mapping.detailtype" = "detail-type"
        "mapping.eventtime"  = "time"
      }
    }

    columns {
      name = "version"
      type = "string"
    }
    columns {
      name = "id"
      type = "string"
    }
    columns {
      name = "detailtype"
      type = "string"
    }
    columns {
      name = "source"
      type = "string"
    }
    columns {
      name = "account"
      type = "string"
    }
    columns {
      name = "eventtime"
      type = "string"
    }
    columns {
      name = "region"
      type = "string"
    }
    columns {
      name = "detail"
      type = "struct,connected_at:string,disconnected_at:string,lost_at:string>,organization:struct,token:struct>"
    }
  }
}

resource "aws_athena_workgroup" "buildkite" {
  name = "buildkite"
  configuration {
    result_configuration {
      output_location = "s3://${var.s3_bucket_name}/athena-results/"
    }
  }
}
```

###### Import existing resources

If your S3 bucket already exists, import it rather than letting Terraform create a new one:

```bash
terraform import aws_s3_bucket.events your-bucket-name
terraform import aws_s3_bucket_public_access_block.events your-bucket-name
```

###### Apply the configuration

Initialize and apply the Terraform configuration:

```bash
terraform init
terraform plan
terraform apply
```

##### Query the data with Athena

Once events start flowing (after builds run and agents connect), you can query the data using the Athena workgroup created by the Terraform configuration.

> 📘 Shared S3 location
> Both Glue tables point to the same S3 location, since all EventBridge event types are delivered together by a single Firehose stream to S3. Use `WHERE detailtype = 'Job Finished'` or `WHERE detailtype = 'Agent Connected'` in your queries to filter to the relevant events.
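Outside Athena, the same newline-delimited layout can be filtered with a few lines of Python. A minimal sketch, using illustrative sample records with fields trimmed down to `detailtype`:

```python
import json

# Two sample newline-delimited records, in the shape the transformer
# Lambda writes to S3 (fields trimmed for illustration).
ndjson = '{"detailtype": "Job Finished"}\n{"detailtype": "Agent Connected"}\n'

events = [json.loads(line) for line in ndjson.splitlines() if line.strip()]
jobs = [e for e in events if e["detailtype"] == "Job Finished"]
print(len(jobs))  # → 1
```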
###### Verify events are arriving

Run a sanity check to confirm events are being ingested:

```sql
SELECT 'job_finished' AS table_name, detailtype, COUNT(*) AS event_count
FROM buildkite.buildkite_job_finished
GROUP BY detailtype

UNION ALL

SELECT 'agent_lifecycle' AS table_name, detailtype, COUNT(*) AS event_count
FROM buildkite.buildkite_agent_lifecycle
GROUP BY detailtype
```

###### Calculate job compute cost per pipeline and queue

The following query joins job events with agent lifecycle events to calculate estimated compute cost. It maps instance types to hourly rates and multiplies by job duration:

```sql
WITH job_agent AS (
  SELECT
    j.detail.pipeline.slug AS pipeline,
    j.detail.job.uuid AS job_uuid,
    j.detail.job.started_at AS started_at,
    j.detail.job.finished_at AS finished_at,
    TRY(split(filter(a.detail.agent.meta_data, m -> m LIKE 'queue=%')[1], '=')[2]) AS queue,
    TRY(split(filter(a.detail.agent.meta_data, m -> m LIKE 'aws:instance-type=%')[1], '=')[2]) AS instance_type,
    TRY(split(filter(a.detail.agent.meta_data, m -> m LIKE 'aws:instance-life-cycle=%')[1], '=')[2]) AS lifecycle
  FROM buildkite.buildkite_job_finished j
  JOIN buildkite.buildkite_agent_lifecycle a
    ON j.detail.agent.uuid = a.detail.agent.uuid
  WHERE j.detailtype = 'Job Finished'
    AND a.detailtype = 'Agent Connected'
    AND j.detail.job.passed = true
)
SELECT
  pipeline,
  queue,
  instance_type,
  lifecycle,
  COUNT(DISTINCT job_uuid) AS job_count,
  SUM(
    date_diff('second',
      parse_datetime(started_at, 'yyyy-MM-dd HH:mm:ss z'),
      parse_datetime(finished_at, 'yyyy-MM-dd HH:mm:ss z')
    )
  ) AS total_job_seconds,
  ROUND(
    SUM(
      date_diff('second',
        parse_datetime(started_at, 'yyyy-MM-dd HH:mm:ss z'),
        parse_datetime(finished_at, 'yyyy-MM-dd HH:mm:ss z')
      )
    ) / 3600.0
    * CASE instance_type
        WHEN 't3.micro'  THEN 0.0104
        WHEN 't3.small'  THEN 0.0208
        WHEN 't3.medium' THEN 0.0416
        WHEN 'm5.large'  THEN 0.096
        WHEN 'm5.xlarge' THEN 0.192
        ELSE 0
      END,
    6
  ) AS estimated_cost_usd
FROM job_agent
GROUP BY pipeline, queue, instance_type, lifecycle
ORDER BY estimated_cost_usd DESC;
```

> 🚧 Replace the hourly rates
> The `CASE` statement in this query uses example EC2 on-demand pricing (for example, `... THEN 0.0104`). Replace these values with your actual rates, including any reserved instance or spot pricing you use. Jobs running on instance types not listed in the `CASE` statement will show `0` for estimated cost.

Example output:

##### Next steps

- To map queues to teams, create a lookup table based on your Elastic CI Stack and queue naming conventions, then join it with the query results.
- For a production implementation, consider moving field extraction upstream so that values like queue, instance type, and lifecycle are written as top-level columns during ingestion rather than parsed from nested JSON at query time.
- To capture more granular cost data, consider adding [OpenTelemetry](/docs/pipelines/integrations/observability/opentelemetry) alongside EventBridge, which includes the build author in its events.
- For details on available EventBridge events and their payloads, see the [Amazon EventBridge integration](/docs/pipelines/integrations/observability/amazon-eventbridge) reference.

---

### Overview

URL: https://buildkite.com/docs/agent

#### The Buildkite agent

The Buildkite agent is a small, reliable, and cross-platform build runner that makes it easy to run automated builds on [your own self-hosted](/docs/agent/self-hosted) or [Buildkite's hosted](/docs/agent/buildkite-hosted) infrastructure. The agent's main responsibilities are polling buildkite.com for work, running a build's jobs, reporting back the status code and output log of the job, and uploading the job's artifacts.
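These responsibilities—poll, accept, run, report—can be sketched as a single cycle. This is an illustrative simplification, not the real agent implementation (which is at https://github.com/buildkite/agent); the callables stand in for the agent's HTTPS API calls:

```python
def poll_once(fetch_job, accept, run_job, report):
    """One iteration of a simplified poll/accept/run/report cycle."""
    job = fetch_job()           # poll Buildkite for available work
    if job is None:
        return None             # nothing to do this round
    accept(job)                 # claim the job
    exit_status = run_job(job)  # run the build script, streaming output
    report(job, exit_status)    # post the final exit status
    return exit_status

# Example with stubs: one pending job whose command "passes" with status 0.
status = poll_once(
    fetch_job=lambda: {"id": "job-1", "command": "echo hello"},
    accept=lambda job: None,
    run_job=lambda job: 0,
    report=lambda job, s: None,
)
print(status)  # → 0
```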
This page provides Buildkite organization administrators with an overview of the [differences between self-hosted and Buildkite hosted agents](#self-hosted-and-buildkite-hosted-agents-compared), [how the Buildkite agent works](/docs/agent#how-it-works), the [agent's lifecycle](#agent-lifecycle), how to [customize the agent's functionality with hooks](#customizing-with-hooks), and the agent's [command line usage](#command-line-usage).

If you're new to Buildkite Pipelines, run through the [Getting started with Pipelines](/docs/pipelines/getting-started) tutorial, which will initially set you up to run [Buildkite hosted agents](/docs/agent/buildkite-hosted). From there, you can decide whether to continue using Buildkite hosted agents, or set yourself up to run [self-hosted agents](/docs/agent/self-hosted).

##### Self-hosted and Buildkite hosted agents compared

The following table lists key feature differences between [self-hosted](/docs/agent/self-hosted) and [Buildkite hosted](/docs/agent/buildkite-hosted) agents. If you are looking to establish, expand, or modify your Buildkite agent infrastructure, this table should help you choose which path or paths to take. In summary:

- _Self-hosted agents_ are suitable when your organization has any of the following requirements:
  * You need full control over your agent infrastructure.
  * Your agents need a lot of customization.
  * You operate under strict security conditions that require your source code and CI/CD build runners (the agents) to be managed on premises or in your own cloud-based infrastructure.
- _Buildkite hosted agents_ run on a fully-managed platform that offers fast and specialized CI/CD build runners, which work well under default conditions. This option lets you get up and running rapidly to build your projects.

| Feature | Self-hosted agents | Buildkite hosted agents |
| ------- | ------------------ | ----------------------- |

##### How it works

The agent works by polling Buildkite's agent API over HTTPS.
There is no need to forward ports or provide incoming firewall access, and the agents can be run across any number of machines and networks.

The agent starts by registering itself with Buildkite, and once registered it's placed into your organization's agent pool. The agent periodically polls the Buildkite platform, looking for new work, waiting to accept an available job. After accepting a build job, the agent executes the command, streaming back the build script's output and then posting the final exit status.

Whilst the job is running you can use the `buildkite-agent meta-data` command to set and get build-wide meta-data, and `buildkite-agent artifact` to upload and download build-wide artifacts. These two commands allow you to have completely isolated build jobs (similar to a 12 factor web application) while still having access to shared state and data storage across any number of machines and networks.

###### Job routing

By default, a pipeline's jobs run on the first available agent associated with the relevant [queues](/docs/agent/queues) that the pipeline's [cluster](/docs/pipelines/security/clusters) is set to. Agents associated with a queue are ordered for selection by how recently these agents successfully completed a job. This takes advantage of warm caches for the fastest possible run time. Learn more about how Buildkite routes jobs to queues in the [Queues overview](/docs/agent/queues) page.

##### Agent lifecycle

The agent goes through several stages during its operation, from starting up and registering with Buildkite, through to polling for and running jobs, and shutting down. For details on signal handling, exit codes, and troubleshooting common lifecycle issues, see the [Agent lifecycle](/docs/agent/lifecycle) page.

##### Customizing with hooks

The agent's behavior can be customized using hooks, which are shell scripts that exist on your build machines or in each pipeline's code repository.
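As a minimal sketch (the exported variable name and value are illustrative, not part of Buildkite's API), an `environment` hook is just a shell script that runs before each job's commands. It can live in the agent's configured hooks directory, or at `.buildkite/hooks/environment` in the pipeline's repository:

```shell
#!/usr/bin/env bash
# Illustrative environment hook: executed before each job's commands run.
set -euo pipefail

# Export a value for the job's commands to use (name and value are made up).
export EXAMPLE_DEPLOY_ENV="staging"

# BUILDKITE_PIPELINE_SLUG is set by the agent during a real job;
# default it here so the sketch also runs outside a job.
echo "environment hook ran for pipeline: ${BUILDKITE_PIPELINE_SLUG:-unknown}"
```

Because hooks are sourced into the job's environment, anything exported here is visible to the job's commands.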
Hooks can be used to set up [secrets](/docs/pipelines/security/secrets/managing) as well as to override default behavior. See the [hooks](/docs/agent/hooks) documentation for full details.

##### Command line usage

The Buildkite agent has a command line interface (CLI) that lets you interact with and control the agent through the command line. For a complete reference of all available commands, see the [Command-line reference](/docs/agent/cli/reference).

---

### Overview

URL: https://buildkite.com/docs/agent/self-hosted

#### Self-hosted agents

Buildkite's self-hosted agents are [Buildkite agents](/docs/agent) that you run in your own self-hosted environment or infrastructure. This infrastructure could be servers that you host on-premises, or in cloud-based services such as AWS, Google Cloud, or Kubernetes. With self-hosted agents, you have control over managing infrastructure tasks, such as provisioning, scaling, security, and maintaining the servers that run your agents.

The following diagram provides an overview of how Buildkite Pipelines, which is a software-as-a-service (SaaS) platform known as the Buildkite platform, interacts with Buildkite agents in your own self-hosted infrastructure.

##### Installation

You can install self-hosted agents on a wide variety of platforms. See the [installation instructions](/docs/agent/self-hosted/install) for a full list and for information on how to get started.

##### Starting the agent

To start a self-hosted agent, you'll need an [agent token](/docs/agent/self-hosted/tokens) associated with one of your Buildkite organization's [clusters](/docs/pipelines/security/clusters), along with a configured [self-hosted queue](/docs/agent/queues) in that cluster. The agent token is passed to the agent using an environment variable or command line flag (with an optional queue tag), and the agent will register itself with the cluster and wait to accept jobs.
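The startup step described above can be sketched as follows — the token value and queue name are placeholders, and the `buildkite-agent` invocation is guarded so the sketch is harmless on machines where the agent isn't installed:

```shell
# Pass the agent token via the environment (placeholder value shown).
export BUILDKITE_AGENT_TOKEN="your-agent-token"

# Start the agent and assign it to a queue using the queue tag.
if command -v buildkite-agent >/dev/null 2>&1; then
  buildkite-agent start --tags "queue=default"
fi
```

The same token and queue tag can instead be set in the agent's configuration file, as described in the next section.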
Learn more about this process in [Assigning a self-hosted agent to a queue](/docs/agent/queues#assigning-a-self-hosted-agent-to-a-queue). ##### Configuration A self-hosted agent has a standard configuration file format on all systems to set meta-data, priority, etc. See the [configuration documentation](/docs/agent/self-hosted/configure) for more details. ##### Experimental features Buildkite frequently introduces new experimental features to the agent, which you can try out on self-hosted agents. See [Agent experiments](/docs/agent/self-hosted/configure/experiments) for the full list of available and promoted experiments. ##### Agent versions directory For a complete list of stable Buildkite agent 3.x versions, see the [Agent versions directory](/docs/agent/self-hosted/versions-directory). When running self-hosted agents, you are responsible for keeping them up to date. --- ### Tokens URL: https://buildkite.com/docs/agent/self-hosted/tokens #### Agent tokens A Buildkite agent running in a [self-hosted architecture](/docs/pipelines/architecture#self-hosted-hybrid-architecture) requires an _agent token_ to connect to Buildkite and register for work. Agent tokens connect to Buildkite via a [cluster](/docs/pipelines/security/clusters), and can be accessed from the cluster's **Agent Tokens** page. A user who is a Buildkite organization administrator or a [maintainer of a cluster](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) within the organization can manage agent tokens for that cluster. If you are managing agents in an unclustered environment, refer to [Working with unclustered agent tokens](/docs/agent/self-hosted/tokens#working-with-unclustered-agent-tokens) instead. 
##### The initial agent token When you create a new organization in Buildkite, and visit the [**Default cluster**](/docs/pipelines/security/clusters/manage#setting-up-clusters) for the first time, an initial agent token is created (called **Initial agent token** within this cluster). This token can be used for testing and development and is only revealed once during this process. It's recommended that you [create new, specific tokens](#create-a-token) for each new environment. ##### Using and storing tokens An agent token is used by the Buildkite agent's [start](/docs/agent/cli/reference/start#starting-an-agent) command, and can be provided on the command line, set in the [configuration file](/docs/agent/self-hosted/configure), or provided using the [environment variable](/docs/pipelines/configure/environment-variables) `BUILDKITE_AGENT_TOKEN`. It's recommended you use your platform's secret storage (such as the [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html)) to allow for easier rollover and management of your agent tokens. ##### Create a token New agent tokens can be created by a [cluster maintainer](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) or Buildkite organization administrator using the [**Agent Tokens** page of a cluster](#create-a-token-using-the-buildkite-interface), as well as Buildkite's [REST API](#create-a-token-using-the-rest-api) or [GraphQL API](#create-a-token-using-the-graphql-api). For these API requests, the _cluster ID_ value submitted as part of the request is the target cluster the token is associated with. > 📘 An agent token's value is only displayed once > As soon as the agent token's value is displayed, copy its value and save it in a secure location. > If you forget to do this, you'll need to create a new token to obtain its value. 
It is possible to create multiple agent tokens (for your Default cluster or any other cluster in your Buildkite organization) using the processes described in this section. ###### Using the Buildkite interface To create an agent token for a cluster using the Buildkite interface: 1. Select **Agents** in the global navigation to access the **Clusters** page. 1. Select the cluster that will be associated with this agent token. 1. Select **Agent Tokens** > **New Token**. 1. In the **Description** field, enter an appropriate description for the agent token. **Note:** The token description should clearly identify the environment the token is intended to be used for (for example, `Read-only token for static site generator`), as it is listed on the **Agent Tokens** page of your specific cluster the agent connects to. This page can be accessed by selecting **Agents** (in the global navigation) > the specific cluster > **Agent Tokens**. 1. If you need to restrict which network addresses are allowed to use this agent token, enter these addresses (using [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing)) into the **Allowed IP Addresses** field. **Note:** Leave this field empty if there is no need to restrict the use of this agent token by network address. Learn more about this feature in [Restrict an agent token's access by IP address](/docs/pipelines/security/clusters/manage#restrict-an-agent-tokens-access-by-ip-address). 1. Select **Create Token**. Follow the instructions to copy and save your token to a secure location and select **Okay, I'm done!**. The new agent token appears on the cluster's **Agent Tokens** page. 
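Following the earlier recommendation to keep agent tokens in your platform's secret storage, here is a sketch of saving a newly created token to the AWS Systems Manager Parameter Store. The parameter name and shell variable are illustrative, and the `aws` call is guarded so the sketch only runs where the CLI (and credentials) are available:

```shell
# Placeholder for the token value you copied from the Buildkite interface.
NEW_AGENT_TOKEN="paste-token-here"

# Store it as an encrypted SecureString parameter (illustrative name).
if command -v aws >/dev/null 2>&1; then
  aws ssm put-parameter \
    --name "/buildkite/agent-token" \
    --type SecureString \
    --value "$NEW_AGENT_TOKEN" \
    --overwrite
fi
```

Your agents can then fetch the parameter at startup instead of baking the token into machine images.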
###### Using the REST API

To [create an agent token](/docs/apis/rest-api/clusters/agent-tokens#create-a-token) using the [REST API](/docs/apis/rest-api), run the following example `curl` command:

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens" \
  -H "Content-Type: application/json" \
  -d '{ "description": "A description", "expires_at": "2026-01-01T00:00:00Z", "allowed_ip_addresses": "0.0.0.0/0" }'
```

where:

- `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite.
- `{org.slug}` can be obtained:
    * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite.
    * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example:

        ```bash
        curl -H "Authorization: Bearer $TOKEN" \
          -X GET "https://api.buildkite.com/v2/organizations"
        ```

- `{cluster.id}` can be obtained:
    * From the **Cluster Settings** page of your target cluster. To do this:
        1. Select **Agents** (in the global navigation) > the specific cluster > **Settings**.
        1. Once on the **Cluster Settings** page, copy the `id` parameter value from the **GraphQL API Integration** section, which is the `{cluster.id}` value.
    * By running the [List clusters](/docs/apis/rest-api/clusters#clusters-list-clusters) REST API query and obtaining this value from the `id` in the response associated with the name of your target cluster (specified by the `name` value in the response).
For example:

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters"
```

- `description` (required) should clearly identify the environment the token is intended to be used for (for example, `Read-only token for static site generator`), as it is listed on the **Agent tokens** page of your specific cluster the agent connects to. To access this page, select **Agents** (in the global navigation) > the specific cluster > **Agent Tokens**.
- `expires_at` (optional) is the date and time at which the token expires, after which agents configured with this token can no longer re-connect to their Buildkite cluster. If not provided, the token will never expire. The timestamp for `expires_at` must be in ISO8601 format (for example, `2025-01-01T00:00:00Z`) and must be at least 10 minutes in the future at the moment of token creation. Once set, `expires_at` cannot be updated.
- `allowed_ip_addresses` (optional) specifies the IP addresses from which agents can use this token to connect to your Buildkite cluster. Use space-separated [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) to enter IP addresses for this field value.

The new agent token appears on the cluster's **Agent Tokens** page.
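As a sketch of producing a compliant `expires_at` value, GNU `date` can emit an ISO8601 UTC timestamp comfortably past the 10-minute minimum (on macOS/BSD, use `date -u -v+60M` instead of the `-d` form):

```shell
# Generate an ISO8601 UTC timestamp 60 minutes from now (GNU date assumed).
EXPIRES_AT=$(date -u -d "+60 minutes" +"%Y-%m-%dT%H:%M:%SZ")
echo "$EXPIRES_AT"
```

The resulting value can be passed directly as the `expires_at` field in the create-token request body.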
###### Using the GraphQL API To [create an agent token](/docs/apis/graphql/cookbooks/clusters#create-agent-token-with-an-expiration-date) using the [GraphQL API](/docs/apis/graphql-api), run the following example mutation: ```graphql mutation { clusterAgentTokenCreate( input: { organizationId: "organization-id" clusterId: "cluster-id" description: "A description" expiresAt: "2026-01-01T00:00:00Z" allowedIpAddresses: "0.0.0.0/0" } ) { clusterAgentToken { id uuid description allowedIpAddresses cluster { id uuid organization { id uuid } } createdBy { id uuid email } } tokenValue } } ``` where: - `organizationId` (required) can be obtained: * From the **GraphQL API Integration** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite. * By running a `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, followed by a [getOrgId](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's `id` using the organization's slug. For example: Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations: ```graphql query getCurrentUsersOrgs { viewer { organizations { edges { node { name slug } } } } } ``` Step 2. Run `getOrgId` with the appropriate slug value above to obtain this organization's `id` in the response: ```graphql query getOrgId { organization(slug: "organization-slug") { id uuid slug } } ``` **Note:** The `organization-slug` value can also be obtained from the end of your Buildkite URL, by selecting **Pipelines** in the global navigation of your organization in Buildkite. - `clusterId` (required) can be obtained: * From the **Cluster Settings** page of your target cluster. To do this: 1. Select **Agents** (in the global navigation) > the specific cluster > **Settings**. 1. 
Once on the **Cluster Settings** page, copy the `cluster` parameter value from the **GraphQL API Integration** section, which is the `cluster.id` value.
    * By running the [List clusters](/docs/apis/graphql/cookbooks/clusters#list-clusters) GraphQL API query and obtaining this value from the `id` in the response associated with the name of your target cluster (specified by the `name` value in the response). For example:

        ```graphql
        query getClusters {
          organization(slug: "organization-slug") {
            clusters(first: 10) {
              edges {
                node {
                  id
                  name
                  uuid
                  color
                  description
                }
              }
            }
          }
        }
        ```

- `description` (required) should clearly identify the environment the token is intended to be used for (for example, `Read-only token for static site generator`), as it is listed on the **Agent tokens** page of your specific cluster the agent connects to. To access this page, select **Agents** (in the global navigation) > the specific cluster > **Agent Tokens**.
- `expiresAt` (optional) is the date and time at which the token expires, after which agents configured with this token can no longer re-connect to their Buildkite cluster. The timestamp for `expiresAt` must be in ISO8601 format (for example, `2025-01-01T00:00:00Z`) and must be at least 10 minutes in the future at the moment of token creation. Once set, `expiresAt` cannot be updated.
- `allowedIpAddresses` (optional) specifies the IP addresses from which agents can use this token to connect to your Buildkite cluster. Use space-separated [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) to enter IP addresses for this field value.

The new agent token appears on the cluster's **Agent Tokens** page.
##### Update a token

Agent tokens can be updated by a [cluster maintainer](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) or Buildkite organization administrator using the [**Agent Tokens** page of a cluster](#update-a-token-using-the-buildkite-interface), as well as Buildkite's [REST API](#update-a-token-using-the-rest-api) or [GraphQL API](#update-a-token-using-the-graphql-api). Only the **Description** and **Allowed IP Addresses** of an existing agent token can be updated. A token's **Expiration date** cannot be updated.

For these API requests, the _cluster ID_ value submitted as part of the request is the target cluster the token is associated with.

###### Using the Buildkite interface

To update a cluster's agent token using the Buildkite interface:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster containing the agent token to update.
1. Select **Agent Tokens** and on this page, expand the agent token to update.
1. Select **Edit** and update the following fields as required:
    * **Description** should clearly identify the environment the token is intended to be used for (for example, `Read-only token for static site generator`), as it is listed on the **Agent tokens** page of your specific cluster the agent connects to. This page can be accessed by selecting **Agents** (in the global navigation) > the specific cluster > **Agent Tokens**.
    * **Allowed IP Addresses** specifies the IP addresses from which agents can use this agent token to connect to Buildkite via your cluster. Use space-separated [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) to enter IP addresses for this field value. Leave this field empty if there is no need to restrict the use of this agent token by network address.
Learn more about this feature in [Restrict an agent token's access by IP address](/docs/pipelines/security/clusters/manage#restrict-an-agent-tokens-access-by-ip-address). 1. Select **Save Token** to save your changes. The agent token's updates will appear on the cluster's **Agent Tokens** page. ###### Using the REST API To [update an agent token](/docs/apis/rest-api/clusters/agent-tokens#update-a-token) using the [REST API](/docs/apis/rest-api), run the following example `curl` command: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens/{id}" \ -H "Content-Type: application/json" \ -d '{ "description": "A description", "allowed_ip_addresses": "202.144.0.0/24 198.51.100.12" }' ``` where: - `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite. - `{org.slug}` can be obtained: * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite. * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations" ``` - `{cluster.id}` can be obtained: * From the **Cluster Settings** page of your target cluster. To do this: 1. Select **Agents** (in the global navigation) > the specific cluster > **Settings**. 1. Once on the **Cluster Settings** page, copy the `id` parameter value from the **GraphQL API Integration** section, which is the `{cluster.id}` value. 
* By running the [List clusters](/docs/apis/rest-api/clusters#clusters-list-clusters) REST API query and obtaining this value from the `id` in the response associated with the name of your target cluster (specified by the `name` value in the response). For example:

    ```bash
    curl -H "Authorization: Bearer $TOKEN" \
      -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters"
    ```

- `{id}` is that of the agent token, whose value can be obtained:
    * From the Buildkite URL path when editing the agent token. To do this:
        - Select **Agents** (in the global navigation) > the specific cluster > **Agent Tokens** > expand the agent token > **Edit**.
        - Copy the ID value between `/tokens/` and `/edit` in the URL.
    * By running the [List tokens](/docs/apis/rest-api/clusters/agent-tokens#list-tokens) REST API query and obtaining this value from the `id` in the response associated with the description of your token (specified by the `description` value in the response). For example:

        ```bash
        curl -H "Authorization: Bearer $TOKEN" \
          -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens"
        ```

- `description` (optional) should clearly identify the environment the token is intended to be used for (for example, `Read-only token for static site generator`), as it is listed on the **Agent tokens** page of your specific cluster the agent connects to. To access this page, select **Agents** (in the global navigation) > the specific cluster > **Agent Tokens**.
- `allowed_ip_addresses` (optional) specifies the IP addresses from which agents can use this token to connect to your Buildkite cluster. Use space-separated [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) to enter IP addresses for this field value. This field can be omitted (where the default value is `0.0.0.0/0`) if there is no need to restrict the use of this agent token by network address, or if you don't need to change the field's current value.
Learn more about this feature in [Restrict an agent token's access by IP address](/docs/pipelines/security/clusters/manage#restrict-an-agent-tokens-access-by-ip-address). ###### Using the GraphQL API To [update an agent token](/docs/apis/graphql/schemas/mutation/clusteragenttokenupdate) using the [GraphQL API](/docs/apis/graphql-api), run the following example mutation: ```graphql mutation { clusterAgentTokenUpdate( input: { organizationId: "organization-id" id: "token-id" description: "A description" allowedIpAddresses: "202.144.0.0/24 198.51.100.12" } ) { clusterAgentToken { id uuid description allowedIpAddresses cluster { id uuid organization { id uuid } } createdBy { id uuid email } } } } ``` where: - `organizationId` (required) can be obtained: * From the **GraphQL API Integration** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite. * By running a `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, followed by a [getOrgId](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's `id` using the organization's slug. For example: Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations: ```graphql query getCurrentUsersOrgs { viewer { organizations { edges { node { name slug } } } } } ``` Step 2. Run `getOrgId` with the appropriate slug value above to obtain this organization's `id` in the response: ```graphql query getOrgId { organization(slug: "organization-slug") { id uuid slug } } ``` **Note:** The `organization-slug` value can also be obtained from the end of your Buildkite URL, by selecting **Pipelines** in the global navigation of your organization in Buildkite. 
- `id` (required) is that of the agent token, whose value can only be obtained using the APIs, by running a [getClustersAgentTokenIds](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's clusters and each of their agent tokens' `id` values in the response. For example:

    ```graphql
    query getClustersAgentTokenIds {
      organization(slug: "organization-slug") {
        clusters(first: 10) {
          edges {
            node {
              name
              id
              agentTokens(first: 10) {
                edges {
                  node {
                    description
                    id
                  }
                }
              }
            }
          }
        }
      }
    }
    ```

- `description` (required) should clearly identify the environment the token is intended to be used for (for example, `Read-only token for static site generator`), as it is listed on the **Agent tokens** page of your specific cluster the agent connects to. To access this page, select **Agents** (in the global navigation) > the specific cluster > **Agent Tokens**. If you do not need to change the existing `description` value, specify the existing field value in the request.
- `allowedIpAddresses` (optional) specifies the IP addresses from which agents can use this token to connect to your Buildkite cluster. Use space-separated [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) to enter IP addresses for this field value. This field can be omitted (where the default value is `0.0.0.0/0`) if there is no need to restrict the use of this agent token by network address, or if you don't need to change the field's current value. Learn more about this feature in [Restrict an agent token's access by IP address](/docs/pipelines/security/clusters/manage#restrict-an-agent-tokens-access-by-ip-address).

The agent token's updates will appear on the cluster's **Agent Tokens** page.
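The mutations and queries in this section can be sent to Buildkite's GraphQL endpoint with `curl`. A sketch, using a deliberately trivial query for illustration — `$GRAPHQL_TOKEN` is a placeholder for an API access token with GraphQL scope, and the call is allowed to fail so the sketch is harmless offline:

```shell
# JSON envelope for a GraphQL request (query abbreviated for illustration).
PAYLOAD='{"query": "query { viewer { user { name } } }"}'

# POST it to Buildkite's GraphQL endpoint.
curl --max-time 5 \
  -H "Authorization: Bearer ${GRAPHQL_TOKEN:-placeholder}" \
  -H "Content-Type: application/json" \
  -X POST "https://graphql.buildkite.com/v1" \
  -d "$PAYLOAD" || true
```

Any of the mutations on this page can be substituted into the `query` field of the JSON payload, with inner double quotes escaped.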
##### Revoke a token Agent tokens can be revoked by a [cluster maintainer](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) or Buildkite organization administrator using the [**Agent Tokens** page of a cluster](#revoke-a-token-using-the-buildkite-interface), as well as Buildkite's [REST API](#revoke-a-token-using-the-rest-api) or [GraphQL API](#revoke-a-token-using-the-graphql-api). For these API requests, the _cluster ID_ value submitted as part of the request is the target cluster the token is associated with. Once a token is revoked, no new agents will be able to start with that token. Revoking a token does not affect any connected agents. ###### Using the Buildkite interface To revoke a cluster's agent token using the Buildkite interface: 1. Select **Agents** in the global navigation to access the **Clusters** page. 1. Select the cluster containing the agent token to revoke. 1. Select **Agent Tokens** and on this page, expand the agent token to revoke. 1. Select **Revoke** > **Revoke Token** in the confirmation message. ###### Using the REST API To [revoke an agent token](/docs/apis/rest-api/clusters/agent-tokens#revoke-a-token) using the [REST API](/docs/apis/rest-api), run the following example `curl` command: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens/{id}" ``` where: - `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite. - `{org.slug}` can be obtained: * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite. * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. 
For example:

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X GET "https://api.buildkite.com/v2/organizations"
```

- `{cluster.id}` can be obtained:
    * From the **Cluster Settings** page of your target cluster. To do this:
        1. Select **Agents** (in the global navigation) > the specific cluster > **Settings**.
        1. Once on the **Cluster Settings** page, copy the `id` parameter value from the **GraphQL API Integration** section, which is the `{cluster.id}` value.
    * By running the [List clusters](/docs/apis/rest-api/clusters#clusters-list-clusters) REST API query and obtaining this value from the `id` in the response associated with the name of your target cluster (specified by the `name` value in the response). For example:

        ```bash
        curl -H "Authorization: Bearer $TOKEN" \
          -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters"
        ```

- `{id}` is that of the agent token, whose value can be obtained:
    * From the Buildkite URL path when editing the agent token. To do this:
        - Select **Agents** (in the global navigation) > the specific cluster > **Agent Tokens** > expand the agent token > **Edit**.
        - Copy the ID value between `/tokens/` and `/edit` in the URL.
    * By running the [List tokens](/docs/apis/rest-api/clusters/agent-tokens#list-tokens) REST API query and obtaining this value from the `id` in the response associated with the description of your token (specified by the `description` value in the response).
For example:

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens"
```

###### Using the GraphQL API

To [revoke an agent token](/docs/apis/graphql/cookbooks/clusters#revoke-an-agent-token) using the [GraphQL API](/docs/apis/graphql-api), run the following example mutation:

```graphql
mutation {
  clusterAgentTokenRevoke(
    input: {
      organizationId: "organization-id"
      id: "token-id"
    }
  ) {
    deletedClusterAgentTokenId
  }
}
```

where:

- `organizationId` (required) can be obtained:
    * From the **GraphQL API Integration** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite.
    * By running a `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, followed by a [getOrgId](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's `id` using the organization's slug. For example:

        Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations:

        ```graphql
        query getCurrentUsersOrgs {
          viewer {
            organizations {
              edges {
                node {
                  name
                  slug
                }
              }
            }
          }
        }
        ```

        Step 2. Run `getOrgId` with the appropriate slug value above to obtain this organization's `id` in the response:

        ```graphql
        query getOrgId {
          organization(slug: "organization-slug") {
            id
            uuid
            slug
          }
        }
        ```

        **Note:** The `organization-slug` value can also be obtained from the end of your Buildkite URL, by selecting **Pipelines** in the global navigation of your organization in Buildkite.

- `id` (required) is that of the agent token, whose value can only be obtained using the APIs, by running a [getClustersAgentTokenIds](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's clusters and each of their agent tokens' `id` values in the response.
For example:

```graphql
query getClustersAgentTokenIds {
  organization(slug: "organization-slug") {
    clusters(first: 10) {
      edges {
        node {
          name
          id
          agentTokens(first: 10) {
            edges {
              node {
                description
                id
              }
            }
          }
        }
      }
    }
  }
}
```

##### Scope of access

An agent token is specific to the cluster it was associated with when created (within a Buildkite organization), and can be used to register an agent with any [queue](/docs/agent/queues) defined in that cluster. Agent tokens cannot be shared between different clusters within an organization, or between different organizations.

##### Agent token lifetime

Agent tokens [created using the Buildkite interface](#create-a-token-using-the-buildkite-interface) do not expire and need to be rotated manually. However, using Buildkite's APIs, you can specify an optional expiration date attribute with a timestamp value in your API call to create an agent token—[`expires_at` using the REST API](#create-a-token-using-the-rest-api) or [`expiresAt` using the GraphQL API](#create-a-token-using-the-graphql-api).

The ability to set an expiration timestamp on an agent token is a security compliance and token lifecycle management feature, which allows you to implement an automated token rotation process using the Buildkite API, replacing any previous, more manual rotation processes for long-lived agent tokens. Note that existing agent tokens will continue to work without expiration, unless they are manually revoked.

There is no maximum expiration duration for an agent token, although a minimum of 10 minutes from the current time is required. After an agent token has expired, it is displayed with the following message in the Buildkite interface:

⚠️ **This token expired on ...**

An expired agent token will prevent agents configured with this token from re-connecting to their Buildkite cluster. However, agents currently connected to their cluster at the time of expiration won't be affected.
> 📘 Agent token expiration format
> The timestamp must be set in ISO 8601 format (for example, `2025-01-01T00:00:00Z`). This timestamp value cannot be changed on an existing agent token. An error is returned if an attempt is made to update the expiration date field of an existing agent token.

##### Additional agent tokens

In addition to the agent token, the Buildkite agent automatically generates and manages two internal types of tokens during its operation—[session tokens](#additional-agent-tokens-session-tokens) and [job tokens](#additional-agent-tokens-job-tokens).

###### Session tokens

Session tokens are internal tokens that last for the lifetime of the agent connection. They are used by the agent to request and start new jobs, and remain valid until the agent disconnects from Buildkite.

###### Job tokens

Job tokens are internal agent access tokens that are generated for each individual job when it starts. They are exposed to the job as the [environment variable](/docs/pipelines/configure/environment-variables) `BUILDKITE_AGENT_ACCESS_TOKEN` and are used by the Buildkite agent's local Job API, which provides access to various CLI commands (including [annotate](/docs/agent/cli/reference/annotate), [artifact](/docs/agent/cli/reference/artifact), [meta-data](/docs/agent/cli/reference/meta-data), and [pipeline](/docs/agent/cli/reference/pipeline) commands).

Job tokens are scoped to a single job for security reasons, limiting both the duration and the scope of access, and are valid until the job finishes. You can set a default or maximum [command timeout](/docs/pipelines/configure/build-timeouts#command-timeouts) to further scope the lifetime of job tokens.

###### Token exchange process

When an agent starts, it follows this token exchange process:

1. The agent connects to the Buildkite agent API to register itself using its configured _agent token_ (`BUILDKITE_AGENT_TOKEN`).
1. The Agent API generates and returns a [session token](#additional-agent-tokens-session-tokens) to the agent.
1. The agent uses this session token to poll for available jobs and manage its connection to Buildkite.
1. When the agent accepts a job, Buildkite generates a [job token](#additional-agent-tokens-job-tokens) specific to that job.

| Token type | Generated by | Use | Lifetime |
| ---------- | ------------ | --- | -------- |
| Agent token | Buildkite organization admin or cluster maintainer | Initial agent registration and authentication. | Forever, unless an expiration date is set during creation with the GraphQL or REST API, or the token is manually revoked. |
| Session token (internal) | Buildkite agent API during registration | Agent lifecycle APIs, polling for jobs, and starting jobs. | Until the agent disconnects. |
| Job token (internal) | Buildkite agent API when job is accepted | Local Job API access for CLI commands (including [annotate](/docs/agent/cli/reference/annotate), [artifact](/docs/agent/cli/reference/artifact), [meta-data](/docs/agent/cli/reference/meta-data), and [pipeline](/docs/agent/cli/reference/pipeline) commands). | Until the job finishes. |

> 📘 Job tokens are not supported in agents prior to v3.39.0
> Agents prior to v3.39.0 use the session token for the `BUILDKITE_AGENT_ACCESS_TOKEN` environment variable and the job APIs.

##### Working with unclustered agent tokens

> 🚧 This section documents a deprecated Buildkite feature
> _It is not possible to create and work with unclustered agents for any Buildkite organizations created after the official release of clusters on February 26, 2024._ Therefore, unclustered agent tokens are not relevant to these organizations, including this section of the Agent tokens page.
>
> Previously, agents only connected directly to Buildkite using a token that was created and managed by the processes described in this page section. These tokens are now a deprecated feature of Buildkite, and are referred to as _unclustered agent tokens_. Unclustered agent tokens, however, are still available to customers who have not yet migrated their pipelines to a [cluster](/docs/pipelines/security/clusters).
>
> _Agent tokens_ are now associated with clusters, and connect to Buildkite through a specific cluster within an organization. Learn more about how to manage agent tokens for clusters from the top of this main [Agent tokens](/docs/agent/self-hosted/tokens) page, and how to [migrate your unclustered agents across to a cluster](/docs/pipelines/security/clusters/migrate-from-unclustered-to-clustered-agents).

Any Buildkite organization created before February 26, 2024 has an **Unclustered** area for managing _unclustered agents_, accessible through **Agents** (from the global navigation) > **Unclustered** of the Buildkite interface, where an _unclustered agent_ refers to any agent that is not associated with a cluster.

A Buildkite agent requires a token to connect to Buildkite and register for work. If you need to connect an _unclustered agent_ to Buildkite, then you need to create an _unclustered agent token_ to do so.

###### The default token

Your Buildkite organization's unclustered agent tokens page, accessible through **Agents** (from the global navigation) > **Unclustered** > **Agent Tokens**, may have the **Default agent registration token**, which is the original default token created with your organization. If you previously saved this token's value in a safe place, it can be used for testing and development. However, it's recommended that you [create new, specific tokens](#working-with-unclustered-agent-tokens-create-a-token) for each new environment.

###### Using and storing tokens

The requirements for using and storing unclustered agent tokens are similar to those for [agent tokens associated with a cluster](#using-and-storing-tokens).

###### Create a token

New unclustered agent tokens can be created using the [GraphQL API](/docs/apis/graphql-api) with the `agentTokenCreate` mutation.
For example:

```graphql
mutation {
  agentTokenCreate(input: {
    organizationID: "organization-id",
    description: "A description"
  }) {
    tokenValue
    agentTokenEdge {
      node {
        id
      }
    }
  }
}
```

> 📘 An unclustered agent token's value is only displayed once
> As soon as the unclustered agent token's value is displayed, copy its value and save it in a secure location.
> If you forget to do this, you'll need to create a new token to obtain its value.

You can find your `organization-id` in your Buildkite organization settings page, or by running the following GraphQL query:

```graphql
query GetOrgID {
  organization(slug: "organization-slug") {
    id
  }
}
```

The token description should clearly identify the environment the token is intended to be used for (for example, `Read-only token for static site generator`), and is listed on the **Agent tokens** page of the **Agents** (from the global navigation) > **Unclustered** area. It is possible to create multiple unclustered agent tokens using the GraphQL API.

###### Revoke a token

Unclustered agent tokens can be revoked using the [GraphQL API](/docs/apis/graphql/cookbooks/agents#revoke-an-unclustered-agent-token) with the `agentTokenRevoke` mutation. You need to pass your unclustered agent token's ID in the mutation.

First, retrieve a list of agent token IDs using this query:

```graphql
query GetAgentTokenID {
  organization(slug: "organization-slug") {
    agentTokens(first: 50) {
      edges {
        node {
          id
          uuid
          description
        }
      }
    }
  }
}
```

Then, using the token ID, revoke the unclustered agent token:

```graphql
mutation {
  agentTokenRevoke(input: {
    id: "token-id",
    reason: "A reason"
  }) {
    agentToken {
      description
      revokedAt
      revokedReason
    }
  }
}
```

Once a token is revoked, no new agents will be able to start with that token. Revoking a token does not affect any connected agents.
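These GraphQL calls can also be scripted. The following is a hedged sketch that assembles the `agentTokenRevoke` mutation as a JSON payload for `https://graphql.buildkite.com/v1` (Buildkite's GraphQL endpoint); `GRAPHQL_TOKEN` and `token-id` are placeholders, and the request is printed rather than sent:

```shell
# Build the revoke mutation and escape its quotes for embedding in a JSON body.
QUERY='mutation { agentTokenRevoke(input: { id: "token-id", reason: "Rotating credentials" }) { agentToken { revokedAt revokedReason } } }'
ESCAPED=$(printf '%s' "$QUERY" | sed 's/"/\\"/g')

# Print the curl invocation instead of executing it (no credentials here).
cat <<EOF
curl -H "Authorization: Bearer \$GRAPHQL_TOKEN" \\
  -X POST "https://graphql.buildkite.com/v1" \\
  -d '{"query": "$ESCAPED"}'
EOF
```

The same pattern works for the `agentTokenCreate` mutation above by swapping the `QUERY` string.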
###### Scope of access

Unclustered agent tokens are specific to each Buildkite organization (created before February 26, 2024), and can be used to register an agent with any [unclustered queue](/docs/agent/queues#setting-up-queues-for-unclustered-agents). Unclustered agent tokens cannot be shared between Buildkite organizations.

###### Additional agent tokens

In addition to the unclustered agent token (and as is the case for [agent tokens associated with a cluster](#additional-agent-tokens)), the Buildkite agent automatically generates and manages two internal types of tokens during its operation—[session tokens](#additional-agent-tokens-session-tokens) and [job tokens](#additional-agent-tokens-job-tokens).

###### Token exchange process

When an agent starts, it follows this token exchange process:

1. The agent connects to the Buildkite agent API to register itself using its configured _unclustered agent token_ (`BUILDKITE_AGENT_TOKEN`).
1. The Agent API generates and returns a _session token_ to the agent.
1. The agent uses this session token to poll for available jobs and manage its connection to Buildkite.
1. When the agent accepts a job, Buildkite generates a _job token_ specific to that job.

| Token type | Generated by | Use | Lifetime |
| ---------- | ------------ | --- | -------- |
| Unclustered agent token | Buildkite organization administrator | Initial agent registration and authentication. | Forever, unless manually revoked. |
| Session token (internal) | Buildkite agent API during registration | Agent lifecycle APIs, polling for jobs, and starting jobs. | Until the agent disconnects. |
| Job token (internal) | Buildkite agent API when job is accepted | Local Job API access for CLI commands (including [annotate](/docs/agent/cli/reference/annotate), [artifact](/docs/agent/cli/reference/artifact), [meta-data](/docs/agent/cli/reference/meta-data), and [pipeline](/docs/agent/cli/reference/pipeline) commands). | Until the job finishes. |
> 📘 Job tokens are not supported in agents prior to v3.39.0
> Agents prior to v3.39.0 use the session token for the `BUILDKITE_AGENT_ACCESS_TOKEN` environment variable and the job APIs.

---

### Code access

URL: https://buildkite.com/docs/agent/self-hosted/code-access

#### Self-hosted agent code access

If your agent needs to clone your repositories using Git and SSH, you'll need to configure it with a valid SSH key. This page explains how to configure your agent with valid SSH keys to access the code in your repositories, as well as [SSH keys for GitHub](#ssh-keys-for-github).

##### Finding your SSH key directory

When the Buildkite agent runs any Git operations, it looks for SSH keys in `~/.ssh` under the user the agent is running as. Each platform's [agent installation documentation](/docs/agent/self-hosted/install) specifies which user the agent runs as and in which directory the SSH keys are. For example, on Debian the agent runs as `buildkite-agent` and the SSH keys are in `/var/lib/buildkite-agent/.ssh/`, but on macOS the agent runs as the user who started the `launchd` service, and the SSH keys are in that user's `.ssh` directory.

##### Debugging SSH key issues

To help debug SSH issues, you can enable verbose logging by running your build with the following environment variable set:

```bash
GIT_SSH_COMMAND="ssh -vvv"
```

##### Creating a single SSH key

The following shows an example of creating a new "machine user" SSH key for an agent:

```bash
$ sudo su buildkite-agent # or whichever user your agent runs as
$ mkdir -p ~/.ssh && cd ~/.ssh
$ ssh-keygen -t rsa -b 4096 -C "dev+build@myorg.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/buildkite-agent/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/buildkite-agent/.ssh/id_rsa.
Your public key has been saved in /var/lib/buildkite-agent/.ssh/id_rsa.pub.
The key fingerprint is:
4b:6f:7b:5f:8e:f7:5b:c1:fa:e3:dd:9a:8e:a8:e8:33 dev@org.com
The key's randomart image is:
+---[RSA 4096]----+
|                 |
|                 |
|                 |
|       .         |
|      S o        |
|     . o .      .|
|      . o       o|
|  E. . o    ...*=|
|   .oo..o  ..oB*O|
+-----------------+
$ ls
id_rsa  id_rsa.pub
$ cat id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDELESv1QGfoZ2hECJr.......Yho9hDPoNefDbcdZM4NdKWTVmyNGQo6YTzw== dev+build@myorg.com
```

You'd then add this key to the user's settings on [GitHub](#ssh-keys-for-github), Bitbucket, GitLab, etc.

##### Creating multiple SSH keys

If you need to use multiple SSH keys for different pipelines, we support a special repository hostname format which you can use with your `~/.ssh/config`.

To use a different key for a given pipeline, first change the repository hostname in your Buildkite pipeline settings from `server.com` to `server.com-mypipeline`, add an entry to the SSH config file on your agent machine for the host `server.com-mypipeline`, and specify your custom SSH key. For example, if you had a pipeline repository URL of `git@github.com:org/pipeline-1.git`, you would change it in your Buildkite repository settings to `git@github.com-pipeline-1:org/pipeline-1.git` and create the following SSH config file:

```
Host github.com-pipeline-1
  HostName github.com
  IdentityFile /var/lib/buildkite-agent/.ssh/id_rsa.pipeline-1
```

The following example shows how to create the corresponding pipeline-specific SSH key:

```bash
$ sudo su buildkite-agent # or whichever user your agent runs as
$ cd ~/.ssh
$ ssh-keygen -t rsa -b 4096 -C "dev+build-pipeline-1@myorg.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/buildkite-agent/.ssh/id_rsa): id_rsa.pipeline-1
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in id_rsa.pipeline-1.
Your public key has been saved in id_rsa.pipeline-1.pub.
The key fingerprint is:
e4:60:69:a3:a0:63:bb:27:e6:ff:53:d3:4a:06:7f:e4 dev@org.com
The key's randomart image is:
+---[RSA 4096]----+
|                 |
|        .        |
|       . * .     |
|      . . = = .  |
|o.  .    o S     |
|...  *  E        |
| .    +  +       |
|  o..  .  .      |
|oo+....          |
+-----------------+
$ ls
id_rsa.pipeline-1  id_rsa.pipeline-1.pub
```

Alternatively, you can use a shorter approach to creating multiple SSH keys by adding pipeline-specific environments:

> 📘
> Note that if you are using Elastic CI Stack for AWS, the following approach is redundant as the stack creates a [build secrets bucket](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/security#s3-secrets-bucket) and allows you to specify an SSH key per pipeline as `/{pipeline-slug}/private_ssh_key`.

1. Add a pipeline-specific environment (for example, by using [Elastic CI Stack for AWS's build secrets bucket](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/security#s3-secrets-bucket) or by having an agent `environment` hook that switches on the repository URL or the pipeline slug):

    ```bash
    GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa_mypipeline"
    ```

1. Create an identity file at that location:

    ```bash
    ~/.ssh/id_rsa_mypipeline
    ```

1. Add the public key for that identity file to `mypipeline` on the Git repository provider.

##### Using multiple keys with ssh-agent

If you need to use multiple keys, or want to use keys with passphrases, an alternative to the hostname method above is to use `ssh-agent`. After starting an `ssh-agent` process and adding the keys, ensure the `SSH_AUTH_SOCK` environment variable is exported by your [`environment` hook](/docs/agent/hooks#job-lifecycle-hooks).
For example, if you set up `ssh-agent` like so:

```bash
$ sudo su buildkite-agent
$ ssh-agent -a ~/.ssh/ssh-agent.sock
$ export SSH_AUTH_SOCK=/var/lib/buildkite-agent/.ssh/ssh-agent.sock
$ ssh-add ~/.ssh/id_rsa-pipeline-1
Identity added: /var/lib/buildkite-agent/.ssh/id_rsa-pipeline-1
$ ssh-add ~/.ssh/id_rsa-pipeline-2
Identity added: /var/lib/buildkite-agent/.ssh/id_rsa-pipeline-2
```

The following [`environment` hook](/docs/agent/hooks#job-lifecycle-hooks) will direct your build's Git operations to use the `ssh-agent` socket:

```bash
#!/bin/bash
set -eu

export SSH_AUTH_SOCK="/var/lib/buildkite-agent/.ssh/ssh-agent.sock"
```

##### SSH keys for GitHub

The Buildkite agent clones your source code directly from GitHub or GitHub Enterprise. The easiest way to provide it with access is to create a "Buildkite agent" machine user in your organization, and add it to a team that has access to the relevant repositories.

> 📘
> If you're running a build agent on a local development machine which already has access to GitHub, then you can skip this setup and start running builds.

###### Method 1: Machine user

Creating a [machine user](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/managing-deploy-keys#machine-users) is the simplest way to create a single SSH key which provides access to your organization's repositories. To set up a GitHub machine user:

1. On your agent machine, generate a key as per the [Creating a single SSH key](#creating-a-single-ssh-key) instructions.
1. Sign up to GitHub as a new user (using a valid email address), and add the SSH key to the user's settings.
1. Sign back into GitHub as an organization admin, create a new team, then add the new user and any required repositories to the team.

###### Method 2: Deploy keys

An alternative method of providing access to your repositories is to use deploy keys. The advantage of deploy keys is that they can provide read-only access to your source code; the disadvantage is that you'll have to configure SSH on your build agents to handle multiple keys.

To set up GitHub deploy keys with the Buildkite agent, you'll need to do the following for each repository:

1. On your agent machine, generate a key as per the [Creating multiple SSH keys](#creating-multiple-ssh-keys) instructions.
1. In GitHub, copy the key into the repository's "Deploy keys" settings.

---

### Overview

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s

#### Agent Stack for Kubernetes overview

The Buildkite Agent Stack for Kubernetes (`agent-stack-k8s`) is a Kubernetes [controller](https://kubernetes.io/docs/concepts/architecture/controller/) that uses Buildkite's [Agent API](/docs/apis/agent-api) to watch for scheduled jobs assigned to the controller's queue.

##### Architecture

When a matching job is returned from the Agent REST API, the controller creates a Kubernetes Job containing a single Pod, with containers that will acquire and run the Buildkite job. The Job contains a [PodSpec](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec) that defines all the containers required to acquire and run a Buildkite job:

- Adding an init container to:
    * Copy the agent binary onto the workspace volume (`copy-agent`).
    * Check that other container images pull successfully before starting (`imagecheck`).
- Adding a container to run the Buildkite agent (`agent`).
- Adding a container to clone the source repository (`checkout`).
- Modifying the user-specified containers (`container-N`) to:
    * Overwrite the entrypoint to the agent binary.
    * Run with the working directory set to the workspace.

> 📘
> The Agent Stack for Kubernetes controller works with the Agent API in version 0.28.0 and later of the controller. Earlier versions of the controller work with the GraphQL API.
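The containers listed above can be sketched as a simplified Job spec. This is illustrative only — the controller generates the real object, and the images shown for user containers are assumptions:

```yaml
apiVersion: batch/v1
kind: Job
spec:
  template:
    spec:
      initContainers:
        - name: copy-agent   # copies the agent binary onto the workspace volume
          image: buildkite/agent:latest
        - name: imagecheck-0 # verifies user-specified images can be pulled
          image: example/build-image:latest
      containers:
        - name: agent        # runs the Buildkite agent
          image: buildkite/agent:latest
        - name: checkout     # clones the source repository
          image: buildkite/agent:latest
        - name: container-0  # user-specified step container; entrypoint overridden
          image: example/build-image:latest
          workingDir: /workspace
```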
##### Before you start

- A Kubernetes cluster running version 1.29.0 or later (required for Buildkite Agent Stack for Kubernetes version 0.35.0 and later, which uses native sidecar containers). For older Kubernetes clusters, use Agent Stack for Kubernetes version 0.34.0 or older.
- A [Buildkite cluster](/docs/pipelines/security/clusters/manage) and an [agent token](/docs/agent/self-hosted/tokens#create-a-token) for this cluster.
- (Optional) Create a unique [self-hosted queue](/docs/agent/queues/managing#create-a-self-hosted-queue) for this Buildkite cluster.
    * If [queue tags are not explicitly specified when the agent is started](/docs/agent/queues#assigning-a-self-hosted-agent-to-a-queue), then the controller will pull jobs from the [default queue](/docs/agent/queues#assigning-a-self-hosted-agent-to-a-queue-the-default-self-hosted-queue). You can give the queue whatever name suits your requirements; the controller queries the API for scheduled jobs assigned to that queue. However, the examples used throughout this documentation assume a queue named **kubernetes**.
- Helm version v3.8.0 or newer (as support for OCI-based registries is required).
- If working with a version of the Agent Stack for Kubernetes controller prior to 0.28.0, a [Buildkite API access token with the GraphQL scope enabled](/docs/apis/graphql-api#authentication).

> 📘 A note on using GraphQL API tokens
> Since the Agent Stack for Kubernetes controller version 0.28.0 and later works with the [Agent REST API](/docs/apis/agent-api), the Buildkite GraphQL API is no longer used. Additionally, the organization slug and cluster UUID can be inferred from the agent token. Therefore, if you are upgrading from an older version of the controller to its current version, your Buildkite API access token with the GraphQL scope enabled, organization slug, and cluster UUID can all be safely removed from your configuration or Kubernetes Secret. Only an [agent token](/docs/agent/self-hosted/tokens#create-a-token) for your Buildkite cluster is required.

##### Get started with the Agent Stack for Kubernetes

Learn more about how to set up the Buildkite Agent Stack for Kubernetes on the [Installation](/docs/agent/self-hosted/agent-stack-k8s/installation) page.

##### Development and contributing

Since the Buildkite Agent Stack for Kubernetes is open source, you can make your own contributions to this project. Learn more about how to do this in [Agent Stack K8s Development](https://github.com/buildkite/agent-stack-k8s/blob/main/DEVELOPMENT.md).

---

### Installation

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/installation

#### Installation

Before proceeding, ensure that you have met the [prerequisites](/docs/agent/self-hosted/agent-stack-k8s#before-you-start) for the Buildkite Agent Stack for Kubernetes controller.

> 🚧
> Starting with version 0.29.0 of the controller, [unclustered agent tokens](/docs/agent/self-hosted/tokens#working-with-unclustered-agent-tokens) are no longer supported. The Buildkite Agent Stack for Kubernetes requires a [Buildkite cluster](/docs/pipelines/security/clusters/manage) and an [agent token](/docs/agent/self-hosted/tokens#create-a-token) for this cluster in order to process Buildkite jobs.

The recommended way to install the Buildkite Agent Stack for Kubernetes controller is to deploy a [Helm](https://helm.sh) chart by running the following command with your appropriate configuration values:

```bash
helm upgrade --install agent-stack-k8s oci://ghcr.io/buildkite/helm/agent-stack-k8s \
  --namespace buildkite \
  --create-namespace \
  --set agentToken=<agent-token>
```

> 📘
> Versions 0.28.1 and earlier of the Agent Stack for Kubernetes controller also require you to specify a queue, using an argument such as `--set-json='config.tags=["queue=arm64"]'`. If you do not specify a queue, then the queue name is assumed to be `kubernetes`.
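Putting the note above into practice, the following sketch assembles the full install command for controller versions 0.28.1 and earlier, with the queue tag supplied explicitly. The token value is a placeholder, and the command is printed rather than run (it needs a live cluster and Helm v3.8.0+):

```shell
# Placeholder token -- substitute your cluster's agent token.
AGENT_TOKEN="${AGENT_TOKEN:-<your-cluster-agent-token>}"

# Assemble and print the install command for controllers <= 0.28.1.
CMD="helm upgrade --install agent-stack-k8s oci://ghcr.io/buildkite/helm/agent-stack-k8s \\
  --namespace buildkite \\
  --create-namespace \\
  --set agentToken=$AGENT_TOKEN \\
  --set-json='config.tags=[\"queue=kubernetes\"]'"
echo "$CMD"
```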
Alternatively, you can place these configuration values into a YAML configuration file by creating the YAML file in this format:

```yaml
# values.yml
agentToken: ""
# Optionally:
# config:
#   tags:
#     - queue=some-queue
```

> 📘
> If using version 0.27.0 and earlier of the Agent Stack for Kubernetes controller, see [Early versions of the controller](#early-versions-of-the-controller) (below) for details on additional configuration requirements.

Next, deploy the Helm chart, referencing the configuration values in the YAML file you've created:

```bash
helm upgrade --install agent-stack-k8s oci://ghcr.io/buildkite/helm/agent-stack-k8s \
  --namespace buildkite \
  --create-namespace \
  --values values.yml
```

Both of these deployment methods:

- Create a Kubernetes deployment in the `buildkite` namespace with a single Pod containing the `controller` container.
    * The `buildkite` namespace is created if it does not already exist in the Kubernetes cluster.
- Use the provided `agentToken` to query the Buildkite agent API looking for jobs:
    * In your Buildkite organization (associated with the `agentToken`).
    * Assigned to the [default queue](/docs/agent/queues#assigning-a-self-hosted-agent-to-a-queue-the-default-self-hosted-queue) in your Buildkite cluster (associated with the `agentToken`).

##### Early versions of the controller

Versions 0.27.0 and earlier of the Agent Stack for Kubernetes controller also require you to specify a [Buildkite API access token with the GraphQL scope enabled](/docs/apis/graphql-api#authentication), the organization slug, and the cluster UUID as additional top-level configuration. For example, in the `values.yml` file:

```yaml
graphqlToken: ""
config:
  org: ""
  cluster-uuid: ""
```

To find the Buildkite cluster UUID from the Buildkite interface:

1. Select **Agents** in the global navigation to access your Buildkite organization's [**Clusters** page](https://buildkite.com/organizations/-/clusters).
1. Select the cluster containing your configured queue.
1. Select **Settings**.
1. On the **Cluster Settings** page, scroll down to the **GraphQL API Integration** section. Your Buildkite cluster's UUID is shown as the `id` parameter value.

##### Storing Buildkite tokens in a Kubernetes Secret

If you prefer to self-manage a Kubernetes Secret containing the agent token instead of allowing the Helm chart to create a secret automatically, the Buildkite Agent Stack for Kubernetes controller can reference a custom secret. Here is how a custom secret can be created:

```bash
kubectl create namespace buildkite
kubectl create secret generic <kubernetes-secret-name> -n buildkite \
  --from-literal=BUILDKITE_AGENT_TOKEN=''
```

This Kubernetes Secret name can be provided to the controller with the `agentStackSecret` option, replacing the `agentToken` option. You can then reference your Kubernetes Secret by name during Helm chart deployments.

To reference your Kubernetes Secret when setting up the Buildkite Agent Stack for Kubernetes controller, run the Helm chart deployment command with your appropriate configuration values:

```bash
helm upgrade --install agent-stack-k8s oci://ghcr.io/buildkite/helm/agent-stack-k8s \
  --namespace buildkite \
  --create-namespace \
  --set agentStackSecret=<kubernetes-secret-name> \
  --set-json='config.tags=["queue=kubernetes"]'
```

Alternatively, to reference your Kubernetes Secret with your configuration values in a YAML file, create the YAML file in this format:

```yaml
# values.yml
agentStackSecret: ""
config:
  tags:
    - queue=kubernetes
```

Next, deploy the Helm chart, referencing the configuration values in the YAML file you've created:

```bash
helm upgrade --install agent-stack-k8s oci://ghcr.io/buildkite/helm/agent-stack-k8s \
  --namespace buildkite \
  --create-namespace \
  --values values.yml
```

##### Other installation methods

You can also use the following chart as a dependency:

```yaml
dependencies:
  - name: agent-stack-k8s
    version: "0.28.0"
    repository: "oci://ghcr.io/buildkite/helm"
```
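For context, the dependency block above lives in the `Chart.yaml` of your own wrapper chart. A minimal sketch — the chart name and version here are assumptions:

```yaml
# Chart.yaml -- a wrapper chart that pulls in agent-stack-k8s
apiVersion: v2
name: my-ci-platform
version: 0.1.0
dependencies:
  - name: agent-stack-k8s
    version: "0.28.0"
    repository: "oci://ghcr.io/buildkite/helm"
```

Run `helm dependency update` to fetch the chart into your `charts/` directory before installing.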
Alternatively, you can also use this chart as a Helm [template](https://helm.sh/docs/chart_best_practices/templates/):

```bash
helm template oci://ghcr.io/buildkite/helm/agent-stack-k8s --values values.yaml
```

The latest and earlier versions (with digests) of the Buildkite Agent Stack for Kubernetes controller can be found under [Releases](https://github.com/buildkite/agent-stack-k8s/releases) in the Buildkite Agent Stack for Kubernetes controller [GitHub repository](https://github.com/buildkite/agent-stack-k8s/).

##### Controller configuration

Learn more about detailed configuration options in [Controller configuration](/docs/agent/self-hosted/agent-stack-k8s/controller-configuration).

##### Running builds

After the Buildkite Agent Stack for Kubernetes controller has been configured and deployed, you are ready to [run a Buildkite build](/docs/agent/self-hosted/agent-stack-k8s/running-builds).

---

### Git credentials

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/git-credentials

#### Git credentials

As with a standalone [Buildkite agent installation](/docs/agent/self-hosted/install), Git credentials must be made available for the agent to access and clone private Git repositories. These credentials can be in the form of an SSH key for cloning over `ssh://`, or a `.git-credentials` file for cloning over `https://`.

##### Cloning repositories using SSH keys

To use SSH to clone your private Git repositories, you'll need to create a Kubernetes Secret containing an authorized SSH private key, and configure the Buildkite Agent Stack for Kubernetes to mount this Secret into the `checkout` container that performs the Git repository cloning.

###### Create a Kubernetes Secret using an SSH private key

> 🚧 Warning!
> Support for DSA keys was removed from OpenSSH in early 2025. This removal affects `buildkite/agent` version `v3.88.0` and newer. Please migrate to `RSA`, `ECDSA`, or `Ed25519` keys.
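Following the warning above, keys can be generated non-interactively. A sketch that creates an Ed25519 key pair in a temporary directory and prints the matching `kubectl` command — the secret name and namespace mirror the examples on this page, while the key path is an assumption:

```shell
# Generate an Ed25519 key pair with no passphrase (-N "") in a temp dir.
KEYDIR="$(mktemp -d)"
ssh-keygen -t ed25519 -N "" -C "dev+build@myorg.com" -f "$KEYDIR/id_ed25519" >/dev/null

# Print the command that would store the private key in a Kubernetes Secret
# under the recognized SSH_PRIVATE_ED25519_KEY variable name.
cat <<EOF
kubectl create secret generic my-git-ssh-credentials \\
  --from-file=SSH_PRIVATE_ED25519_KEY="$KEYDIR/id_ed25519" -n buildkite
EOF
```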
After creating an SSH key pair and registering its public key with the remote repository provider (for example, [GitHub](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account)), you can create a Kubernetes Secret using the SSH private key file. Ensure that the environment variable name matches one of the names recognized (`SSH_PRIVATE_*_KEY`) in [`docker-ssh-env-config`](https://github.com/buildkite/docker-ssh-env-config):

- `SSH_PRIVATE_ECDSA_KEY`
- `SSH_PRIVATE_ED25519_KEY`
- `SSH_PRIVATE_RSA_KEY`

A script from this project is included in the default entry point of the default [`buildkite/agent`](https://hub.docker.com/r/buildkite/agent) Docker image. It processes the value of the Kubernetes Secret and writes out a private key to the `~/.ssh` directory of the checkout container.

To create a Kubernetes Secret named `my-git-ssh-credentials` containing the contents of the SSH private key file `$HOME/.ssh/id_rsa`:

```bash
kubectl create secret generic my-git-ssh-credentials --from-file=SSH_PRIVATE_RSA_KEY="$HOME/.ssh/id_rsa" -n buildkite
```

This Kubernetes Secret can be referenced by the Buildkite Agent Stack for Kubernetes controller using [EnvFrom](https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#configure-all-key-value-pairs-in-a-secret-as-container-environment-variables) from within the controller configuration, or via the `gitEnvFrom` config of the `kubernetes` plugin.

###### Provide a Kubernetes Secret through a configuration file

Using [`pod-spec-patch`](/docs/agent/self-hosted/agent-stack-k8s/container-resource-limits#using-the-podspec-patch-in-the-controller-values-yaml-configuration-file), you can specify the Kubernetes Secret containing your SSH private key in your configuration values YAML file using `envFrom`:

```yaml
# values.yaml
...
config:
  ...
  pod-spec-patch:
    containers:
      - name: checkout
        envFrom:
          - secretRef:
              name: my-git-ssh-credentials
```

> 📘
> If you are using the `kubernetes` plugin to provide the Kubernetes Secret's SSH private key, you need to _define this configuration for every step_ that requires access to the private Git repository.
> If you are defining the Kubernetes Secret using the Buildkite Agent Stack for Kubernetes controller configuration, you'll only need to configure it once.

###### Provide an SSH private key to non-checkout containers

The configurations above only provide the SSH private key as a Kubernetes Secret to the `checkout` container. If Git SSH credentials are required in user-defined job containers, you have these options instead:

- Use a container image based on the default `buildkite/agent` Docker image, which preserves the default entry point by not overriding the command in the job spec.
- Include or reproduce the functionality of the [`ssh-env-config.sh`](https://github.com/buildkite/docker-ssh-env-config/blob/-/ssh-env-config.sh) script in the entry point for your job container image to source from recognized environment variable names.

The following example shows how to set up an SSH private key in `container-0`:

```yaml
# values.yaml
...
config:
  ...
  pod-spec-patch:
    containers:
      ...
      - name: container-0
        env:
          - name: SSH_PRIVATE_RSA_KEY
            valueFrom:
              secretKeyRef:
                name: my-git-ssh-credentials
                key: SSH_PRIVATE_RSA_KEY
```

Your container's entry point can then load the key in the manner of `ssh-env-config.sh`, for example:

```bash
ssh-keyscan github.com >> ~/.ssh/known_hosts
echo "$SSH_PRIVATE_RSA_KEY" | tr -d '\r' | ssh-add -
```

In your pipeline YAML, you can now add Git operations such as `git clone` in the command step:

```yaml
# pipeline.yaml
steps:
  - label: "\:kubernetes\: Hello World!"
    command: git clone -v git@github.com:{repo_name}.git
    plugins:
      - kubernetes:
          podSpec:
            containers:
              - image: buildkite/agent:alpine-k8s
```

##### Cloning repositories using Git credentials

To use HTTPS to clone private Git repositories, you can use a `.git-credentials` file stored in a secret, and refer to this secret using the `gitCredentialsSecret` checkout parameter.
###### Create a Kubernetes Secret from a Git credentials file

Create a `.git-credentials` file formatted in the manner expected by the `store` [Git credential helper](https://git-scm.com/docs/git-credential-store). After this file has been created, you can create a Kubernetes Secret containing the contents of this file:

```bash
kubectl create secret generic my-git-https-credentials --from-file='.git-credentials'="$HOME/.git-credentials" -n buildkite
```

###### Provide a Kubernetes Secret through a configuration file

Using `default-checkout-params`, you can define your Kubernetes Secret as follows through the configuration YAML file:

```yaml
#### values.yaml
...
config:
  ...
  default-checkout-params:
    gitCredentialsSecret:
      secretName: my-git-https-credentials
```

If you wish to use a different key within the Kubernetes Secret than `.git-credentials`, you can [project it](https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#project-secret-keys-to-specific-file-paths) to `.git-credentials` by using `items` within `gitCredentialsSecret`.

```yaml
#### values.yaml
...
  default-checkout-params:
    gitCredentialsSecret:
      secretName: my-git-https-credentials
      items:
      - key: funky-creds
        path: .git-credentials
```

###### Provide a Kubernetes Secret using the Kubernetes plugin

Under the `kubernetes` plugin, specify the name of the Kubernetes Secret using the `checkout.gitCredentialsSecret` config:

```yaml
#### pipeline.yaml
steps:
- label: "\:kubernetes\: Hello World!"
  command: echo Hello World!
  agents:
    queue: kubernetes
  plugins:
  - kubernetes:
      checkout:
        gitCredentialsSecret:
          secretName: my-git-https-credentials
```

> 📘 Version requirement
> To implement the following configuration options, `v0.13.0` or newer of the Agent Stack for Kubernetes controller is required.

For some steps, you may wish to avoid checkout (cloning a source repository).
This can be done with the `checkout` block under the `kubernetes` plugin: ```yaml steps: - label: "\:kubernetes\: Hello World!" agents: queue: kubernetes plugins: - kubernetes: checkout: skip: true # prevents scheduling the checkout container ``` ##### Using default-checkout-params Using `default-checkout-params`, `envFrom` can be added to all checkout, command, and sidecar containers separately, either per-step in the pipeline or for all jobs in `values.yaml`. Pipeline example (note that the blocks are `checkout`, `commandParams`, and `sidecarParams`): ```yaml #### pipeline.yml ... kubernetes: checkout: envFrom: - prefix: GITHUB_ secretRef: name: github-secrets commandParams: interposer: vector envFrom: - prefix: DEPLOY_ secretRef: name: deploy-secrets sidecarParams: envFrom: - prefix: LOGGING_ configMapRef: name: logging-config ``` An example of how this would be done using a `values.yml` configuration file: ```yaml #### values.yml config: default-checkout-params: envFrom: - prefix: GITHUB_ secretRef: name: github-secrets default-command-params: interposer: vector envFrom: - prefix: DEPLOY_ secretRef: name: deploy-secrets default-sidecar-params: envFrom: - prefix: LOGGING_ configMapRef: name: logging-config ``` --- ### Default parameters URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/default-parameters #### Default parameters This page describes how to add environment variables to the default parameters of the [checkout](#default-checkout-parameters), [command](#default-command-parameters), and [sidecar](#default-sidecar-parameters) containers in your Buildkite Agent Stack for Kubernetes controller setup using the `envFrom` feature. ##### Default checkout parameters You can add `envFrom` to all `checkout` containers in two ways: - Per-step in your pipeline configuration, for example: ```yaml # pipeline.yml ... 
kubernetes: checkout: envFrom: - prefix: GITHUB_ # This prefix is added to all variable names secretRef: name: github-secrets # References a Secret named "github-secrets" ... ``` - Or globally for all jobs using a `values.yml` file, for example: ```yaml # values.yml config: default-checkout-params: envFrom: - prefix: GITHUB_ # This prefix is added to all variable names secretRef: name: github-secrets # References a Secret named "github-secrets" ... ``` ##### Default command parameters You can add `envFrom` to all user-defined command containers in two ways: - Per-step in your pipeline configuration, for example: ```yaml # pipeline.yml ... kubernetes: commandParams: envFrom: - prefix: DEPLOY_ # This prefix is added to all variable names secretRef: name: deploy-secrets # References a Secret named "deploy-secrets" ... ``` - Or alternatively, for all jobs using a `values.yml` file, for example: ```yaml # values.yml config: default-command-params: envFrom: - prefix: DEPLOY_ # This prefix is added to all variable names secretRef: name: deploy-secrets # References a Secret named "deploy-secrets" ... ``` ##### Default sidecar parameters You can add `envFrom` to all `sidecar` containers in two ways: - Per-step in your pipeline configuration, for example: ```yaml # pipeline.yml ... kubernetes: sidecarParams: envFrom: - prefix: LOGGING_ # This prefix is added to all variable names configMapRef: name: logging-config # References a ConfigMap named "logging-config" ... ``` - Or alternatively, for all jobs using a `values.yml` file, for example: ```yaml # values.yml config: default-sidecar-params: envFrom: - prefix: LOGGING_ # This prefix is added to all variable names configMapRef: name: logging-config # References a ConfigMap named "logging-config" ... 
```

---

### Controller configuration

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/controller-configuration

#### Controller configuration

This page covers the available commands for:

- `agent-stack-k8s [flags]`
- `agent-stack-k8s [command]`

All references to "controller" on this page refer to the Agent Stack for Kubernetes controller.

##### Available commands

| Command | Description |
|--------------|------------------------------------------------------------|
| `completion` | Generate the autocompletion script for the specified shell |
| `help` | Help about any command |
| `lint` | A tool for linting Buildkite pipelines |
| `version` | Prints the version |

Use `agent-stack-k8s [command] --help` for more information about a command.

##### Flags

Each controller flag is documented with its value type (if applicable), a description, its **Type**, and its **Default**. Run `agent-stack-k8s --help` to list the available flags.

##### Kubernetes node selection

The Buildkite Agent Stack for Kubernetes controller can be deployed to particular Kubernetes Nodes, using the Kubernetes PodSpec [`nodeSelector`](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#create-a-pod-that-gets-scheduled-to-your-chosen-node) field. The `nodeSelector` field can be defined in the controller's configuration:

```yaml
#### values.yml
...
nodeSelector:
  teamowner: "services"
config:
  ...
```

##### Additional environment variables for the controller container

If the Buildkite Agent Stack for Kubernetes controller container requires extra environment variables to operate correctly inside your Kubernetes cluster, they can be added to your values YAML file and applied during a deployment with Helm. The `controllerEnv` field can be used to define extra Kubernetes EnvVar environment variables that will apply to the Buildkite Agent Stack for Kubernetes controller container:

```yaml
#### values.yml
...
controllerEnv:
  - name: KUBERNETES_SERVICE_HOST
    value: "10.10.10.10"
  - name: KUBERNETES_SERVICE_PORT
    value: "8443"
config:
  ...
```

##### Custom annotations for the controller

If you need to add custom annotations to the Agent Stack for Kubernetes controller pod, these annotations can be defined in your values YAML file and applied during a deployment with Helm. Note that the controller pod will also have the annotations `checksum/config` and `checksum/secrets` to track changes to the configuration and secrets.

The `annotations` field can be used to define custom annotations that will be applied to the Buildkite Agent Stack for Kubernetes controller pod:

```yaml
#### values.yml
...
annotations:
  kubernetes.io/description: "Agent Stack K8s Controller"
  prometheus.io/scrape: "true"
  prometheus.io/port: "8080"
config:
  ...
```

##### Cleaning up old Buildkite Pipelines jobs

If you are using Kubernetes v1.23 and earlier, you may sometimes find that old jobs are still present in your Kubernetes cluster and are not getting automatically cleaned up. This may consume unnecessary space and potentially cause other disruptions with deployments. If you notice old Buildkite Pipelines jobs still present in your Kubernetes cluster, you can use the [`clean-up-job.yaml`](https://github.com/buildkite/agent-stack-k8s/blob/main/utils/clean-up-job.yaml) script (with usage instructions provided at the top of this file) located in the [Agent Stack for Kubernetes](https://github.com/buildkite/agent-stack-k8s) repository to clean up your old Buildkite jobs.

---

### Agent configuration

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/agent-configuration

#### Agent configuration options

> 📘 Minimum version requirement
> To implement the agent configuration options described on this page, version 0.16.0 or later of the Agent Stack for Kubernetes controller is required.

The `agent-config` block within `values.yaml` can be used to set a subset of the [Buildkite agent configuration](/docs/agent/self-hosted/configure) options.
```yaml
#### values.yaml
config:
  agent-config:
    no-http2: false
    experiment: ["use-zzglob"]
    shell: "/bin/bash"
    no-color: false
    strict-single-hooks: true
    no-multipart-artifact-upload: false
    trace-context-encoding: json
    disable-warnings-for: ["submodules-disabled"]
    no-pty: false
    no-command-eval: true
    no-local-hooks: true
    no-plugins: true
    plugin-validation: false
```

> 📘
> If `no-command-eval` or `no-plugins` are set to `true`, the Kubernetes plugin may still be able to override everything, since it is interpreted by the Agent Stack for Kubernetes controller and not the Buildkite agent itself.
> To avoid being overridden, the `no-command-eval` or `no-plugins` options should be used together with the [`prohibit-kubernetes-plugin`](/docs/agent/self-hosted/agent-stack-k8s/securing-the-stack) option.

##### Pipeline signing

The following sections describe optional methods for implementing pipeline signing with the Buildkite Agent Stack for Kubernetes controller.

###### JWKS file configuration containing a signing key

This option applies to the `config/agent-config/signing-jwks-file` configuration parameter. Specifies the relative/absolute path of the JWKS file containing a signing key. When an absolute path is provided, this will be the mount path for the JWKS file. When a relative path (or filename) is provided, this will be appended to `/buildkite/signing-jwks` to create the mount path for the JWKS file. Default value: `key`.

```
config:
  agent-config:
    signing-jwks-file: key
```

###### JWKS signing key ID configuration

This option applies to the `config/agent-config/signing-jwks-key-id` configuration parameter. The value that was provided for `--key-id` during JWKS key pair generation. If you don't specify a `signing-jwks-key-id` in your configuration and your JWKS file contains only one key, then this JWKS file's key will be used.
```
config:
  agent-config:
    signing-jwks-key-id: my-key-id
```

###### Volume configuration containing a JWKS signing key

This option applies to the `config/agent-config/signingJWKSVolume` configuration parameter. Creates a Kubernetes Volume, which is mounted to the user-defined command containers at the path specified by `config/agent-config/signing-jwks-file`, containing the JWKS signing key data from a Kubernetes Secret.

```
config:
  agent-config:
    signingJWKSVolume:
      name: buildkite-signing-jwks
      secret:
        secretName: my-signing-key
```

###### JWKS file configuration containing a verification key

This option applies to the `config/agent-config/verification-jwks-file` configuration parameter. Specifies the relative/absolute path of the JWKS file containing a verification key. When an absolute path is provided, this will be the mount path for the JWKS file. When a relative path (or filename) is provided, this will be appended to `/buildkite/verification-jwks` to create the mount path for the JWKS file. Default value: `key`.

```
config:
  agent-config:
    verification-jwks-file: key
```

###### Verification failure behavior configuration

This option applies to the `config/agent-config/verification-failure-behavior` configuration parameter. This setting determines the Buildkite agent's response when it receives a job without a proper signature, and specifies how strictly the agent should enforce signature verification for incoming jobs. Valid options are:

- `warn`: The agent will emit a warning about missing or invalid signatures but will still proceed to execute the job.
- `block`: Prevents any job without a valid signature from running, ensuring a secure pipeline environment.

Default value: `block`.

```
config:
  agent-config:
    verification-failure-behavior: warn
```

###### Volume configuration containing a JWKS verification key

This option applies to the `config/agent-config/verificationJWKSVolume` configuration parameter.
Creates a Kubernetes Volume, which is mounted to the `agent` containers at the path specified by `config/agent-config/verification-jwks-file`, containing the JWKS verification key data from a Kubernetes Secret.

```
config:
  agent-config:
    verificationJWKSVolume:
      name: buildkite-verification-jwks
      secret:
        secretName: my-verification-key
```

---

### Running builds

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/running-builds

#### Running builds

After you've [installed](/docs/agent/self-hosted/agent-stack-k8s/installation), [configured](/docs/agent/self-hosted/agent-stack-k8s/controller-configuration), and [set up](/docs/agent/self-hosted/agent-stack-k8s/agent-configuration) the Buildkite Agent Stack for Kubernetes controller, and it is monitoring the Agent API for jobs assigned to the `kubernetes` queue, you can start creating builds in your pipelines.

##### Defining steps

A pipeline step can target the `kubernetes` queue with [agent tags](/docs/agent/queues). For example:

```yaml
steps:
- label: "\:kubernetes\: Hello World!"
  command: echo Hello World!
  agents:
    queue: kubernetes
```

This YAML step configuration creates a Buildkite job containing an agent tag of `queue=kubernetes`. The `agent-stack-k8s` controller retrieves this job using the Agent API and converts it into a Kubernetes job. The Kubernetes job contains a single Pod with containers that will check out the pipeline's Git repository and use the `buildkite/agent:latest` (default image) container to run the `echo Hello World!` command.

###### Kubernetes plugin

For defining more complicated pipeline steps, additional configurations can be used with the `kubernetes` plugin. Unlike other [Buildkite plugins](/docs/pipelines/integrations/plugins), there is no corresponding plugin repository for the `kubernetes` plugin. Instead, this `kubernetes` plugin syntax is reserved for and interpreted by the `agent-stack-k8s` controller.
For example, defining `checkout.skip: true` will skip cloning the pipeline's repo for the job:

> 📘 Runtime plugin configuration
> The Buildkite Agent Stack for Kubernetes controller consumes the `kubernetes` plugin configuration when it creates the Kubernetes Job. Do not rely on the `BUILDKITE_PLUGINS` [environment variable](/docs/pipelines/configure/environment-variables#BUILDKITE_PLUGINS) inside runtime containers to include controller-only settings such as `podTemplate`, `podSpec`, `podSpecPatch`, or `checkout`.

```yaml
steps:
- label: "\:kubernetes\: Hello World!"
  command: echo Hello World!
  agents:
    queue: kubernetes
  plugins:
  - kubernetes:
      checkout:
        skip: true
```

##### Cloning private repositories

As is the case with standalone [Buildkite agent installations](/docs/agent/self-hosted/install), to access and clone private repositories, you need to make [Git credentials](/docs/agent/self-hosted/agent-stack-k8s/git-credentials) available for the agent to use. These credentials can be in the form of an SSH key for cloning over `ssh://` or a `.git-credentials` file for cloning over `https://`.

##### Kubernetes node selection

The Buildkite Agent Stack for Kubernetes controller can schedule your Buildkite jobs to run on particular Kubernetes Nodes with matching [_labels_](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) using the Kubernetes PodSpec [`nodeSelector`](#kubernetes-node-selection-nodeselector) and [`nodeName`](#kubernetes-node-selection-nodename) fields.

###### nodeSelector

The [`nodeSelector`](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#create-a-pod-that-gets-scheduled-to-your-chosen-node) field of the PodSpec can be used to schedule your Buildkite jobs on a chosen Kubernetes Node with matching labels. The `nodeSelector` field can be defined in the controller's configuration using `pod-spec-patch`. This will apply to all Buildkite jobs processed by the controller:

```yaml
#### values.yml
...
config:
  pod-spec-patch:
    nodeSelector:
      nodecputype: "amd64" # or "arm64"
...
```

###### nodeName

The [`nodeName`](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#create-a-pod-that-gets-scheduled-to-specific-node) field of the PodSpec can be used to schedule your Buildkite jobs on a specific Kubernetes Node. The `nodeName` field can be defined in the controller's configuration using `pod-spec-patch`. This will apply to all Buildkite jobs processed by the controller:

```yaml
#### values.yml
...
config:
  pod-spec-patch:
    nodeName: "k8s-worker-01"
...
```

The `nodeName` field can also be defined under `podSpecPatch` using the `kubernetes` plugin. It will apply only to this job and will override `nodeName` in the controller's configuration:

```yaml
#### pipeline.yaml
steps:
- label: "\:kubernetes\: Hello World!"
  command: echo Hello World!
  agents:
    queue: kubernetes
  plugins:
  - kubernetes:
      podSpecPatch:
        nodeName: "k8s-worker-03"
...
```

---

### Long-running jobs

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/long-running-jobs

#### Long-running jobs

> 📘 Minimum version requirement
> To implement the configuration options described on this page, version 0.24.0 or later of the Agent Stack for Kubernetes controller is required.

The Agent Stack for Kubernetes controller supports limiting how long a job may run through the `activeDeadlineSeconds` field of the Kubernetes [JobSpec](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/job-v1/#JobSpec): the Job's active deadline is the maximum number of seconds it may run before being terminated. Learn more about this in Kubernetes' documentation on [Job termination and cleanup](https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup).
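For reference, the controller's setting takes effect as `activeDeadlineSeconds` on the generated Kubernetes Job. A simplified sketch of such a Job (the metadata name and container are illustrative placeholders, not what the controller literally emits):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: buildkite-job-example   # illustrative name
spec:
  activeDeadlineSeconds: 21600  # all pods are terminated once this deadline passes
  template:
    spec:
      containers:
      - name: agent
        image: buildkite/agent:latest
      restartPolicy: Never
```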
##### Controller configuration for increasing maximum job duration (for all jobs) By default, Kubernetes Jobs created by the Agent Stack for Kubernetes controller will run for a maximum duration of `21600` seconds (6 hours). After this duration has been exceeded, all of the running Pods are terminated and the Job status will be `type: Failed`. In the Buildkite interface, this will be reflected as `Exited with status -1 (agent lost)`. If long-running jobs are common in your Buildkite Organization, this value should be increased in your controller configuration values YAML file: ```yaml #### values.yaml ... config: job-active-deadline-seconds: 86400 # 24h ... ``` ##### Kubernetes plugin configuration for increasing maximum job duration (on a per-job basis) It is also possible to override this configuration using the `kubernetes` plugin directly in your pipeline steps, which will only apply to the Kubernetes Job running this `command` step: ```yaml steps: - label: Long-running job command: echo "Hello world" && sleep 43200 plugins: - kubernetes: jobActiveDeadlineSeconds: 43500 ``` Additional information on configuring `jobActiveDeadlineSeconds` can be found in the `--job-active-deadline-seconds` flag description of the [Flags](/docs/agent/self-hosted/agent-stack-k8s/controller-configuration#flags) section, on the [Controller configuration](/docs/agent/self-hosted/agent-stack-k8s/controller-configuration) page. --- ### Git settings URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/git-settings #### Git settings In the Buildkite Agent Stack for Kubernetes controller version v0.13.0 and later, flags for `git clone` and `git fetch` can be overridden on a per-step basis (similar to `BUILDKITE_GIT_CLONE_FLAGS` and `BUILDKITE_GIT_FETCH_FLAGS` env vars) with the `checkout` block: ```yaml #### pipeline.yml steps: - label: Hello World! 
  agents:
    queue: kubernetes
  plugins:
  - kubernetes:
      checkout:
        cloneFlags: -v --depth 1
        fetchFlags: -v --prune --tags
```

In the Buildkite Agent Stack for Kubernetes controller version v0.16.0 and later, more Git flags and options are supported by the agent: `cleanFlags`, `noSubmodules`, `submoduleCloneConfig`, and `gitMirrors` (`cloneFlags`, `lockTimeout`, and `skipUpdate`), which are configurable with the `checkout` block. For example:

```yaml
#### pipeline.yml
steps:
- label: Hello World!
  agents:
    queue: kubernetes
  plugins:
  - kubernetes:
      checkout:
        cleanFlags: -ffxdq
        noSubmodules: false
        submoduleCloneConfig: ["key=value", "something=else"]
        gitMirrors:
          path: /buildkite/git-mirrors # optional with volume
          volume:
            name: my-special-git-mirrors
            persistentVolumeClaim:
              claimName: block-pvc
          lockTimeout: 600
          skipUpdate: true
          cloneFlags: -v
```

To avoid setting `checkout` on every step, you can use `default-checkout-params` within `values.yaml` when deploying the stack. These will apply the settings to every job. For example:

```yaml
#### values.yaml
...
config:
  default-checkout-params:
    # The available options are the same as `checkout` within `plugin.kubernetes`.
    cloneFlags: -v --depth 1
    noSubmodules: true
    gitMirrors:
      volume:
        name: host-git-mirrors
        hostPath:
          path: /var/lib/buildkite/git-mirrors
          type: Directory
```

##### Git mirrors and migrating from Elastic CI Stack for AWS to Kubernetes

If you are migrating to the Buildkite Agent Stack for Kubernetes from the Elastic CI Stack for AWS, you may be accustomed to enabling [Git mirrors](/docs/agent/self-hosted/configure/git-mirrors) by setting the `BuildkiteAgentEnableGitMirrors` CloudFormation parameter to `true`. In this setup, the agent automatically manages a shared directory for Git mirrors on each EC2 instance, typically `/var/lib/buildkite-agent/git-mirrors`, with little additional configuration required.
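Under the hood, a Git mirror is a bare clone that later checkouts borrow objects from instead of re-fetching them. The following self-contained sketch (the repository and paths are made up for illustration) shows the mechanism with plain Git commands:

```shell
set -eu
demo=$(mktemp -d)

# A stand-in for your hosted repository
git init -q "$demo/upstream"
git -C "$demo/upstream" -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "initial"

# The mirror: a bare clone holding every ref, kept as a shared cache
git clone -q --mirror "$demo/upstream" "$demo/git-mirrors/upstream.git"

# A job checkout that borrows objects from the mirror instead of re-fetching
git clone -q --reference "$demo/git-mirrors/upstream.git" \
  "$demo/upstream" "$demo/checkout"

git -C "$demo/checkout" log --format=%s -1
```

The agent performs the equivalent bookkeeping for you; the `gitMirrors` options above only control where that shared cache lives and how it is locked and updated.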
When moving to the Buildkite Agent Stack for Kubernetes, support for Git mirrors is equally powerful but requires explicit configuration to suit the dynamic and distributed nature of Kubernetes. Instead of a single EC2 instance, each build runs in its own pod, and persistent storage must be configured to ensure that Git mirrors are shared and retained between jobs.

###### Configuring Git mirrors in Kubernetes

The Buildkite Agent Stack for Kubernetes supports flexible Git mirror configuration to optimize repository cloning and fetching for your builds. To enable Git mirrors in Kubernetes, specify the mirror storage location and volume type. For persistent, cluster-wide storage, use a PersistentVolumeClaim (PVC):

```yaml
#### values.yaml
config:
  default-checkout-params:
    cloneFlags: -v --depth 1
    noSubmodules: true
    gitMirrors:
      volume:
        name: my-special-git-mirrors
        persistentVolumeClaim:
          claimName: your-pvc
```

This approach is recommended for production environments, as it provides resilience and allows mirrors to persist and be shared across pods and nodes, depending on your storage class.

> 🚧
> Make sure the referenced PVC already exists in your Kubernetes cluster.

For simpler or development setups, use a `hostPath` volume to mount a directory from the Kubernetes node:

```yaml
#### values.yaml
config:
  default-checkout-params:
    gitMirrors:
      volume:
        name: host-git-mirrors
        hostPath:
          path: /var/lib/buildkite/git-mirrors
          type: Directory
```

###### Key differences

In the Elastic CI Stack, mirrors are local to each EC2 instance and automatically managed, whereas in the Buildkite Agent Stack for Kubernetes, you must explicitly configure persistent storage for mirrors, and the type of storage you choose (`hostPath` or PVC) affects performance, availability, and scalability. Additionally, each build runs in a new pod, so persistent storage is essential for effective mirroring in Kubernetes.
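The PVC referenced by `claimName` must exist before jobs run. A minimal sketch of such a claim (the name matches the earlier `your-pvc` example, but the size, namespace, and access mode are assumptions to adjust for your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: your-pvc
  namespace: buildkite
spec:
  accessModes:
  - ReadWriteMany        # needed if pods on different nodes share the mirrors
  resources:
    requests:
      storage: 50Gi      # size to suit your repositories
```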
###### Best practices and troubleshooting For large repositories or monorepos, Git mirrors can significantly reduce checkout times and network usage. Ensure your storage backend is fast and reliable, and consider using SSD-backed persistent volumes for best performance. If you encounter issues such as lock contention or mirror corruption, review your `lockTimeout` settings and consult the troubleshooting advice in the [Git mirrors documentation](/docs/agent/self-hosted/configure/git-mirrors#common-issues-with-git-mirrors). By configuring Git mirrors appropriately in the Buildkite Agent Stack for Kubernetes, you can maintain the same performance and reliability benefits you experienced in the Elastic CI Stack, while taking full advantage of Kubernetes’ scalability and flexibility. --- ### Pipeline signing URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/pipeline-signing #### Pipeline signing > 📘 Minimum version requirement > To implement the configuration options described on this page, version 0.16.0 or later of the Agent Stack for Kubernetes controller is required. The Buildkite Agent Stack for Kubernetes controller supports Buildkite's [signed pipelines](/docs/agent/self-hosted/security/signed-pipelines) feature. A JWKS key pair is stored as Kubernetes Secrets and mounted to the `agent` and user-defined command containers. 
##### Generating a JWKS key pair Using the `buildkite-agent` CLI, [generate a JWKS key pair](https://buildkite.com/docs/agent/self-hosted/security/signed-pipelines#self-managed-key-creation-step-1-generate-a-key-pair): ```shell buildkite-agent tool keygen --alg EdDSA --key-id my-jwks-key ``` This will create a pair of files in the current directory: ``` EdDSA-my-jwks-key-private.json EdDSA-my-jwks-key-public.json ``` ##### Creating Kubernetes Secrets for a JWKS key pair After using `buildkite-agent` to generate a JWKS key pair, create a Kubernetes Secret for the JWKS signing key that will be used by user-defined command containers: ```shell kubectl create secret generic my-signing-key --from-file='key'="./EdDSA-my-jwks-key-private.json" ``` Next, create a Kubernetes Secret for the JWKS verification key that will be used by the `agent` container: ```shell kubectl create secret generic my-verification-key --from-file='key'="./EdDSA-my-jwks-key-public.json" ``` ##### Updating the configuration values file To use the Kubernetes Secrets containing your JWKS key pair, update the `agent-config` of your configuration values YAML file: ```yaml #### values.yaml config: agent-config: signing-jwks-file: key signing-jwks-key-id: my-jwks-key signingJWKSVolume: name: buildkite-signing-jwks secret: secretName: my-signing-key verification-jwks-file: key verification-failure-behavior: warn # optional, default behavior is 'block' verificationJWKSVolume: name: buildkite-verification-jwks secret: secretName: my-verification-key ``` Learn more about configuring JWKS key pairs for signing/verification on the [Agent configuration](/docs/agent/self-hosted/agent-stack-k8s/agent-configuration#pipeline-signing) page. 
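Before deploying, you can sanity-check that the key ID embedded in the JWKS file matches the `signing-jwks-key-id` you configured. A quick sketch, using a trimmed, hypothetical JWKS document (`buildkite-agent tool keygen` produces the real files with full key material):

```shell
# Trimmed, hypothetical JWKS content for illustration only
cat > /tmp/example-jwks.json <<'EOF'
{"keys":[{"kty":"OKP","crv":"Ed25519","kid":"my-jwks-key"}]}
EOF

# The "kid" value is what signing-jwks-key-id must match
grep -o '"kid":"[^"]*"' /tmp/example-jwks.json
# prints "kid":"my-jwks-key"
```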
---

### Agent hooks and plugins

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/agent-hooks-and-plugins

#### Using agent hooks and plugins

> 📘 Minimum version requirement
> To implement the configuration options described on this page, version 0.16.0 or later of the Agent Stack for Kubernetes controller is required. However, agent hooks are supported in [earlier versions of the controller](#agent-hooks-in-earlier-versions).

##### Agent hooks

The `agent-config` block within the controller's configuration file (`values.yaml`) accepts a value for [`hooks-path`](/docs/agent/self-hosted/configure#hooks-path) as part of the `hooksVolume` configuration. If configured, a corresponding volume named `buildkite-hooks` will be automatically mounted on `checkout` and command containers, with the Buildkite agent configured to use them.

You can specify any volume source for agent hooks, but a common choice is to use a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/), since hooks generally aren't large and ConfigMaps are made available across the cluster.

To create a ConfigMap containing agent hooks:

```shell
kubectl create configmap buildkite-agent-hooks --from-file=/tmp/hooks -n buildkite
```

This command assumes all the hooks you need are in the `/tmp/hooks` directory, and creates a ConfigMap named `buildkite-agent-hooks` in the `buildkite` namespace of the Kubernetes cluster.

Example of using hooks from a ConfigMap:

```yaml
config:
  agent-config:
    hooks-path: /buildkite/hooks
    hooksVolume:
      name: buildkite-hooks
      configMap:
        defaultMode: 493
        name: buildkite-agent-hooks
```

###### Permissions and availability

The `defaultMode` value of `493` (the decimal equivalent of octal `755`) sets the Unix permissions to `755`, which makes the hooks executable.

###### Hooks mount point

The `hooks-path` Buildkite agent config option can be used to change the mount point of the corresponding `buildkite-hooks` volume.
This will also set `BUILDKITE_HOOKS_PATH` to the defined path on `checkout` and command containers. The default mount point is `/buildkite/hooks`.

##### Agent hooks in earlier versions

If you are running the Buildkite Agent Stack for Kubernetes controller version 0.15.0 or earlier, your agent hooks must be present on the instances where the Buildkite agent runs. These hooks need to be accessible to the Kubernetes pod where the `checkout` and command containers will be running. The recommended approach is to create a ConfigMap with the agent hooks and mount the ConfigMap as a volume to the containers.

To create a ConfigMap containing agent hooks:

```shell
kubectl create configmap buildkite-agent-hooks --from-file=/tmp/hooks -n buildkite
```

This command assumes all the hooks you need are in the `/tmp/hooks` directory, and creates a ConfigMap named `buildkite-agent-hooks` in the `buildkite` namespace of the Kubernetes cluster.

In order for the agent to use these hooks, a volume containing the ConfigMap is defined and then mounted to all containers using `extraVolumeMounts` at `/buildkite/hooks`, using the `kubernetes` plugin:

```yaml
steps:
- label: "\:pipeline\: Pipeline Upload"
  agents:
    queue: kubernetes
  plugins:
  - kubernetes:
      extraVolumeMounts:
      - mountPath: /buildkite/hooks
        name: agent-hooks
      podSpec:
        containers:
        - command:
          - echo hello-world
          image: alpine:latest
          env:
          - name: BUILDKITE_HOOKS_PATH
            value: /buildkite/hooks
        volumes:
        - configMap:
            defaultMode: 493
            name: buildkite-agent-hooks
          name: agent-hooks
```

> 📘 Permissions and availability
> The `defaultMode` value of `493` sets the Unix permissions to `755`, which enables the hooks to be executable.

##### Agent hook execution differences

With jobs created by the Buildkite Agent Stack for Kubernetes controller, there are key differences in hook execution, primarily between the `checkout` container and user-defined `command` containers.
- The `environment` hook is executed multiple times: once within the `checkout` container, and once within each of the user-defined `command` containers.
- Checkout-related hooks (`pre-checkout`, `checkout`, `post-checkout`) are only executed within the `checkout` container.
- Command-related hooks (`pre-command`, `command`, `post-command`) are only executed within the `command` container(s).

> 📘 Exporting environment variables
> Since hooks are executed from within separate containers for the checkout and command phases of the job's lifecycle, any environment variables exported during the execution of hooks within the `checkout` container will _not_ be available to the command container(s). This is operationally different from how hooks are [sourced](/docs/agent/hooks#hook-scopes) outside of the Buildkite Agent Stack for Kubernetes.

If the `BUILDKITE_HOOKS_PATH` environment variable is set at the pipeline level instead of at the container level, as shown in the earlier pipeline configuration examples, then the hooks will run in both the `checkout` container and the `command` container(s). Here is a pipeline config where `BUILDKITE_HOOKS_PATH` is exposed to all containers in the pipeline:

```yaml
steps:
- label: "\:pipeline\: Pipeline Upload"
  env:
    BUILDKITE_HOOKS_PATH: /buildkite/hooks
  agents:
    queue: kubernetes
  plugins:
  - kubernetes:
      extraVolumeMounts:
      - mountPath: /buildkite/hooks
        name: agent-hooks
      podSpec:
        containers:
        - command:
          - echo
          - hello-world
          image: alpine:latest
        volumes:
        - configMap:
            defaultMode: 493
            name: buildkite-agent-hooks
          name: agent-hooks
```

This happens because the agent hooks are present in both containers, so the `environment` hook also runs in both. Here is what the resulting build output will look like:

```
Running global environment hook
```

> 📘 Plugins mount point
> The `plugins-path` Buildkite agent config option can be used to change the mount point of the corresponding volume.
This will also set `BUILDKITE_PLUGINS_PATH` to the defined path on `checkout` and command containers. The default mount point is `/buildkite/plugins`. --- ### Pipeline validation URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/pipeline-validation #### Validating your pipeline Buildkite plugin specifications are unstructured by nature, which can lead to configuration errors that cause agent pod startup failures. These issues can be difficult and time-consuming to troubleshoot. To prevent configuration problems before deployment, we suggest using a linter that uses [JSON Schema](https://json-schema.org/) to validate your pipeline and plugin configurations. Such linters currently can't catch every type of error: you might still get a reference to a Kubernetes volume that doesn't exist, or other similar mistakes. However, using a JSON Schema linter will help validate that the fields match the expected API specifications. The [JSON schema](https://github.com/buildkite/agent-stack-k8s/blob/main/cmd/linter/schema.json) found in the Agent Stack for Kubernetes controller's open source repository can also be used with editors that support JSON Schema, by configuring your editor to validate against this controller's schema. --- ### Job metadata URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/job-metadata #### Default job metadata The Buildkite Agent Stack for Kubernetes controller can automatically add labels and annotations to the Kubernetes Jobs it creates. Default annotations and labels can be set in the controller's YAML configuration values file, through `default-metadata`. Such a configuration applies its defined annotations and labels to all Jobs created by the controller: ```yaml #### values.yaml ... default-metadata: annotations: imageregistry: "https://hub.docker.com/" mycoolannotation: llamas labels: argocd.argoproj.io/tracking-id: example-id-here mycoollabel: alpacas ...
``` Alternatively, you can set labels and annotations for individual steps in a pipeline using the `metadata` configuration of the `kubernetes` plugin: ```yaml #### pipeline.yaml ... plugins: - kubernetes: metadata: annotations: imageregistry: "https://hub.docker.com/" myannotation: "ci-pipeline" labels: argocd.argoproj.io/tracking-id: "example-id-here" mylabel: "backend" ... ``` --- ### Sidecars URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/sidecars #### Sidecars You can add sidecar containers to your job by specifying them under the `sidecars` key of the `kubernetes` plugin. These containers are started at the same time as the job's `command` containers. However, there is no guarantee that your `sidecar` containers will have started before the commands in your job's `command` containers are executed. Therefore, using retries or a tool like [wait-for-it](https://github.com/vishnubob/wait-for-it) is recommended to avoid failed dependencies in the event that the `sidecar` container is still starting up. > 📘 Sidecar container differences prior to 0.35 > Prior to 0.35.0, the `sidecar` containers configured by the Agent Stack for Kubernetes controller differ from [sidecar containers](https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/) defined by Kubernetes. True Kubernetes sidecar containers run as init containers, whereas `sidecar` containers defined by the controller run as application containers in the Pod alongside the job's `command` containers.
The following pipeline example shows how to use an `nginx` container as a sidecar container and run `curl` from the job's `command` container to interact with the `nginx` container: ```yaml steps: - label: ":k8s: Use nginx sidecar" agents: queue: "kubernetes" plugins: - kubernetes: sidecars: - image: nginx:latest podSpec: containers: - image: curlimages/curl:latest name: curl command: - curl --retry 10 --retry-all-errors localhost:80 ``` --- ### Kubernetes PodSpec URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/podspec #### Kubernetes PodSpec Using the `kubernetes` plugin allows you to specify a [`PodSpec`](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodSpec) Kubernetes API resource that will be used in a Kubernetes [`Job`](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/job-v1/#Job). ##### Kubernetes PodSpec generation The Agent Stack for Kubernetes controller allows you to define some or all of the Kubernetes `PodSpec` from the following locations: - Controller configuration: `pod-spec-patch`. - Buildkite job, using the `kubernetes` plugin: `podSpec`, `podSpecPatch`. With multiple `PodSpec` inputs provided, here is how the Agent Stack for Kubernetes controller generates a Kubernetes `PodSpec`: 1. Create a simple `PodSpec` containing a single container with the `Image` defined in the controller's configuration and the value of the Buildkite job's command (`BUILDKITE_COMMAND`). If the `kubernetes` plugin is present in the Buildkite job's plugins and contains a `podSpec`, use this as the starting `PodSpec` instead. 1. Apply the `/workspace` Volume. 1. Apply any `extra-volume-mounts` defined by the `kubernetes` plugin. 1. Modify any `containers` defined by the `kubernetes` plugin, overriding the `command` and `args`. 1. Add the `agent` container to the `PodSpec`. 1. Add the `checkout` container to the `PodSpec` (if `skip.checkout` is set to `false`). 1.
Add `init` containers for the `imagecheck-#` containers, based on the number of unique images defined in the `PodSpec`. 1. Apply `pod-spec-patch` from the controller's configuration, using a [strategic merge patch](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/) in the controller. 1. Apply `podSpecPatch` from the `kubernetes` plugin, using a [strategic merge patch](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/) in the controller. 1. Ensure the `checkout` container is not present after applying `pod-spec-patch` and `podSpecPatch` (if `skip.checkout` is set to `true`). 1. Remove any duplicate `VolumeMounts` present in the `PodSpec` after patching. 1. Create a Kubernetes Job with the final `PodSpec`. ##### PodSpec command and interpretation of arguments In a `podSpec`, `command` _must_ be a list of strings, since it is [defined by Kubernetes](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#entrypoint). However, the Buildkite Agent Stack for Kubernetes controller runs the Buildkite agent instead of the container's default entrypoint. To run a command, the controller must _re-interpret_ `command` into input for the Buildkite agent. By default, the controller treats `command` as a sequence of multiple commands, similar to steps and commands in a `pipeline.yaml` file, which is different to the interpretation of `command` (as an entrypoint vector run without a shell as a single command) in Kubernetes. This _interposer_ behavior can be changed using `commandParams/interposer`, which can have one of the following values: - `buildkite` is the default, in which the Agent Stack for Kubernetes controller treats `command` as a sequence of multiple commands, and `args` as extra arguments added to the end of the last command, which are then typically interpreted by the shell.
- `vector` emulates Kubernetes' interpretation, in which `command` and `args` specify components of a single command intended to be run directly. - `legacy` is the behavior of the Agent Stack for Kubernetes controller version 0.14.0 and earlier, where `command` and `args` are joined directly into a single command with spaces. An example using `buildkite` interposer behavior: ```yaml steps: - label: Hello World! agents: queue: kubernetes plugins: - kubernetes: commandParams: interposer: buildkite # This is the default, and can be omitted podSpec: containers: - image: alpine:latest command: - set -euo pipefail - |- # hello.txt cat hello.txt | buildkite-agent annotate ``` If you have a multi-line `command`, specifying `args` as well could lead to confusion, so it is recommended to use only `command`. An example using `vector` interposer behavior: ```yaml steps: - label: Hello World! agents: queue: kubernetes plugins: - kubernetes: commandParams: interposer: vector podSpec: containers: - image: alpine:latest command: ['sh'] args: - '-c' - |- set -eu echo Hello World! > hello.txt cat hello.txt | buildkite-agent annotate ``` ###### Custom images In version 0.30.0 and later of the Agent Stack for Kubernetes controller, you can use the [`image` attribute](/docs/pipelines/configure/step-types/command-step#container-image-attributes) in a command step to specify a container image for the step's job. Almost any container image may be used, but the image _must_ have a POSIX shell available to be executed at `/bin/sh`. ```yaml #### pipeline.yaml steps: - name: Hello World! image: "alpine:latest" # <- New in v0.30.0 commands: - echo -n Hello! ``` For versions of the controller prior to 0.30.0, you can specify a different image using `podSpecPatch`. See [Custom images](/docs/agent/self-hosted/agent-stack-k8s/custom-images) for detailed information on container types, image requirements, and configuration options.
###### Environment variables precedence During its bootstrap phase, the Buildkite agent receives some of its environment variables from the Buildkite platform. These environment variables are normally set using the `env` keyword in the pipeline.yaml file. During the generation of the Kubernetes `podSpec`, the `podSpec` receives some of its environment variables from the Agent Stack for Kubernetes controller itself, some controller-specific environment variables defined in the values.yaml file, as well as environment variables that can be set in various `podSpec` configuration steps of the pipeline.yaml file. Be aware that currently, environment variables defined as part of a `podSpec` take precedence over environment variables set using the `env` keyword in the pipeline.yaml file. If you need a more flexible environment variable setup, use [Agent hooks](/docs/agent/hooks) to implement precedence rules suited to your organization. --- ### Custom images URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/custom-images #### Custom images The [Agent Stack for Kubernetes controller](/docs/agent/self-hosted/agent-stack-k8s) creates Pods with several containers, each with different purposes and image requirements. You can customize the images used for command containers to match your build environment needs. ##### Container types and image requirements When the controller creates a Pod to run a Buildkite job, it includes the following containers: Container | Purpose | Image requirements --------- | ------- | ------------------ `copy-agent` (init) | Copies `buildkite-agent` and `tini-static` binaries into `/workspace` | Uses the controller's default image. Cannot be customized. `agent` | Runs `buildkite-agent start` and coordinates the job lifecycle | Uses the controller's default image. Cannot be customized. `checkout` | Clones the repository and runs plugin checkout phases | Requires `git` and `buildkite-agent-entrypoint`.
Custom images must be built from `buildkite/agent`. `container-N` | Executes job commands | Requires a POSIX shell at `/bin/sh`. The `buildkite-agent` binary is available from `/workspace`. `sidecar-N` | User-defined sidecar containers | No specific requirements from the controller. The `copy-agent` init container copies the `buildkite-agent` binary and `tini-static` into `/workspace`, making them available to all command containers regardless of the base image. This means command containers don't need these binaries pre-installed. ##### Specifying custom images You can specify custom images for command containers using one of the three methods outlined below. ###### Using the image attribute In version 0.30.0 and later of the controller, use the `image` attribute directly in your pipeline step: ```yaml steps: - label: "Run tests" agents: queue: kubernetes image: node:20-alpine commands: - npm install - npm test ``` This is the simplest approach for specifying a custom image for a single step. ###### Using podSpecPatch For more control over container configuration, use `podSpecPatch` in the `kubernetes` plugin: ```yaml steps: - label: "Run tests" agents: queue: kubernetes commands: - npm install - npm test plugins: - kubernetes: podSpecPatch: containers: - name: container-0 image: node:24-alpine ``` The container name must match the name assigned by the controller. The first command container is always `container-0`. ###### Using controller configuration To set a default image for all jobs processed by a controller, configure `pod-spec-patch` in the controller's `values.yaml`: ```yaml config: pod-spec-patch: containers: - name: container-0 image: your-registry.example.com/custom-build-image:latest ``` This applies to all jobs unless overridden at the pipeline level. ##### Building custom images When building custom images for command containers, consider the following requirements and recommendations. 
###### Minimum requirements Command containers require a POSIX-compatible shell available at `/bin/sh`. The controller uses this shell to execute commands, so images like `scratch` or `distroless` won't work without modification. The `buildkite-agent` binary is automatically available from `/workspace/buildkite-agent` after the `copy-agent` init container runs. ###### Recommended additions Depending on your build requirements, you may want to include: - `bash` if your commands or plugins require Bash-specific features - `git` if you need to run Git commands during the build (separate from checkout) - `curl` or `wget` for downloading artifacts or dependencies - Build tools specific to your language or framework ###### Using the Buildkite agent image as a base You can use `buildkite/agent` as a base image for custom images that need agent tooling pre-installed: ```dockerfile FROM buildkite/agent:3 #### Install additional dependencies RUN apk add --no-cache nodejs npm #### Add custom tooling COPY scripts/build-tools.sh /usr/local/bin/ ``` ###### Building from scratch For minimal images, start from `alpine` or a language-specific base image: ```dockerfile FROM alpine:3.23 #### Install any required build tools RUN apk add --no-cache \ bash \ curl \ git ``` ##### Customizing the checkout container The checkout container clones your repository before commands run. You can customize it to add tools like [Git LFS](https://git-lfs.com/) or configure environment variables. 
###### Controller-level configuration To use a custom checkout image for all jobs processed by a controller, configure `pod-spec-patch` in the controller's `values.yaml`: ```yaml config: pod-spec-patch: containers: - name: checkout image: your-registry.example.com/custom-checkout:latest env: - name: GIT_TERMINAL_PROMPT value: "0" ``` ###### Pipeline-level configuration To customize the checkout container for a specific step, use `podSpecPatch` in the `kubernetes` plugin: ```yaml steps: - label: "Build" agents: queue: kubernetes commands: - make build plugins: - kubernetes: podSpecPatch: containers: - name: checkout image: your-registry.example.com/custom-checkout:latest env: - name: GIT_TERMINAL_PROMPT value: "0" ``` If you don't need repository checkout, skip it using the `checkout.skip` option: ```yaml steps: - label: "Build from artifact" agents: queue: kubernetes commands: - buildkite-agent artifact download "source.tar.gz" . - tar -xzf source.tar.gz - make build plugins: - kubernetes: checkout: skip: true ``` ##### Image pull configuration When using private registries, configure image pull secrets in the controller or at the pipeline level. ###### Controller-level configuration Add image pull secrets to the controller's `values.yaml`: ```yaml config: pod-spec-patch: imagePullSecrets: - name: my-registry-secret ``` ###### Pipeline-level configuration Add image pull secrets for a specific step using `podSpecPatch`: ```yaml steps: - label: "Build" agents: queue: kubernetes image: your-registry.example.com/private-image:latest commands: - make build plugins: - kubernetes: podSpecPatch: imagePullSecrets: - name: my-registry-secret ``` ##### Troubleshooting This section covers some common issues you might run into when using custom images and how to solve these issues. ###### Command fails with "sh: not found" The image doesn't have a shell at `/bin/sh`. Use an image with a shell installed, or modify your Dockerfile to include one. 
###### Agent binary not found If commands can't find `buildkite-agent`, check that: - The `/workspace` volume is mounted correctly - The `copy-agent` init container completed successfully - Your command uses the correct path (`/workspace/buildkite-agent` or just `buildkite-agent` if PATH is configured) ###### Plugins fail to run Some plugins require specific binaries. For example: - Docker-related plugins need the Docker CLI - AWS plugins may need the AWS CLI - Plugins using Bash features need `bash` Check the plugin's documentation for the requirements. --- ### Pod templates URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/pod-template #### Pod templates From v0.32.2 of agent-stack-k8s, the `kubernetes` plugin allows you to specify a `podTemplate`. The `podTemplate` attribute specifies the name of a [`PodTemplate` resource](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-template-v1/) in the same namespace as the stack controller. Pod templates function similarly to [`podSpec`](podspec), but hide the details of a `podSpec` from the pipeline definition. Stack operators (who can create Kubernetes resources) can set up a shared library of templates. This allows updating pod specs separately from both the stack controller (as with `pod-spec-patch`) and all the pipelines using them, and avoids storing unnecessary platform details within each pipeline definition. ##### How to use 1. Ensure you are using agent-stack-k8s v0.32.2 or later. 1. Create `PodTemplate` resources in the same namespace as the stack controller. 1. Refer to `PodTemplate` resources by name using the `podTemplate` key of the `kubernetes` plugin. ##### Notes `podTemplate` operates similarly to `podSpec`. It provides the initial spec of a pod that is then adjusted by the stack controller. If a `podSpec` is provided, `podTemplate` is ignored. To adjust a `podTemplate` within a step, use `podSpecPatch`. 
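To illustrate adjusting a template within a step, here is a sketch combining the two mechanisms. The template name `go-with-cache` and the patched values are illustrative, and assume a matching `PodTemplate` resource already exists in the controller's namespace:

```yaml
# pipeline.yaml (sketch: a step referencing a PodTemplate, then patching it)
steps:
  - label: "Build"
    command: make build
    agents:
      queue: kubernetes
    plugins:
      - kubernetes:
          # Name of a PodTemplate resource in the controller's namespace (illustrative).
          podTemplate: go-with-cache
          # podSpecPatch is applied on top of the template's spec.
          podSpecPatch:
            containers:
              - name: container-0
                resources:
                  requests:
                    memory: "2Gi"
```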
Other options that change the initial spec include: * Step attributes such as `image` and `command` * The resource class tag * Checkout parameters, such as `skip` * `pod-spec-patch` controller configuration and `podSpecPatch` plugin attribute In addition to `spec` (a `PodSpec`), a `PodTemplate` can also specify metadata within the template. This metadata is ignored by the stack controller - only `spec` is used. ##### Example This example manifest defines a `PodTemplate` called `go-with-cache` in the `buildkite` namespace, that * sets the container image to `golang:latest`, * configures Go to use a [caching tool](https://github.com/bradfitz/go-tool-cache), * attaches a persistent volume claim, * sets a security context to change the user and group. ```yaml apiVersion: v1 kind: PodTemplate metadata: name: go-with-cache namespace: buildkite # Note: must be the same namespace as agent-stack-k8s template: spec: containers: - name: container-0 image: golang:latest env: - name: GOCACHEPROG value: "/tools/go-cacher -cache-server=http://gocached.default.svc.cluster.local:31364" volumeMounts: - name: tools mountPath: /tools readOnly: true volumes: - name: tools persistentVolumeClaim: claimName: tools-shared securityContext: runAsNonRoot: true runAsUser: 1000 runAsGroup: 1001 ``` Once a `PodTemplate` has been created in the cluster, it can be referred to from the `kubernetes` plugin. Here is a pipeline definition that uses the `go-with-cache` template (defined above) multiple times: ```yaml steps: - label: "Go Build" command: go build -o /tmp/out . plugins: - kubernetes: podTemplate: go-with-cache - label: "Go Test" command: go test ./... 
plugins: - kubernetes: podTemplate: go-with-cache - label: "Go Vet" command: go vet plugins: - kubernetes: podTemplate: go-with-cache ``` The equivalent pipeline using only `podSpec` is quite lengthy, with repetitive, deeply-nested configurations containing platform details that are largely irrelevant to the pipeline steps (such as volume configuration): ```yaml steps: - label: "Go Build" command: go build -o /tmp/out . plugins: - kubernetes: podSpec: containers: - name: container-0 image: golang:latest env: - name: GOCACHEPROG value: "/tools/go-cacher -cache-server=http://gocached.default.svc.cluster.local:31364" volumeMounts: - name: tools mountPath: /tools readOnly: true volumes: - name: tools persistentVolumeClaim: claimName: tools-shared securityContext: runAsNonRoot: true runAsUser: 1000 runAsGroup: 1001 - label: "Go Test" command: go test ./... plugins: - kubernetes: podSpec: containers: - name: container-0 image: golang:latest env: - name: GOCACHEPROG value: "/tools/go-cacher -cache-server=http://gocached.default.svc.cluster.local:31364" volumeMounts: - name: tools mountPath: /tools readOnly: true volumes: - name: tools persistentVolumeClaim: claimName: tools-shared securityContext: runAsNonRoot: true runAsUser: 1000 runAsGroup: 1001 - label: "Go Vet" command: go vet plugins: - kubernetes: podSpec: containers: - name: container-0 image: golang:latest env: - name: GOCACHEPROG value: "/tools/go-cacher -cache-server=http://gocached.default.svc.cluster.local:31364" volumeMounts: - name: tools mountPath: /tools readOnly: true volumes: - name: tools persistentVolumeClaim: claimName: tools-shared securityContext: runAsNonRoot: true runAsUser: 1000 runAsGroup: 1001 ``` While other techniques can be used to shorten this example (such as YAML anchors/aliases or the controller `pod-spec-patch` configuration), they are less flexible than using `podTemplate`, and changes to the pod spec would require either updating the pipeline or the controller configuration. 
--- ### Container resource limits URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/container-resource-limits #### Container resources (requests and limits) Default resources for requests and limits can be allocated to Pods and their containers using the PodSpec patch in the Buildkite [Agent Stack for Kubernetes controller's values YAML configuration file](#using-the-podspec-patch-in-the-controller-values-yaml-configuration-file), which applies across the board, or within a [pipeline's YAML file](#overriding-the-podspec-patch-for-a-single-job), which can override those defined in the values YAML configuration file. Alternatively, resource classes can be configured, allowing workloads to select their resources by specifying a `resource_class` agent tag. ##### Using resource class Resource classes allow you to define reusable resource configurations that can be applied to CI workloads based on agent tags. > 📘 Minimum version requirement > To implement the agent configuration options described in this section, version 0.31.0 or later of the Agent Stack for Kubernetes controller is required. ###### Configuration Resource classes are defined in the controller configuration under the `resource-classes` key: ```yaml #### values.yaml config: resource-classes: class-name: resource: # Optional: Kubernetes resource requirements requests: cpu: "100m" memory: "128Mi" limits: cpu: "200m" memory: "256Mi" nodeSelector: # Optional: Kubernetes node selector instance-type: "small" zone: "us-west-2a" ``` - **resource** (optional): [Kubernetes ResourceRequirements object](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) that will be applied to the command container. * **requests**: Resource requests (CPU, memory, etc.). * **limits**: Resource limits (CPU, memory, etc.). - **nodeSelector** (optional): Key-value pairs for [Kubernetes node selection](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector).
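As a concrete sketch, a `values.yaml` might define a couple of named classes like this. The class names, sizes, and node labels are illustrative, not prescribed by the controller:

```yaml
# values.yaml (illustrative resource classes; names and sizes are examples only)
config:
  resource-classes:
    small:
      resource:
        requests:
          cpu: "500m"
          memory: "512Mi"
        limits:
          cpu: "1"
          memory: "1Gi"
    medium:
      resource:
        requests:
          cpu: "1"
          memory: "2Gi"
        limits:
          cpu: "2"
          memory: "4Gi"
      # Only schedule these jobs onto matching nodes (label is an example).
      nodeSelector:
        instance-type: "medium"
```

A job whose agent tags include `resource_class: medium` would then run its command container with those requests and limits, scheduled onto nodes matching the `nodeSelector`.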
###### Usage To use a resource class in your CI pipeline, specify the `resource_class` agent tag: ```yaml #### pipeline.yaml steps: - label: "Build" command: "make build" agents: resource_class: "medium" ``` > 📘 Minimum version requirement > To configure a default resource class, version 0.37.0 or later of the Agent Stack for Kubernetes controller is required. You can specify a default resource class that applies to jobs without an explicit `resource_class` agent tag. This ensures all jobs receive resource requests and limits, even when pipeline steps don't specify a resource class. Configure this default using the `default-resource-class-name` key, which must reference a named resource class from `resource-classes`: ```yaml #### values.yaml config: resource-classes: small: resource: requests: cpu: "500m" memory: "512Mi" large: resource: requests: cpu: "2" memory: "4Gi" default-resource-class-name: "small" ``` With this configuration: - Jobs without a `resource_class` agent tag receive the `small` resource class. - Jobs that explicitly specify `resource_class: large` (or any other defined class) use that class instead. The controller validates that `default-resource-class-name` references an existing resource class at startup. If the specified class doesn't exist in `resource-classes`, the controller fails to start with an error.
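Under a configuration like the one above, a pipeline can mix the default and explicit classes. A sketch, where the step labels and commands are illustrative:

```yaml
# pipeline.yaml (illustrative; assumes the small/large classes and default above)
steps:
  # No resource_class tag: the default class (small) applies.
  - label: "Unit tests"
    command: "make test"
    agents:
      queue: kubernetes

  # Explicit resource_class tag: the large class is used instead.
  - label: "Integration tests"
    command: "make integration-test"
    agents:
      queue: kubernetes
      resource_class: "large"
```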
##### Using the PodSpec patch in the controller values YAML configuration file In the Buildkite Agent Stack for Kubernetes controller's values YAML configuration file, you can specify the default resources (requests and limits) to apply to the Pods and containers: ```yaml #### values.yaml agentStackSecret: config: pod-spec-patch: initContainers: - name: copy-agent resources: requests: cpu: 100m memory: 50Mi limits: memory: 100Mi containers: - name: agent # this container acquires the job resources: requests: cpu: 100m memory: 50Mi limits: memory: 1Gi - name: checkout # this container clones the repository resources: requests: cpu: 100m memory: 50Mi limits: memory: 1Gi - name: container-0 # the job runs in a container with this name by default resources: requests: cpu: 100m memory: 50Mi limits: memory: 1Gi ``` ##### Overriding the PodSpec patch for a single job Following on from the Agent Stack for Kubernetes controller's YAML configuration values file above, all the Kubernetes Jobs created by the controller will have the resources (defined in this file) applied to them. To override these resources for a single job, use the `kubernetes` plugin with `podSpecPatch` to define container resources. For example: ```yaml #### pipelines.yaml agents: queue: kubernetes steps: - name: Hello from a container with more resources command: echo Hello World! plugins: - kubernetes: podSpecPatch: containers: - name: container-0 # <-- Specify this exactly as `container-0`. resources: # Currently under experimentation to make this more ergonomic. requests: cpu: 1000m memory: 50Mi limits: memory: 1Gi - name: Hello from a container with default resources command: echo "Hello World!" 
``` ##### Configuring imagecheck-* containers To define [CPU](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#cpu-units) and [memory](https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#memory-units) resource limits for your containers, use the `image-check-container-cpu-limit` and `image-check-container-memory-limit` configuration values: ```yaml #### values.yaml config: image-check-container-cpu-limit: 100m image-check-container-memory-limit: 128Mi ``` --- ### Volume mounts URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/volume-mounts #### Volume mounts You can attach extra volume mounts (in addition to the `/workspace` one) to some or all of the pod containers. This can be useful when using [git mirrors](/docs/agent/self-hosted/configure/experiments#promoted-experiments-git-mirrors), which are mounted as extra volumes. To attach extra volume mounts to _all_ containers (`checkout`, `agent`, `command`, `sidecar`, etc.), you can use the `kubernetes` plugin. For example: ```yaml steps: - label: ":file_cabinet: Share file across containers using volume mount" key: share-file-using-scratch-volume env: SCRATCH_VOLUME_PATH: "/tmp/scratch" SCRATCH_VOLUME_PATH_TIMEOUT_SECONDS: "10" plugins: - kubernetes: podSpec: containers: - image: alpine:latest command: - touch $${SCRATCH_VOLUME_PATH}/foo-$${BUILDKITE_JOB_ID}.txt - image: alpine:latest command: - |- COUNT=0 until [[ $$((COUNT++)) == $${SCRATCH_VOLUME_PATH_TIMEOUT_SECONDS} ]]; do [[ -f "$${SCRATCH_VOLUME_PATH}/foo-$${BUILDKITE_JOB_ID}.txt" ]] && break echo "⚠️ Waiting for $${SCRATCH_VOLUME_PATH}/foo-$${BUILDKITE_JOB_ID}.txt to be written... (Attempt $${COUNT}/$${SCRATCH_VOLUME_PATH_TIMEOUT_SECONDS})" sleep 1 done if !
[[ -f "$${SCRATCH_VOLUME_PATH}/foo-$${BUILDKITE_JOB_ID}.txt" ]]; then echo "⛔ $${SCRATCH_VOLUME_PATH}/foo-$${BUILDKITE_JOB_ID}.txt has not been written" exit 1 fi echo "✅ $${SCRATCH_VOLUME_PATH}/foo-$${BUILDKITE_JOB_ID}.txt has been written" rm -f "$${SCRATCH_VOLUME_PATH}/foo-$${BUILDKITE_JOB_ID}.txt" volumes: - name: scratch-volume hostPath: path: "/tmp/volumes/scratch" type: DirectoryOrCreate extraVolumeMounts: - name: scratch-volume mountPath: /tmp/scratch ``` ##### Checkout containers only To attach extra volumes only to your `checkout` containers, define `config.default-checkout-params.extraVolumeMounts` in your YAML configuration. For example: ```yaml #### values.yaml config: default-checkout-params: gitCredentialsSecret: secretName: my-git-credentials extraVolumeMounts: - name: checkout-extra-dir mountPath: /extra-checkout pod-spec-patch: containers: - name: checkout image: "buildkite/agent:latest" volumes: - name: checkout-extra-dir hostPath: path: /my/extra/dir/checkout type: DirectoryOrCreate ``` Alternatively, you can also do this via `checkout.extraVolumeMounts` in the `kubernetes` plugin. For example: ```yaml #### pipeline.yml ... kubernetes: checkout: extraVolumeMounts: - name: checkout-extra-dir mountPath: /extra-checkout podSpecPatch: containers: - name: checkout image: "buildkite/agent:latest" volumes: - name: checkout-extra-dir hostPath: path: /my/extra/dir/checkout type: DirectoryOrCreate ``` ##### Command containers only To attach extra volumes only to your `container-#` (`command`) containers, define `config.default-command-params.extraVolumeMounts` in your YAML configuration. 
For example: ```yaml #### values.yaml config: default-command-params: extraVolumeMounts: - name: command-extra-dir mountPath: /extra-command pod-spec-patch: containers: - name: container-0 image: "buildkite/agent:latest" volumes: - name: command-extra-dir hostPath: path: /my/extra/dir/command type: DirectoryOrCreate ``` Alternatively, you can also do this via `commandParams.extraVolumeMounts` in the `kubernetes` plugin. For example: ```yaml #### pipeline.yml ... kubernetes: commandParams: extraVolumeMounts: - name: command-extra-dir mountPath: /extra-command podSpecPatch: containers: - name: container-0 image: "buildkite/agent:latest" volumes: - name: command-extra-dir hostPath: path: /my/extra/dir/command type: DirectoryOrCreate ``` ##### Sidecar containers only To attach extra volumes only to your `sidecar` containers, define `config.default-sidecar-params.extraVolumeMounts` in your YAML configuration. For example: ```yaml #### values.yaml config: default-sidecar-params: extraVolumeMounts: - name: sidecar-extra-dir mountPath: /extra-sidecar pod-spec-patch: containers: - name: checkout image: "buildkite/agent:latest" volumes: - name: sidecar-extra-dir hostPath: path: /my/extra/dir/sidecar type: DirectoryOrCreate ``` Alternatively, you can also do this via `sidecarParams.extraVolumeMounts` in the `kubernetes` plugin. For example: ```yaml #### pipeline.yml ... kubernetes: sidecars: - image: nginx:latest sidecarParams: extraVolumeMounts: - name: sidecar-extra-dir mountPath: /extra-sidecar podSpecPatch: containers: - name: checkout image: "buildkite/agent:latest" volumes: - name: sidecar-extra-dir hostPath: path: /my/extra/dir/sidecar type: DirectoryOrCreate ``` --- ### Command override URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/overriding-commands #### Overriding commands You can alter the `command` or `args` for `command` containers using PodSpecPatch. These will be re-wrapped in the necessary `buildkite-agent` invocation. 
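As a minimal sketch of overriding a command container's command (the step label, image, and echoed text are illustrative):

```yaml
# pipeline.yaml (sketch: podSpecPatch replacing the command container's command)
steps:
  - label: "Patched command"
    command: echo original command   # replaced by the patch below
    agents:
      queue: kubernetes
    plugins:
      - kubernetes:
          podSpecPatch:
            containers:
              - name: container-0   # the first command container
                # Re-wrapped by the controller in the buildkite-agent invocation.
                command:
                  - echo patched command
```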
However, PodSpecPatch will not modify the `command` or `args` values for containers with the following names or patterns (provided by the Agent Stack for Kubernetes controller):

- `copy-agent`
- `imagecheck-*`
- `agent`
- `checkout`

Instead, if an attempt is made to modify the `command` or `args` values for these containers, an error is returned. If you need to modify the commands of these containers, consider other potential solutions:

- To override checkout behaviour, consider writing a `checkout` hook, or disabling the checkout container entirely with `checkout: skip: true`.
- To run additional containers without `buildkite-agent` in them, consider using a [sidecar](/docs/agent/self-hosted/agent-stack-k8s/sidecars).

> 📘
> Buildkite is continually looking into adding ways to make the Buildkite Agent Stack for Kubernetes more flexible while ensuring core functionality is maintained.

##### Important considerations and precautions

Avoid using PodSpecPatch to override the `command` or `args` of the containers added by the Agent Stack for Kubernetes controller. Such modifications, if not done with extreme care and detailed knowledge of how the controller constructs PodSpecs, are very likely to break the agent's functionality within the pod.

If the replacement command for the checkout container does not invoke `buildkite-agent bootstrap`:

- The container will not connect to the `agent` container, and the agent will not finish the job normally because the expected number of other containers did not connect to it.
- The logs from the container will not be visible in Buildkite Pipelines.
- Hooks will not be executed automatically.
- Plugins will not be checked out or executed automatically, and various other functions provided by `buildkite-agent` may not work.
If the command for the `agent` container is overridden, and the replacement command does not invoke `buildkite-agent start`, then the job will not be acquired on Buildkite Pipelines at all.

If you still wish to disable this precaution, and override the raw `command` or `args` of these controller-provided containers using PodSpecPatch, you can do so with the `allow-pod-spec-patch-unsafe-command-modification` config option.

---

### Securing the stack

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/securing-the-stack

#### Securing the Agent Stack for Kubernetes

> 📘 Minimum version requirement
> To implement the configuration options described on this page, version 0.13.0 or later of the Agent Stack for Kubernetes controller is required.

To secure Buildkite Pipelines jobs on the Agent Stack for Kubernetes controller, the `prohibit-kubernetes-plugin` configuration option can be used to prevent users from overriding a controller-defined `pod-spec-patch`. With the `prohibit-kubernetes-plugin` configuration enabled, any Pipelines job that includes the `kubernetes` plugin will fail.

##### Using inline configuration

Enable the `prohibit-kubernetes-plugin` option in your Helm deployment:

```bash
helm upgrade --install agent-stack-k8s oci://ghcr.io/buildkite/helm/agent-stack-k8s \
  --namespace buildkite \
  --create-namespace \
  --set agentToken= \
  --set-json='config.tags=["queue=kubernetes"]' \
  --set config.prohibit-kubernetes-plugin=true
```

##### Using a YAML configuration file

You can also enable the `prohibit-kubernetes-plugin` option in your configuration values YAML file:

```yaml
# values.yaml
...
config:
  prohibit-kubernetes-plugin: true
  pod-spec-patch:
    # Override the default podSpec here.
    ...
```

---

### Prometheus metrics

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/prometheus-metrics

#### Prometheus metrics

All [Prometheus metrics](https://prometheus.io/) exported by the Agent Stack for Kubernetes controller begin with `buildkite_`.
The second component of the metric name refers to the controller component that produces the metric.

##### How to enable Prometheus monitoring

The Agent Stack for Kubernetes controller can expose Prometheus metrics for monitoring and observability. To enable Prometheus monitoring, complete these two steps:

1. Enable metrics port exposure in the Helm chart.
1. Create a PodMonitor resource for scraping.

> 📘
> The instructions that follow assume that you have [Prometheus Operator](https://prometheus-operator.dev/) installed in your [cluster](/docs/pipelines/security/clusters). If you're using a different Prometheus setup, you'll need to configure scraping manually.

###### Enabling metrics port exposure

Configure the `prometheus-port` option in your Helm deployment to expose the metrics endpoint. You can use either the command-line or the values file approach.

###### Command-line approach

Use the following command to expose the metrics endpoint:

```bash
helm upgrade --install agent-stack-k8s oci://ghcr.io/buildkite/helm/agent-stack-k8s \
  --namespace buildkite \
  --create-namespace \
  --set agentToken= \
  --set config.prometheus-port=8080
```

###### Values file approach

Set the following configuration in your values file:

```yaml
# values.yml
agentToken: ""
config:
  prometheus-port: 8080
  tags:
  - queue=kubernetes
```

And run the following command:

```bash
helm upgrade --install agent-stack-k8s oci://ghcr.io/buildkite/helm/agent-stack-k8s \
  --namespace buildkite \
  --create-namespace \
  --values values.yml
```

This exposes metrics on port 8080 at the `/metrics` endpoint within the controller pod.
###### Creating a PodMonitor for scraping

If you're using [Prometheus Operator](https://prometheus-operator.dev/), create a `PodMonitor` resource to automatically scrape metrics from the controller:

```yaml
# buildkite-podmonitor.yml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: buildkite-agent-stack
  namespace: buildkite
  labels:
    app: buildkite-agent-stack
spec:
  selector:
    matchLabels:
      # Replace with your actual Helm release name followed by "-agent-stack-k8s"
      app: agent-stack-k8s
  podMetricsEndpoints:
  - port: metrics
    path: /metrics
    interval: 30s
```

Apply the PodMonitor:

```bash
kubectl apply -f buildkite-podmonitor.yml
```

###### Verification

Verify that monitoring is working correctly:

```bash
# Check that the metrics port is exposed
kubectl get pods -n buildkite -o wide
kubectl port-forward -n buildkite deployment/agent-stack-k8s 8080:8080

# In another terminal, test the metrics endpoint
curl http://localhost:8080/metrics

# Verify the PodMonitor is created and discovered
kubectl get podmonitor -n buildkite
```

##### Notes on using the metrics

Most of the metrics below are counters, designed to be used with the `rate` function and a time window. These are named ending in `_total`. PromQL examples:

- `rate(buildkite_scheduler_job_create_success_total[10m])` - jobs successfully created per second, over a 10-minute window.
- `rate(buildkite_scheduler_job_create_errors_total[10m])` - job creation failures per second, over a 10-minute window.

Some metrics are gauges, which can be useful for diagnosing particular issues.

A few metrics are native histograms, which require the Prometheus feature flag to be enabled (`--enable-feature=native-histograms`).
These are mostly latency histograms named ending in `_seconds`, and again work well with `rate`:

- `rate(buildkite_job_end_to_end_seconds[10m])` - histogram of the time in seconds that jobs spent between being returned from a query to Buildkite and being created in Kubernetes, over a 10-minute window.
- `rate(buildkite_monitor_job_query_seconds[10m])` - histogram of the time spent querying Buildkite for jobs that can be scheduled, over a 10-minute window.

##### Labels and their meanings

Label name | Description | Values
--- | --- | ---
`source` | The event that caused a counter to increase. | `Handle` - the previous component<br/>`OnAdd` - the Kubernetes Informer (e.g. an existing job, or a job created by another instance of agent-stack-k8s)<br/>`OnDelete` - the Kubernetes Informer (e.g. the job was deleted externally)<br/>`OnUpdate` - the Kubernetes Informer (e.g. the job was modified externally or changed state automatically)
`reason`, `error_reason` | For operations on Kubernetes, the Kubernetes reason associated with an error. | Examples:<br/>`TooManyRequests` - the Kubernetes server is overloaded<br/>`AlreadyExists` - the resource (e.g. job) already exists in the cluster<br/>`Invalid` - the resource (e.g. job) couldn't be created because it was invalid
`reason` | For the limiter, a classification of the error returned by the downstream component. | `duplicate` - a latter component or Kubernetes determined the job is a duplicate<br/>`stale` - the job was cancelled or no longer existed by the time it was possible to start work on it<br/>`other` - some other error prevented the job from being handled
`eviction_reason` | The reason an eviction was created. | `image_pull_failure` - one or more container images couldn't be pulled within a timeout<br/>`bk_job_cancelled` - the corresponding Buildkite job was cancelled on Buildkite

##### completion_watcher

Full metric name | Labels | Description
--- | --- | ---
`buildkite_completion_watcher_cleanup_errors_total` | `reason` | Count of errors during attempts to clean up a job with a finished agent
`buildkite_completion_watcher_cleanups_total` | - | Count of jobs with finished agents successfully cleaned up
`buildkite_completion_watcher_onadd_events_total` | - | Count of OnAdd informer events
`buildkite_completion_watcher_onupdate_events_total` | - | Count of OnUpdate informer events

##### deduper

Full metric name | Labels | Description
--- | --- | ---
`buildkite_deduper_job_handler_calls_total` | - | Count of jobs that were passed to the next handler in the chain
`buildkite_deduper_job_handler_errors_total` | - | Count of jobs that weren't scheduled because the next handler in the chain returned an error
`buildkite_deduper_jobs_already_not_running_total` | `source` | Count of times a job was already missing from inFlight
`buildkite_deduper_jobs_already_running_total` | `source` | Count of times a job was already present in inFlight
`buildkite_deduper_jobs_marked_running_total` | `source` | Count of times a job was added to inFlight
`buildkite_deduper_jobs_running` | - | Current number of running jobs according to deduper
`buildkite_deduper_jobs_unmarked_running_total` | `source` | Count of times a job was removed from inFlight
`buildkite_deduper_onadd_events_total` | - | Count of OnAdd informer events
`buildkite_deduper_ondelete_events_total` | - | Count of OnDelete informer events
`buildkite_deduper_onupdate_events_total` | - | Count of OnUpdate informer events

##### job_watcher

Full metric name | Labels | Description
--- | --- | ---
`buildkite_job_watcher_cleanup_errors_total` | `reason` | Count of errors during attempts to clean up a stalled job
`buildkite_job_watcher_cleanups_total` | - | Count of stalled jobs successfully cleaned up
`buildkite_job_watcher_job_fail_on_buildkite_errors_total` | - | Count of errors when jobWatcher tried to acquire and fail a job on Buildkite
`buildkite_job_watcher_jobs_failed_on_buildkite_total` | - | Count of jobs that jobWatcher successfully acquired and failed on Buildkite
`buildkite_job_watcher_jobs_finished_without_pod_total` | - | Count of jobs that entered a terminal state (Failed or Succeeded) without a pod
`buildkite_job_watcher_jobs_stalled_without_pod_total` | - | Count of jobs that ran for too long without a pod
`buildkite_job_watcher_num_ignored_jobs` | - | Current count of jobs ignored for jobWatcher checks
`buildkite_job_watcher_num_stalling_jobs` | - | Current number of jobs that are running but have no pods
`buildkite_job_watcher_onadd_events_total` | - | Count of OnAdd informer events
`buildkite_job_watcher_ondelete_events_total` | - | Count of OnDelete informer events
`buildkite_job_watcher_onupdate_events_total` | - | Count of OnUpdate informer events

##### limiter

Full metric name | Labels | Description
--- | --- | ---
`buildkite_limiter_job_handler_calls_total` | - | Count of jobs that were passed to the next handler in the chain
`buildkite_limiter_job_handler_errors_total` | `reason` | Count of jobs that weren't scheduled because the next handler in the chain returned an error
`buildkite_limiter_max_in_flight` | - | Configured limit on number of jobs simultaneously in flight
`buildkite_limiter_onadd_events_total` | - | Count of OnAdd informer events
`buildkite_limiter_ondelete_events_total` | - | Count of OnDelete informer events
`buildkite_limiter_onupdate_events_total` | - | Count of OnUpdate informer events
`buildkite_limiter_token_overflows_total` | `source` | Count of attempts to return a token when the bucket was full
`buildkite_limiter_token_underflows_total` | `source` | Count of attempts to take a token when the bucket was empty
`buildkite_limiter_token_wait_duration_seconds` | - | Time spent waiting for a limiter token to become available
`buildkite_limiter_tokens_available` | - | Limiter tokens currently available
`buildkite_limiter_waiting_for_token` | - | Number of limiter workers currently waiting for a token
`buildkite_limiter_waiting_for_work` | - | Number of limiter workers currently waiting for work
`buildkite_limiter_work_queue_length` | - | Amount of enqueued work in the limiter
`buildkite_limiter_work_wait_duration_seconds` | - | Time spent waiting in the limiter for work to become available

##### monitor

Full metric name | Labels | Description
--- | --- | ---
`buildkite_monitor_job_handler_errors_total` | - | Count of jobs that weren't scheduled because the next handler in the chain returned an error
`buildkite_monitor_job_queries_total` | - | Count of queries to Buildkite to fetch jobs
`buildkite_monitor_job_query_errors_total` | - | Count of errors from queries to Buildkite to fetch jobs
`buildkite_monitor_job_query_seconds` | - | Time taken to fetch jobs from Buildkite
`buildkite_monitor_jobs_filtered_out_total` | - | Count of jobs that didn't match the configured agent tags
`buildkite_monitor_jobs_handled_total` | - | Count of jobs that were passed to the next handler in the chain
`buildkite_monitor_jobs_returned_total` | - | Count of jobs returned from queries to Buildkite
`buildkite_monitor_monitor_up` | - | Whether the monitor loop is running (0 = stopped, 1 = running)

##### pod_watcher

Full metric name | Labels | Description
--- | --- | ---
`buildkite_pod_watcher_job_fail_on_buildkite_errors_total` | - | Count of errors when podWatcher tried to acquire and fail a job on Buildkite
`buildkite_pod_watcher_jobs_failed_on_buildkite_total` | - | Count of jobs that podWatcher successfully acquired and failed on Buildkite
`buildkite_pod_watcher_num_ignored_jobs` | - | Current count of jobs ignored for podWatcher checks
`buildkite_pod_watcher_num_job_cancel_checkers` | - | Current count of job cancellation checkers
`buildkite_pod_watcher_num_watching_for_image_failure` | - | Current count of pods being watched for potential image-related failures
`buildkite_pod_watcher_onadd_events_total` | - | Count of OnAdd informer events
`buildkite_pod_watcher_ondelete_events_total` | - | Count of OnDelete informer events
`buildkite_pod_watcher_onupdate_events_total` | - | Count of OnUpdate informer events
`buildkite_pod_watcher_pod_eviction_errors_total` | `eviction_reason`, `error_reason` | Count of failures to create pod evictions by podWatcher
`buildkite_pod_watcher_pods_evicted_total` | `eviction_reason` | Count of evictions created for pods by podWatcher

##### scheduler

Full metric name | Labels | Description
--- | --- | ---
`buildkite_scheduler_job_create_calls_total` | - | Count of jobs that were passed to Kubernetes to create
`buildkite_scheduler_job_create_errors_total` | `reason` | Count of jobs that weren't created in Kubernetes because of an error
`buildkite_scheduler_job_create_success_total` | - | Count of jobs that were successfully created in Kubernetes
`buildkite_scheduler_job_fail_on_buildkite_errors_total` | - | Count of errors when scheduler tried to acquire and fail a job on Buildkite
`buildkite_scheduler_jobs_failed_on_buildkite_total` | - | Count of jobs that scheduler successfully acquired and failed on Buildkite

##### Other

Full metric name | Labels | Description
--- | --- | ---
`buildkite_job_end_to_end_seconds` | - | End-to-end processing times of jobs.
Specifically, for each job, the duration between starting the query that returned the job from Buildkite, and successfully creating that job in Kubernetes.

---

### Buildah

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/buildah-container-builds

#### Buildah container builds

[Buildah](https://buildah.io/) provides a lightweight, daemonless approach to building [Open Container Initiative (OCI)](https://opencontainers.org/)-compliant container images, making it a suitable choice for Agent Stack for Kubernetes in cases where running a Docker daemon within build containers might not be desired or possible.

##### Buildah daemonless builds

Unlike Docker, Buildah operates without the need for a persistent daemon. Buildah can build containers from Dockerfiles or [Containerfiles](https://github.com/containers/buildah/discussions/3170) (the OCI standard format) or through its native command-line interface. This approach provides better security isolation and works well within Kubernetes environments.

##### Using Buildah with Agent Stack for Kubernetes

Agent Stack for Kubernetes supports multiple Buildah configurations, each providing different security trade-offs. Choose the approach that best matches your environment's security policies:

- **Privileged**: maximum compatibility, requires privileged containers.
- **Rootless**: enhanced security, runs as a non-root user.

###### Privileged Buildah

**Recommended**: When you need maximum compatibility and your cluster allows privileged containers.

**Security impact**: The container has root access to host kernel features. Use only in trusted environments.

**How it works**: Buildah runs as root with `privileged: true`, giving access to all kernel capabilities needed for container operations.

```yaml
steps:
  - label: ":package: Buildah privileged container build"
    agents:
      queue: kubernetes
    command: |
      buildah bud \
        --format docker \
        --file Dockerfile \
        --tag myimage:${BUILDKITE_BUILD_NUMBER} \
        .
    plugins:
      - kubernetes:
          podSpec:
            volumes:
              - name: buildah-storage
                emptyDir: {}
            containers:
              - name: main
                image: quay.io/buildah/stable:latest
                env:
                  - name: BUILDAH_ISOLATION
                    value: "chroot"
                volumeMounts:
                  - name: buildah-storage
                    mountPath: "/var/lib/containers"
                securityContext:
                  privileged: true
```

###### Rootless Buildah

**Recommended**: When you want secure container builds without privileged access (recommended for most environments).

**Security impact**: Runs as a [non-root user](https://docs.docker.com/engine/security/rootless/) (`UID 1000`), significantly reducing the attack surface compared to the privileged mode.

**How it works**: Buildah uses user namespaces and a rootless container runtime. Buildah runs as a regular user but can still build containers through user namespace mapping.

```yaml
steps:
  - label: ":package: Buildah rootless container build"
    agents:
      queue: kubernetes
    command: |
      buildah bud \
        --format docker \
        --file Dockerfile \
        --tag myimage:${BUILDKITE_BUILD_NUMBER} \
        .
    plugins:
      - kubernetes:
          podSpec:
            volumes:
              - name: buildah-storage
                emptyDir: {}
            containers:
              - name: main
                image: quay.io/buildah/stable:latest
                env:
                  - name: BUILDAH_ISOLATION
                    value: "chroot"
                volumeMounts:
                  - name: buildah-storage
                    mountPath: "/home/build/.local/share/containers"
                securityContext:
                  runAsNonRoot: true
                  runAsUser: 1000
                  runAsGroup: 1000
```

##### Configuration comparison

The following table highlights the key differences between privileged and rootless Buildah container configurations in Kubernetes environments.
| Feature | Privileged | Rootless |
| ----------------------- | ------------------------------- | ------------------------------- |
| Container image | `quay.io/buildah/stable:latest` | `quay.io/buildah/stable:latest` |
| Runs as user | root (0) | user (1000) |
| Privileged access | Yes (`privileged: true`) | No |
| Storage driver | overlay (default) | overlay (default) |
| Storage path | `/var/lib/containers` | `/home/build/.local/share/containers` |
| Kubernetes version | Any | Any |

##### Understanding the components

This section covers the key components and configuration options for running Buildah in Kubernetes, including container images, security contexts, storage drivers and paths, and build isolation modes.

###### Container images

The official Buildah image that runs in both privileged and rootless modes and supports both configurations is `quay.io/buildah/stable:latest`.

###### Security contexts

- **Privileged**: container runs as root with `privileged: true`, bypassing most Kubernetes security controls.
- **Rootless**: container runs as `user 1000` using user namespace mapping. Host kernel sees a regular user, container sees root.

###### Storage driver

Buildah uses container storage backends:

- **`overlay`**: fast and efficient, used by default in both privileged and rootless modes. Modern Buildah images support overlay in rootless environments without requiring `/dev/fuse` or additional configuration.
- **`vfs`**: fallback option that works in all environments but is slower, especially with bigger images. Can be specified with `--storage-driver vfs` if overlay encounters issues.

###### Storage paths

The storage location depends on who owns the Buildah process:

- **Root user (privileged)**: uses the system location `/var/lib/containers`.
- **Regular user (rootless)**: uses the user home directory `/home/build/.local/share/containers`.

###### Build isolation

The recommended isolation mode for the Buildah container environments is `BUILDAH_ISOLATION=chroot`.
It provides good isolation without requiring additional privileges, unlike other isolation modes that may need extra capabilities.

##### Customizing the build

You can customize your Buildah builds by modifying the `buildah bud` command options using the approaches outlined below.

###### Using build arguments

```bash
buildah bud \
  --format docker \
  --file Dockerfile \
  --build-arg NODE_ENV=production \
  --build-arg VERSION=$BUILDKITE_BUILD_NUMBER \
  --tag myimage:${BUILDKITE_BUILD_NUMBER} \
  .
```

###### Targeting specific build stages

```bash
buildah bud \
  --format docker \
  --file Dockerfile \
  --target production \
  --tag myimage:${BUILDKITE_BUILD_NUMBER} \
  .
```

###### Building and pushing to a registry

```bash
# Build the image
buildah bud \
  --format docker \
  --file Dockerfile \
  --tag myregistry.com/myimage:${BUILDKITE_BUILD_NUMBER} \
  .

# Push to the registry
buildah push \
  --creds ${REGISTRY_USER}:${REGISTRY_PASSWORD} \
  myregistry.com/myimage:${BUILDKITE_BUILD_NUMBER}
```

###### Exporting as a tar file

```bash
# Build the image
buildah bud \
  --format docker \
  --file Dockerfile \
  --tag myimage:${BUILDKITE_BUILD_NUMBER} \
  .

# Export to tar
buildah push \
  myimage:${BUILDKITE_BUILD_NUMBER} \
  docker-archive:image.tar
```

###### Using an alternative storage driver

If you encounter issues with the default overlay driver, you can use `vfs` as a fallback:

```bash
buildah bud \
  --storage-driver vfs \
  --format docker \
  --file Dockerfile \
  --tag myimage:${BUILDKITE_BUILD_NUMBER} \
  .
```

##### Troubleshooting

This section describes common Buildah issues and ways to solve them.

###### Permission denied errors

- **Privileged**: ensure `securityContext.privileged: true` is configured.
- **Rootless**: verify `runAsUser: 1000` and `runAsGroup: 1000` are set.
- Verify the storage mount at `/var/lib/containers` (for privileged) or `/home/build/.local/share/containers` (for rootless).
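When debugging permission errors, it can help to confirm which storage path Buildah will use for the user the container actually runs as. The following sketch encodes the privileged/rootless path defaults described above as a small helper (the function name is illustrative, not part of Buildah):

```shell
# Sketch: map the effective UID to the storage path Buildah will use,
# following the privileged/rootless defaults described above.
buildah_storage_path() {
  if [ "$1" -eq 0 ]; then
    echo "/var/lib/containers"                   # root: privileged mode
  else
    echo "/home/build/.local/share/containers"   # non-root: rootless mode
  fi
}

# Report the path expected for the current user, and whether it is writable.
path=$(buildah_storage_path "$(id -u)")
if [ -w "$path" ]; then
  echo "storage path $path is writable"
else
  echo "storage path $path is NOT writable - check your volumeMounts"
fi
```

If the reported path is not writable, the pod is missing a volume mount at that location, or the `securityContext` does not match the image's expected user.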
###### Storage driver errors

- The default overlay driver should work in both privileged and rootless modes.
- If overlay fails, try `--storage-driver vfs` as a fallback (slower but more compatible).
- Check that the storage volume has sufficient space.

###### Registry authentication failures

- Use `buildah login` before pushing: `buildah login --username $USER --password $PASS registry.com`, or pass credentials directly with the `--creds` flag.
- Ensure registry credentials are available as environment variables or secrets.

###### Image format compatibility issues

- Use `--format docker` for Docker registry compatibility.
- Use `--format oci` for strict OCI compliance.
- The default format varies by Buildah version.

##### Debugging builds

You can increase Buildah output verbosity with debug flags:

```bash
buildah --log-level=debug bud \
  --format docker \
  --file Dockerfile \
  --tag myimage:${BUILDKITE_BUILD_NUMBER} \
  .
```

##### Inspecting the built image

Use the following Buildah commands to inspect the built image:

```bash
# List images
buildah images

# Inspect image details
buildah inspect myimage:${BUILDKITE_BUILD_NUMBER}

# List containers
buildah containers
```

---

### BuildKit

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/buildkit-container-builds

#### BuildKit container builds

[BuildKit](https://docs.docker.com/build/buildkit/) is a Docker builder that provides advanced features for building container images in a daemonless environment, making it a good fit for Agent Stack for Kubernetes when running a Docker daemon within build containers may not be desired or possible.

##### BuildKit daemonless builds

The BuildKit daemon can be run in [rootless mode](https://github.com/moby/buildkit/blob/b9322799388c6c0d598cb70236d22081c5db3c4b/docs/rootless.md) or embedded directly into your build process without requiring a persistent daemon.
These deployment options provide better security isolation and work well within Kubernetes environments.

##### Using BuildKit with Agent Stack for Kubernetes

Agent Stack for Kubernetes supports multiple BuildKit configurations, each providing different security trade-offs. Choose the approach that best matches your environment's security policies and container runtime restrictions:

- **Privileged**: maximum compatibility, requires [privileged containers](https://docs.docker.com/enterprise/security/hardened-desktop/enhanced-container-isolation/#secured-privileged-containers).
- **Rootless (Non-Privileged)**: enhanced security, runs as a non-root user.
- **Rootless (Strict)**: maximum security isolation with the additional sandbox disabled.

###### Privileged BuildKit

**Recommended**: When you need maximum compatibility and your cluster allows privileged containers.

**Security impact**: The container has root access to host kernel features. Use only in trusted environments.

**How it works**: Runs as root with `privileged: true`, giving access to all kernel capabilities needed for container operations.

```yaml
steps:
  - label: "\:docker\: BuildKit daemonless container build"
    retry:
      manual:
        permit_on_passed: true
    agents:
      queue: kubernetes
    command: |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --opt filename=Dockerfile \
        --progress=plain
    plugins:
      - kubernetes:
          podSpec:
            volumes:
              - name: buildkit-cache
                emptyDir: {}
              - name: tmp-space
                emptyDir: {}
            containers:
              - name: main
                image: moby/buildkit:latest
                env:
                  - name: BUILDKITD_FLAGS
                    value: ""
                volumeMounts:
                  - name: buildkit-cache
                    mountPath: "/var/lib/buildkit"
                  - name: tmp-space
                    mountPath: "/tmp"
                securityContext:
                  privileged: true
```

###### Rootless BuildKit (non-privileged)

**Recommended**: When your Kubernetes cluster blocks privileged containers but allows `runAsNonRoot`.
**Security impact**: Runs as a [non-root user](https://docs.docker.com/engine/security/rootless/) (`UID 1000`), significantly reducing the attack surface.

**How it works**: Uses user namespaces and a rootless container runtime. BuildKit runs as a regular user but can still build containers through user namespace mapping.

```yaml
steps:
  - label: "\:docker\: BuildKit non-privileged container build"
    retry:
      manual:
        permit_on_passed: true
    agents:
      queue: kubernetes
    command: |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --opt filename=Dockerfile \
        --progress=plain
    plugins:
      - kubernetes:
          podSpec:
            volumes:
              - name: buildkit-cache
                emptyDir: {}
              - name: tmp-space
                emptyDir: {}
            containers:
              - name: main
                image: moby/buildkit:rootless
                env:
                  - name: BUILDKITD_FLAGS
                    value: ""
                volumeMounts:
                  - name: buildkit-cache
                    mountPath: "/home/user/.local/share/buildkit"
                  - name: tmp-space
                    mountPath: "/tmp"
                securityContext:
                  runAsNonRoot: true
                  runAsUser: 1000
                  runAsGroup: 1000
```

###### Rootless BuildKit (strict security)

Uses `--oci-worker-no-process-sandbox` to work around Kubernetes limitations with PID namespaces. This mode is required when Kubernetes doesn't support `systempaths=unconfined`.

```yaml
steps:
  - label: "\:docker\: BuildKit rootless daemonless build"
    retry:
      manual:
        permit_on_passed: true
    agents:
      queue: kubernetes
    command: |
      BUILDKITD_FLAGS="--oci-worker-no-process-sandbox" \
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --opt filename=Dockerfile \
        --progress=plain
    plugins:
      - kubernetes:
          podSpec:
            volumes:
              - name: buildkit-cache
                emptyDir: {}
              - name: tmp-space
                emptyDir: {}
            containers:
              - name: main
                image: moby/buildkit:rootless
                volumeMounts:
                  - name: buildkit-cache
                    mountPath: "/home/user/.local/share/buildkit"
                  - name: tmp-space
                    mountPath: "/tmp"
                securityContext:
                  runAsNonRoot: true
                  runAsUser: 1000
                  runAsGroup: 1000
                  seccompProfile:
                    type: Unconfined
                  appArmorProfile:
                    type: Unconfined
```

##### Configuration comparison

The following table highlights the key differences between privileged and rootless BuildKit container configurations in Kubernetes environments.

| Feature | Privileged | Rootless (Non-Privileged) | Rootless (Strict) |
| ---------------------------- | ------------------------ | ------------------------------- | --------------------------------- |
| **Container image** | `moby/buildkit:latest` | `moby/buildkit:rootless` | `moby/buildkit:rootless` |
| **Runs as user** | root (0) | user (1000) | user (1000) |
| **Privileged access** | Yes (`privileged: true`) | No | No |
| **BuildKit process sandbox** | Enabled | Enabled | Disabled\* |
| **Kernel security profiles** | Default | Default | Unconfined |
| **Kubernetes version** | Any | Any | ≥1.19 (seccomp), ≥1.30 (AppArmor) |

\*Process sandbox disabled due to Kubernetes limitations - reduces security within the BuildKit container.

> 📘
> `Unconfined` profiles are required for rootless container operations.

##### Understanding the components

This section covers the key components and configuration options for running BuildKit in Kubernetes, including image variants, security contexts, cache storage locations, and the trade-offs of rootless mode.

###### Container images

- **`moby/buildkit:latest`**: full-featured image designed to run as root with privileged access.
- **`moby/buildkit:rootless`**: specially built image that can run as a regular user through a rootless container approach.
###### Security contexts - **Privileged**: container runs as root with `privileged: true`, bypassing most Kubernetes security controls. - **Rootless**: container runs as `user 1000` using user namespace mapping. Host kernel sees a regular user, container sees root. - **Security profiles**: [seccomp](https://docs.docker.com/engine/security/seccomp/) and [AppArmor](https://docs.docker.com/engine/security/apparmor/) profiles restrict system calls and operations. ###### Cache storage paths The cache location depends on who owns the BuildKit process: - **Root user** (privileged): uses system location `/var/lib/buildkit`. - **Regular user** (rootless): uses user home directory `/home/user/.local/share/buildkit`. ###### Rootless mode caveats The `--oci-worker-no-process-sandbox` flag disables BuildKit's internal process isolation: - Build steps can kill or ptrace other processes in the BuildKit container. - Processes that don't exit cleanly cannot be force-terminated. - Required in Kubernetes because `systempaths=unconfined` is not supported. This reduces security compared to rootless mode without the flag, but is necessary for Kubernetes compatibility. ##### Customizing the build Customize BuildKit builds by modifying the `buildctl-daemonless.sh` command options: ###### Targeting specific build stages ```bash buildctl-daemonless.sh build \ --frontend dockerfile.v0 \ --local context=. \ --local dockerfile=. \ --opt filename=Dockerfile \ --opt target=production \ --progress=plain ``` ###### Using build arguments ```bash buildctl-daemonless.sh build \ --frontend dockerfile.v0 \ --local context=. \ --local dockerfile=. \ --opt filename=Dockerfile \ --opt build-arg:NODE_ENV=production \ --opt build-arg:VERSION=$BUILDKITE_BUILD_NUMBER \ --progress=plain ``` ###### Exporting to registry Export the built images to a container registry: ```bash buildctl-daemonless.sh build \ --frontend dockerfile.v0 \ --local context=. \ --local dockerfile=. 
\ --output type=image,name=myregistry.com/myimage:$BUILDKITE_BUILD_NUMBER,push=true
```

###### Exporting as tar file

Export the built images as tar files:

```bash
buildctl-daemonless.sh build \
  --frontend dockerfile.v0 \
  --local context=. \
  --local dockerfile=. \
  --output type=tar,dest=image.tar
```

##### Troubleshooting

This section describes common BuildKit issues and how to resolve them.

###### Permission denied errors

- **Privileged**: ensure `securityContext.privileged: true` is configured.
- **Non-privileged/Rootless**: verify `runAsUser: 1000` and `runAsGroup: 1000` are set.
- **Rootless**: check that `seccompProfile` and `appArmorProfile` are set to `Unconfined`.

###### Cache mount issues

- **Privileged**: verify cache mount at `/var/lib/buildkit`.
- **Rootless (both modes)**: verify cache mount at `/home/user/.local/share/buildkit`.

###### BuildKit tools not found

Use the appropriate image for your build mode:

- **Privileged** builds: `moby/buildkit:latest`.
- **Non-privileged/Rootless** builds: `moby/buildkit:rootless`.

###### Rootless build failures

Ensure `BUILDKITD_FLAGS="--oci-worker-no-process-sandbox"` is set when using rootless (strict) mode.

###### Rootless builds failing on Bottlerocket OS (including EKS Auto Mode)

Bottlerocket OS sets `user.max_user_namespaces=0` by default as a security hardening measure, disabling user namespaces required by rootless BuildKit. EKS Auto Mode runs nodes on Bottlerocket AMIs, but this issue affects any Kubernetes cluster running Bottlerocket nodes. Both rootless modes fail with:

```
[rootlesskit:parent] /proc/sys/user/max_user_namespaces needs to be set to non-zero.
[rootlesskit:parent] error: failed to start the child: fork/exec /proc/self/exe: no space left on device ``` To fix this, apply the following [DaemonSet](https://github.com/moby/buildkit/blob/master/examples/kubernetes/sysctl-userns.privileged.yaml) to all nodes in the cluster before running rootless builds: ```yaml apiVersion: apps/v1 kind: DaemonSet metadata: labels: app: sysctl-userns name: sysctl-userns spec: selector: matchLabels: app: sysctl-userns template: metadata: labels: app: sysctl-userns spec: containers: - name: sysctl-userns image: busybox command: - sh - -euxc - sysctl -w user.max_user_namespaces=63359 && sleep infinity securityContext: privileged: true ``` After applying the DaemonSet, rootless (strict) mode works. Rootless (non-privileged) mode still fails with `runc run failed: unable to start container process: error mounting "proc"` — this mode is not supported on Bottlerocket OS. ###### Pod initialization issues For rootless builds, verify Kubernetes version supports the required security profiles (≥1.19 for seccomp, ≥1.30 for AppArmor). ###### Build processes not terminating Known limitation with `--oci-worker-no-process-sandbox` - BuildKit cannot force-kill processes that don't exit cleanly. ###### Debugging builds Increase BuildKit output verbosity by using `--progress=plain` and adding debug flags: ```bash buildctl-daemonless.sh --debug build \ --frontend dockerfile.v0 \ --local context=. \ --local dockerfile=. \ --progress=plain ``` --- ### Docker Compose URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/docker-compose-container-builds #### Docker Compose builds The [Docker Compose plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) helps you build and run multi-container Docker applications. You can build and push container images using the Docker Compose plugin on agents that are auto-scaled by the Buildkite Agent Stack for Kubernetes. 
##### Special considerations regarding Agent Stack for Kubernetes

When running the [Docker Compose plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) within the Buildkite Agent Stack for Kubernetes, consider the following requirements and best practices for successful container builds.

###### Docker daemon access

The Docker Compose plugin requires access to a Docker daemon, and you can choose one of two primary approaches:

- Mounting the host Docker socket
- [Docker-in-Docker (DinD)](https://hub.docker.com/_/docker)

Let's look into both approaches in more detail.

###### Mounting the host Docker socket

Mount `/var/run/docker.sock` from the host into your pod. The host's Docker daemon is then shared by every pod that mounts it, so there's no resource isolation between them. If one pod's build exhausts or corrupts the daemon, all the other pods are impacted. You're also limited to a single daemon configuration across all pods.

This approach grants containers near-root-level access to the host, meaning any process with socket access can control the host Docker daemon. This poses container breakout risks if running untrusted workloads.

> 🚧 Warning!
> Only use this approach with trusted repositories, run your agents on dedicated nodes, and scope access according to your Kubernetes security policies.

###### Docker-in-Docker (DinD)

Run a Docker daemon inside your pod using a DinD sidecar container. DinD adds complexity and resource overhead, but it avoids sharing the host daemon. In this approach, you use a dedicated sidecar container for each build. Only set `DOCKER_TLS_CERTDIR=""` to disable TLS if the network scope is local to the pod. Avoid exposing host ports to restrict network access. Set resource limits to prevent excess consumption.
Running a separate Docker daemon in each pod slows down build performance and increases resource usage. Operations and debugging can be more complex since you need to configure and maintain multiple daemons. You will need to handle network configuration for daemon communication within each pod. > 🚧 Warning! > The isolation in this approach is better than in the previous approach but still requires setting `privileged: true` or specific security capabilities. This increases the kernel attack surface inside your pod and misconfiguration can leave the Docker API exposed without proper authentication, creating a security risk. ###### Using Docker-in-Docker with pod-spec-patch To add a DinD sidecar container for the Buildkite Agent Stack for Kubernetes, use `pod-spec-patch` in the controller's configuration. This approach provides better isolation and security compared to mounting the host Docker socket. The configuration uses Kubernetes native sidecars (available in Kubernetes version 1.28+) by setting `restartPolicy: Always` on an initContainer, which starts before your build containers and continues running throughout the pod's lifecycle. You can configure the Docker daemon to be accessible using TCP socket or Unix socket, depending on your needs. 
###### TCP socket configuration

Configure the DinD sidecar to listen on a TCP socket, which allows the build containers to connect over the network:

```yaml
# values.yaml
config:
  pod-spec-patch:
    initContainers:
      - name: docker-daemon
        image: docker:dind
        restartPolicy: Always
        securityContext:
          privileged: true
        args:
          - "--host=tcp://127.0.0.1:2375"
        env:
          - name: DOCKER_TLS_CERTDIR
            value: ""
        volumeMounts:
          - name: docker-storage
            mountPath: /var/lib/docker
        startupProbe:
          tcpSocket:
            port: 2375
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 30
    volumes:
      - name: docker-storage
        emptyDir: {}
```

The `startupProbe` ensures the Docker daemon is listening on port `2375` before the build containers start. This prevents the build steps from attempting to connect to the Docker daemon before it's ready.

Configure your pipeline steps to connect using TCP by setting the `DOCKER_HOST` environment variable:

```yaml
steps:
  - label: "\:docker\: Build with DinD"
    plugins:
      - docker-compose#v5.12.1:
          build: app
          push: app
    env:
      DOCKER_HOST: tcp://127.0.0.1:2375
```

This configuration exposes the Docker daemon on `127.0.0.1:2375` without TLS for use by your build step. The TCP socket (`tcp://127.0.0.1:2375`) is unencrypted, which is acceptable for local communication inside a single pod, but must not be exposed externally. For TLS-enabled communication (commonly port `2376`), provide certificates instead of disabling `DOCKER_TLS_CERTDIR`.
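As a rough illustration of the TLS-enabled variant, the sketch below relies on the `docker:dind` image generating CA, server, and client certificates under `DOCKER_TLS_CERTDIR`, and shares the client certificates with the build container. The `container-0` name (the first build container in the Agent Stack for Kubernetes), mount paths, and probe values are assumptions to adapt rather than a definitive configuration:

```yaml
# values.yaml (hedged sketch: TLS-enabled DinD sidecar)
config:
  pod-spec-patch:
    initContainers:
      - name: docker-daemon
        image: docker:dind
        restartPolicy: Always
        securityContext:
          privileged: true
        env:
          - name: DOCKER_TLS_CERTDIR
            value: "/certs"   # dind generates certificates here on startup
        volumeMounts:
          - name: docker-storage
            mountPath: /var/lib/docker
          - name: docker-certs
            mountPath: /certs
    containers:
      - name: container-0     # assumed default build container name
        env:
          - name: DOCKER_HOST
            value: tcp://127.0.0.1:2376
          - name: DOCKER_TLS_VERIFY
            value: "1"
          - name: DOCKER_CERT_PATH
            value: /certs/client   # client cert location created by dind
        volumeMounts:
          - name: docker-certs
            mountPath: /certs
    volumes:
      - name: docker-storage
        emptyDir: {}
      - name: docker-certs
        emptyDir: {}
```

With this layout the daemon only accepts verified TLS clients on port `2376`, at the cost of an extra shared volume for the certificates.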
###### Unix socket configuration

Alternatively, configure the DinD sidecar to use a Unix socket shared using a volume mount:

```yaml
# values.yaml
config:
  pod-spec-patch:
    initContainers:
      - name: docker-daemon
        image: docker:dind
        restartPolicy: Always
        securityContext:
          privileged: true
        args:
          - "--host=unix:///var/run/docker.sock"
        env:
          - name: DOCKER_TLS_CERTDIR
            value: ""
        volumeMounts:
          - name: docker-storage
            mountPath: /var/lib/docker
          - name: docker-socket
            mountPath: /var/run
    containers:
      - name: container-0   # default name of the first build container in agent-stack-k8s
        volumeMounts:
          - name: docker-socket
            mountPath: /var/run
    volumes:
      - name: docker-storage
        emptyDir: {}
      - name: docker-socket
        emptyDir: {}
```

Configure your pipeline steps to connect using the Unix socket. While `/var/run/docker.sock` is the default location and `DOCKER_HOST` is optional, setting it explicitly makes the configuration clearer:

```yaml
steps:
  - label: "\:docker\: Build with DinD"
    plugins:
      - docker-compose#v5.12.1:
          build: app
          push: app
    env:
      DOCKER_HOST: unix:///var/run/docker.sock
```

The Unix socket approach provides better security since the socket is only accessible within the pod and doesn't expose any network ports. However, the TCP socket approach is simpler to configure and debug.

###### Build context and volume mounts

In Kubernetes, the build context is typically the checked-out repository in the pod's filesystem. By default, the [Docker Compose plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) uses the current working directory as the build context. If your `docker-compose.yml` references files outside this directory, you need to configure explicit volume mounts in your Kubernetes pod specification.

For build caching or sharing artifacts across builds, mount persistent volumes or use Kubernetes persistent volume claims. Note that ephemeral pod storage is lost when the pod terminates. To learn more about caching, see [Caching best practices](/docs/pipelines/best-practices/caching).
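One hedged way to provide paths outside the checkout is declaring extra volumes in the pod spec via the Kubernetes plugin; the PVC name (`shared-assets-pvc`), container name, image, and mount path below are illustrative assumptions, not required values:

```yaml
steps:
  - label: "Build with an out-of-tree context directory"
    plugins:
      - kubernetes:
          podSpec:
            containers:
              - name: main
                image: docker:latest
                volumeMounts:
                  # Mounted alongside the checkout so docker-compose.yml can reference it
                  - name: shared-assets
                    mountPath: /workspace/shared
            volumes:
              - name: shared-assets
                persistentVolumeClaim:
                  claimName: shared-assets-pvc   # hypothetical pre-provisioned PVC
      - docker-compose#v5.12.1:
          build: app
```

Using a persistent volume claim here also keeps the mounted content available across builds, unlike `emptyDir` storage.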
###### Registry authentication

Set up proper authentication for pushing to container registries:

- Use the `docker-login` plugin for standard Docker registries
- Use the `ecr` plugin for AWS ECR
- Use the `gcp-workload-identity-federation` plugin for Google Artifact Registry

When pushing services, ensure the `image:` field is set in `docker-compose.yml` to specify the full registry path.

###### Resource allocation

Building container images can be resource-intensive, especially for large applications or when building multiple services. Configure your Kubernetes agent pod resources accordingly:

- Allocate sufficient memory for the build process, Docker daemon, and any running services
- Provide adequate CPU resources to avoid slow builds
- Ensure sufficient ephemeral storage for Docker layers, build artifacts, and intermediate files
- Account for DinD sidecar resource usage if using Docker-in-Docker

If the resource requests and limits are not specified, Kubernetes may schedule your pods on nodes with insufficient resources. This causes builds to fail with Out of Memory (OOM) errors or pod termination. Monitor resource usage during builds using `kubectl top pod` and adjust limits as needed.

##### Configuration approaches with the Docker Compose plugin

The Docker Compose plugin supports different workflow patterns for building and pushing container images, each suited to specific use cases in Kubernetes environments.
###### Push to Buildkite Package Registries Push a built image directly to Buildkite Package Registries: ```yaml steps: - label: "\:docker\: Build and push to Buildkite Package Registries" plugins: - docker-login#v3.0.0: server: packages.buildkite.com/{org.slug}/{registry.slug} username: "${REGISTRY_USERNAME}" password-env: "REGISTRY_PASSWORD" - docker-compose#v5.12.1: build: app push: - app:packages.buildkite.com/{org.slug}/{registry.slug}/image-name:${BUILDKITE_BUILD_NUMBER} ``` ###### Basic Docker Compose build Build services defined in your `docker-compose.yml` file: ```yaml steps: - label: "Build with Docker Compose" plugins: - docker-compose#v5.12.1: build: app config: docker-compose.yml ``` Sample `docker-compose.yml` file: ```yaml services: app: build: context: . dockerfile: Dockerfile image: your-registry.example.com/your-team/app:bk-${BUILDKITE_BUILD_NUMBER} ``` ###### Building and pushing with the Docker Compose plugin Build and push images in a single step: ```yaml steps: - label: "\:docker\: Build and push" agents: queue: build plugins: - docker-compose#v5.12.1: build: app push: app ``` If you're using a private repository, add authentication: ```yaml steps: - label: "\:docker\: Build and push" agents: queue: build plugins: - docker-login#v3.0.0: server: your-registry.example.com username: "${REGISTRY_USERNAME}" password-env: "REGISTRY_PASSWORD" - docker-compose#v5.12.1: build: app push: app ``` ##### Customizing the build Customize your Docker Compose builds by using the Docker Compose plugin's configuration options to control build behavior, manage credentials, and optimize performance. ###### Using build arguments Pass build arguments to customize image builds at build time. Build arguments allow you to add parameters to Dockerfiles without directly embedding values in the file. 
```yaml steps: - label: "\:docker\: Build with arguments" plugins: - docker-compose#v5.12.1: build: app args: - NODE_ENV=production - BUILD_NUMBER=${BUILDKITE_BUILD_NUMBER} - API_URL=${API_URL} ``` ###### Building specific services When your `docker-compose.yml` defines multiple services, build only the services you need rather than building everything. ```yaml steps: - label: "\:docker\: Build frontend only" plugins: - docker-compose#v5.12.1: build: frontend push: frontend ``` ###### Using BuildKit features with cache optimization [BuildKit](https://docs.docker.com/build/buildkit/) provides advanced build features including build cache optimization. BuildKit's inline cache stores cache metadata in the image itself, enabling cache reuse across different build agents. Here is an example configuration for building with BuildKit cache: ```yaml steps: - label: "\:docker\: Build with BuildKit cache" plugins: - docker-login#v3.0.0: server: your-registry.example.com username: "${REGISTRY_USERNAME}" password-env: "REGISTRY_PASSWORD" - docker-compose#v5.12.1: build: app cache-from: - app:your-registry.example.com/app:cache buildkit: true buildkit-inline-cache: true push: - app:your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} - app:your-registry.example.com/app:cache ``` ###### Using multiple compose files Combine multiple compose files to create layered configurations. This pattern works well for separating base configuration from environment-specific overrides: ```yaml steps: - label: "\:docker\: Build with compose file overlay" plugins: - docker-compose#v5.12.1: config: - docker-compose.yml - docker-compose.production.yml build: app push: app ``` ###### Custom image tagging on push You can push the same image with multiple tags to support different deployment strategies. 
This is useful for maintaining both immutable version tags and mutable environment tags:

```yaml
steps:
  - label: "\:docker\: Push with multiple tags"
    plugins:
      - docker-compose#v5.12.1:
          build: app
          push:
            - app:your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER}
            - app:your-registry.example.com/app:${BUILDKITE_COMMIT}
            - app:your-registry.example.com/app:latest
            - app:your-registry.example.com/app:${BUILDKITE_BRANCH}
```

###### Using SSH agent for private repositories

Enable SSH agent forwarding to access private Git repositories or packages during the build. Use this when Dockerfiles need to clone private dependencies:

```yaml
steps:
  - label: "\:docker\: Build with SSH access"
    plugins:
      - docker-compose#v5.12.1:
          build: app
          ssh: true
```

Your Dockerfile needs to use BuildKit's SSH mount feature:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:18

# Install dependencies from private repository
RUN --mount=type=ssh git clone git@github.com:yourorg/private-lib.git
```

###### Propagating cloud credentials

Automatically pass cloud provider credentials to containers for pushing images to cloud-hosted registries.
For [AWS Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/): ```yaml steps: - label: "\:docker\: Build and push to ECR" plugins: - ecr#v2.10.0: login: true account-ids: "123456789012" region: us-west-2 - docker-compose#v5.12.1: build: app push: - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/app:${BUILDKITE_BUILD_NUMBER} ``` For [Google Artifact Registry (GAR)](https://docs.cloud.google.com/artifact-registry/docs): ```yaml steps: - label: "\:docker\: Build and push to GAR" plugins: - gcp-workload-identity-federation#v1.5.0: project-id: your-project service-account: your-service-account@your-project.iam.gserviceaccount.com - docker-compose#v5.12.1: build: app push: - app:us-central1-docker.pkg.dev/your-project/your-repository/app:${BUILDKITE_BUILD_NUMBER} ``` ##### Troubleshooting This section can help you to identify and solve the issues that most commonly arise when using Docker Compose container builds with Buildkite Pipelines. ###### Network connectivity Network policies, firewall rules, or DNS configuration issues can restrict Kubernetes networking. As a result, builds may fail with errors like "could not resolve host," "connection timeout," or "unable to pull image" when trying to pull base images from Docker Hub or push to your private registry. To resolve these issues, verify that your Kubernetes pods have network access to Docker Hub and your registry. Check your cluster's network policies, firewall rules, and DNS configuration. ###### Resource constraints Docker builds may fail with errors like "signal: killed," "build container exited with code 137," or builds that hang indefinitely and timeout. These usually signal insufficient memory or CPU resources allocated to your Kubernetes pods, causing the Linux kernel to kill processes (Out of Memory or OOM). To resolve these issues, check your pod's resource requests and limits. Use `kubectl describe pod` to view the current resource allocation and `kubectl top pod` to monitor actual usage. 
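If usage is close to the limits, one hedged way to raise them is through the controller's `pod-spec-patch`; the `container-0` name and every resource value below are illustrative and should be tuned to your builds:

```yaml
# values.yaml (hedged sketch: illustrative resource values)
config:
  pod-spec-patch:
    containers:
      - name: container-0   # assumed default build container name in agent-stack-k8s
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
            ephemeral-storage: 10Gi
          limits:
            memory: 8Gi
            ephemeral-storage: 20Gi
```

Setting requests as well as limits helps the scheduler place the pod on a node that can actually satisfy the build.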
Increase the memory and CPU limits in your agent configuration if builds consistently fail due to resource constraints. ###### Build cache not working Docker builds rebuild all layers even when source files haven't changed. This happens when build cache is not preserved between builds or when cache keys don't match. To enable build caching with BuildKit: ```yaml plugins: - docker-compose#v5.12.1: build: app cache-from: - app:your-registry.example.com/app:cache buildkit: true buildkit-inline-cache: true push: - app:your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} - app:your-registry.example.com/app:cache ``` Ensure that the cache image exists in your registry before running the first build, or accept that the initial build will be slower. Subsequent builds will use the cached layers. ###### Environment variables not available during build Environment variables from your Buildkite pipeline aren't accessible inside your Dockerfile during the build process. Docker builds are isolated and don't automatically inherit environment variables. To pass environment variables to the build, use build arguments: ```yaml plugins: - docker-compose#v5.12.1: build: app args: - API_URL=${API_URL} - BUILD_NUMBER=${BUILDKITE_BUILD_NUMBER} ``` Then reference the passed environment variables in your Dockerfile: ```dockerfile ARG API_URL ARG BUILD_NUMBER RUN echo "Building version ${BUILD_NUMBER}" ``` Note that the `args` option in the Docker Compose plugin passes variables at build time, while the `environment` option passes variables at runtime (for running containers, not building images). ###### Image push failures Pushing images to registries fails with authentication errors or timeout errors. For authentication failures, ensure credentials are properly configured. 
Use the [`docker-login` plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-login-buildkite-plugin/) before the [`docker-compose` plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/): ```yaml plugins: - docker-login#v3.0.0: server: your-registry.example.com username: "${REGISTRY_USERNAME}" password-env: "REGISTRY_PASSWORD" - docker-compose#v5.12.1: build: app push: app ``` For cloud-provider registries, use the appropriate authentication plugins: ```yaml plugins: - ecr#v2.10.0: # For AWS ECR login: true account-ids: "123456789012" region: us-west-2 - docker-compose#v5.12.1: build: app push: app ``` And for Google Artifact Registry: ```yaml plugins: - gcp-workload-identity-federation#v1.5.0: project-id: your-project service-account: your-service-account@your-project.iam.gserviceaccount.com - docker-compose#v5.12.1: build: app push: app ``` For timeout or network failures, enable push retries: ```yaml plugins: - docker-compose#v5.12.1: build: app push: app push-retries: 3 ``` ##### Debugging builds When builds fail or behave unexpectedly, you need to enable verbose output and disable caching to diagnose the issue. ###### Enable verbose output Use the `verbose` option in the Docker Compose plugin to see detailed output from Docker Compose operations: ```yaml steps: - label: "\:docker\: Debug build" plugins: - docker-compose#v5.12.1: build: app verbose: true ``` This shows all Docker Compose commands being executed and their full output, helping identify where failures occur. 
###### Disable build cache

Disable caching to ensure builds run from scratch, which can reveal caching-related issues:

```yaml
steps:
  - label: "\:docker\: Build without cache"
    plugins:
      - docker-compose#v5.12.1:
          build: app
          no-cache: true
```

###### Inspect build logs in Kubernetes

For builds running in Kubernetes, access pod logs to see detailed build output:

```bash
# List pods for your build
kubectl get pods -l buildkite.com/job-id=<job-id>

# View logs from the build pod
kubectl logs <pod-name>

# Follow logs in real-time
kubectl logs -f <pod-name>
```

###### Test docker-compose locally

Test your `docker-compose.yml` configuration locally before running in the pipeline:

```bash
# Validate compose file syntax
docker compose config

# Build without the Docker Compose plugin
docker compose build

# Check what images were created
docker images
```

This helps identify issues with the compose configuration itself, separate from pipeline or Kubernetes concerns.

---

### Docker-in-Docker (DinD)

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/dind-container-builds

#### Docker-in-Docker (DinD) container builds

[Docker-in-Docker (DinD)](https://hub.docker.com/_/docker) allows you to run a Docker daemon inside a container, enabling standard Docker commands like `docker build` and `docker run` within a [job](/docs/pipelines/glossary#job). This approach is useful when you need full Docker CLI compatibility or want to build and test container images using familiar Docker workflows.

##### How Docker-in-Docker works

Docker-in-Docker enables container builds by running a Docker daemon inside a sidecar container alongside your main job container. The sidecar container runs the Docker daemon (`docker:dind`) with elevated privileges, while your main container connects to this daemon through a local TCP port or a shared Docker socket.
This setup allows your Buildkite jobs to execute standard Docker operations (like `docker build` and `docker push`) from within the main container, while the actual container management is handled by the daemon in the sidecar.

##### Using Docker-in-Docker as a sidecar container

The following pipeline example demonstrates how to build a container image using Docker-in-Docker with the Buildkite Kubernetes plugin's [`sidecars` feature](https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/sidecars), with the main container connecting to the daemon over local TCP via `DOCKER_HOST`.

```yaml
- label: "Testing the sidecar approach"
  env:
    DOCKER_HOST: tcp://localhost:2375
  image: alpine/docker-with-buildx:latest
  command: docker build ./dind -t myregistry.com/myimage:latest
  plugins:
    - kubernetes:
        sidecars:
          - image: docker:dind
            command: [dockerd-entrypoint.sh]
            securityContext:
              privileged: true
            env:
              - name: DOCKER_TLS_CERTDIR
                value: ""
```

###### Understanding the components

This section describes the key components for configuring Docker-in-Docker with the sidecar pattern in Kubernetes.

###### Configure the sidecar container

- **`image: docker:dind`**: The official Docker-in-Docker image containing the Docker daemon
- **`command: [dockerd-entrypoint.sh]`**: Starts the Docker daemon in the sidecar
- **`DOCKER_TLS_CERTDIR: ""`**: Disables TLS since communication stays local to the pod
- **`privileged: true`**: Provides elevated permissions on the host. This is required for the Docker daemon to create containers

###### Configure the main container for build in the command step

- **`image:`**: Specify the image that contains the Docker CLI tools (`docker`, `docker-compose`, etc.)
- **`command`**: Your Docker build commands

##### Security considerations

Running Docker-in-Docker requires privileged containers. It is recommended to use Docker-in-Docker in trusted environments.
Consider alternatives like [BuildKit](/docs/agent/self-hosted/agent-stack-k8s/buildkit-container-builds) for enhanced security.

##### Troubleshooting

This section describes common issues with Docker-in-Docker and the ways to resolve them.

###### Cannot connect to the Docker daemon

- Ensure that the `DOCKER_HOST` environment variable is set correctly
- Check if there is a race condition in connecting to the Docker daemon between the main container and the sidecar container. Delay the main container's startup or add a wait before using any Docker build commands

###### Permission denied while trying to connect to the Docker daemon socket

- Ensure the sidecar has `privileged: true`
- Check that your cluster's security policies allow privileged containers

---

### Kaniko

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/kaniko-container-builds

#### Kaniko container builds

[Kaniko](https://github.com/GoogleContainerTools/kaniko/tree/main#kaniko---build-images-in-kubernetes) is a tool for building container images from a Dockerfile, inside a container or Kubernetes cluster. Kaniko doesn't depend on a Docker daemon and executes each command within a Dockerfile completely in user space. This enables building container images in environments that can't easily or securely run a Docker daemon, such as a standard Kubernetes cluster.

Kaniko runs as a container image, `gcr.io/kaniko-project/executor`, which is responsible for building an image from a Dockerfile and pushing it to a registry. Within the executor image, the filesystem of the base image (the `FROM` image in the Dockerfile) is extracted. Next, the commands in the Dockerfile are executed, taking snapshots of the filesystem in user space after running each command. After each command, a layer of changed files is appended to the base image (if such an image exists) and the metadata of the image is updated.
##### Using Kaniko with Agent Stack for Kubernetes

This page will explain how to use the Kaniko executor to perform the following:

- Build an image and push to [Buildkite Package Registries](/docs/package-registries)
- Build an image and push to [Google Artifact Registry](https://cloud.google.com/artifact-registry/docs/overview)
- Build an image and push to [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/)

###### Kaniko image availability

Google has deprecated support for the Kaniko project and no longer publishes new images to `gcr.io/kaniko-project/`. However, [Chainguard has forked the project](https://github.com/chainguard-dev/kaniko) and continues to provide support and create new releases. There are several options available for running Kaniko in Docker. Refer to the [Kaniko image availability options](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/kaniko_container_builds#running-kaniko-in-docker-kaniko-image-availability) for more details.

###### Build an image and push to Buildkite Package Registries

This section covers using the Kaniko executor for building container images and pushing them to [Buildkite Package Registries](/docs/package-registries). To be able to push images to Buildkite Package Registries, you need to do the following:

1. Perform a one-time package registry and OIDC policy setup
1. Create an agent hook to get OIDC token and set up Docker config
1. Mount the agent hook and buildkite-agent binary to the container

###### One-time package registry setup and OIDC policy

Follow the instructions provided in [One-time package registry setup](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/kaniko-container-builds#one-time-package-registry-setup) to set up a [Buildkite Package Registry](/docs/package-registries) and the necessary OIDC policy.
###### Create an agent hook to get OIDC token and set up Docker config

To push the image it builds to a Buildkite Package Registry, the Kaniko executor container needs to request an OIDC token using `buildkite-agent oidc request-token` and write a Docker config. To achieve this, an [agent hook](/docs/agent/self-hosted/agent-stack-k8s/agent-hooks-and-plugins#agent-hooks) is needed. Below is the script for the necessary agent hook:

```sh
#!/bin/sh
set -euo pipefail

echo "--- Generating OIDC token for Kaniko inside container"

# Use buildkite-agent binary (mounted in the container at /workspace/buildkite-agent)
OIDC_TOKEN="$(/workspace/buildkite-agent oidc request-token --audience "https://packages.buildkite.com/{BUILDKITE_ORGANIZATION_SLUG}/{PACKAGE_REGISTRY_SLUG}" --lifetime 300)"

# Write a Docker config so the Kaniko executor can authenticate to the registry.
# The "auth" value is the base64-encoded "user:token" pair expected by Docker registries.
mkdir -p /kaniko/.docker
cat >/kaniko/.docker/config.json <<EOF
{
  "auths": {
    "packages.buildkite.com": {
      "auth": "$(printf '%s:%s' buildkite "${OIDC_TOKEN}" | base64 | tr -d '\n')"
    }
  }
}
EOF
```

> 📘 Using the debug tag
> The `debug` tag is used for the `executor` image, as the `latest` tag doesn't have a shell in it, and with `agent-stack-k8s`, a shell is needed. To use the `latest` tag, generate a custom image of the Kaniko executor with a shell included.

###### Build an image and push to Google Artifact Registry

This section covers using the Kaniko executor for building container images and pushing them to [Google Artifact Registry](https://cloud.google.com/artifact-registry/docs/overview). To push images to Google Artifact Registry, you need a Kubernetes secret containing the token that provides the required permissions. For detailed information regarding what permissions are necessary and how to create the secret, refer to the [secret creation documentation](https://github.com/chainguard-dev/kaniko?tab=readme-ov-file#kubernetes-secret).
Once the secret is created, mount the secret into the container, then export the secret as the environment variable `GOOGLE_APPLICATION_CREDENTIALS` into the executor: ```yaml agents: queue: kubernetes steps: - label: "\:kaniko\: Build image and push to Google Artifact Registry" plugins: - kubernetes: podSpecPatch: containers: - image: gcr.io/kaniko-project/executor:debug name: container-0 extraVolumeMounts: - name: workspace mountPath: /workspace volumeMounts: - name: kaniko-secret mountPath: /secret command: ["/busybox/sh"] args: - "-c" - | export GOOGLE_APPLICATION_CREDENTIALS=/secret/kaniko-secret.json /kaniko/executor \ --dockerfile=/workspace/build/buildkite/src/Dockerfile \ --destination=us-central1-docker.pkg.dev/gcp-project-id/testrepository/kaniko-test:latest volumes: - name: kaniko-secret secret: secretName: kaniko-secret ``` ###### Build an image and push to Amazon Elastic Container Registry This section covers pushing an image to [Amazon Elastic Container Registry](https://aws.amazon.com/ecr/). Similar to the previous section, you will need to set up [ECR credentials](https://github.com/chainguard-dev/kaniko?tab=readme-ov-file#pushing-to-amazon-ecr). 
The following example also shows how to expose the secret by exporting it to the Kaniko executor:

```yaml
agents:
  queue: kubernetes

steps:
  - label: ":kaniko: Build image and push to Elastic Container Registry"
    plugins:
      - kubernetes:
          podSpecPatch:
            containers:
              - image: gcr.io/kaniko-project/executor:debug
                name: container-0
                extraVolumeMounts:
                  - name: workspace
                    mountPath: /workspace
                volumeMounts:
                  - name: aws-creds
                    mountPath: /root/.aws
                command: ["/busybox/sh"]
                args:
                  - "-c"
                  - |
                    export AWS_SHARED_CREDENTIALS_FILE=/root/.aws/credentials
                    export AWS_REGION=us-west-2
                    /kaniko/executor \
                      --dockerfile=/workspace/build/buildkite/src/Dockerfile \
                      --destination=123456789012.dkr.ecr.us-west-2.amazonaws.com/my-repo:latest
            volumes:
              - name: aws-creds
                secret:
                  secretName: aws-ecr-credentials
```

---

### Namespace remote builders

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/namespace-container-builds

#### Namespace remote builder container builds

[Namespace](https://namespace.so) provides [remote Docker builders](https://namespace.so/docs/solutions/docker-builders) that execute builds on dedicated infrastructure outside of your Kubernetes cluster. Unlike [Buildah](/docs/agent/self-hosted/agent-stack-k8s/buildah-container-builds) or [BuildKit](/docs/agent/self-hosted/agent-stack-k8s/buildkit-container-builds), which run builds inside Kubernetes pods, Namespace executes builds on remote compute instances. This eliminates the need for privileged containers, security context configuration, or storage driver setup in your cluster.

##### How it works

When using Namespace remote Docker builders with the [Buildkite Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s):

1. The [Buildkite Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s) pod authenticates with Namespace (see [Authentication](/docs/agent/self-hosted/agent-stack-k8s/namespace-container-builds#authentication)).
1. The Namespace CLI (`nsc`) configures [Docker Buildx](https://docs.docker.com/reference/cli/docker/buildx/) to use remote builders.
1. Namespace runs the actual build workloads remotely while Buildkite continues orchestrating the pipeline.
1. Built images are pushed to Namespace's container registry or any other registry.

##### Prerequisites

- Namespace account with a workspace (you can [sign up for it](https://cloud.namespace.so/signin) if you don't have one).
- Custom agent image with Docker CLI, Buildx, and Namespace CLI.
- Properly configured authentication.

##### Authentication

Namespace supports multiple authentication [methods](https://namespace.so/docs/federation). [Buildkite OIDC](/docs/pipelines/security/oidc) is recommended for most environments. To start using it with Namespace, contact [support@namespace.so](mailto:support@namespace.so) to register `https://agent.buildkite.com` as a trusted issuer for your Namespace tenant.

Alternatively, you can use [AWS Cognito federation](https://namespace.so/docs/federation/aws) for EKS clusters using [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html).

##### AWS Cognito setup (for EKS)

> 📘
> When using Buildkite OIDC (recommended), skip to [Building a custom agent image](/docs/agent/self-hosted/agent-stack-k8s/namespace-container-builds#building-a-custom-agent-image).
###### Setup

First, create a [Cognito Identity Pool](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-identity.html) and establish trust with Namespace:

```bash
# Create pool
aws cognito-identity create-identity-pool \
  --identity-pool-name namespace-buildkite-federation \
  --no-allow-unauthenticated-identities \
  --developer-provider-name namespace.so \
  --region <aws-region>

# Trust the pool (note the pool ID from output)
nsc auth trust-aws-cognito-identity-pool \
  --aws_region <aws-region> \
  --identity_pool <identity-pool-id> \
  --tenant_id <tenant-id>
```

Next, enable the [EKS OIDC provider](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html) and create an IAM role:

```bash
# Enable OIDC
eksctl utils associate-iam-oidc-provider --cluster <cluster-name> --approve

# Create role with Cognito permissions (check the official AWS documentation for policy details)
aws iam create-role \
  --role-name <role-name> \
  --assume-role-policy-document file://trust-policy.json

# Annotate service account
kubectl annotate serviceaccount <service-account-name> \
  -n <namespace> \
  eks.amazonaws.com/role-arn=arn:aws:iam::<aws-account-id>:role/<role-name>
```

For the detailed IAM policy configuration, see [Namespace AWS federation documentation](https://namespace.so/docs/federation/aws).
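As a rough illustration of what `trust-policy.json` contains, the following is a minimal sketch of an IRSA-style trust policy. The provider URL, namespace, and service account name are placeholder assumptions; the condition keys depend on your cluster's OIDC provider, so confirm the exact values against the AWS documentation linked above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<aws-account-id>:oidc-provider/<eks-oidc-provider-url>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<eks-oidc-provider-url>:sub": "system:serviceaccount:<namespace>:<service-account-name>"
        }
      }
    }
  ]
}
```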
##### Building a custom agent image

Create a Dockerfile that includes Docker CLI, Buildx, and Namespace CLI:

```dockerfile
# Use the official Buildkite agent Alpine Kubernetes image as base
FROM buildkite/agent:alpine-k8s

# Switch to root
USER root

# Install bash, Docker CLI, and Buildx from the Alpine repositories
RUN apk add --no-cache \
    bash \
    docker-cli \
    docker-cli-buildx \
    curl

# Install Namespace CLI
RUN curl -fsSL https://get.namespace.so/cloud/install.sh | sh

# Add nsc to PATH
ENV PATH="/root/.ns/bin:$PATH"

# Verify installations
RUN docker --version && \
    docker buildx version && \
    test -f /root/.ns/bin/nsc && echo "nsc installed successfully"

WORKDIR /workspace
```

Build and push the image to your container registry:

```bash
docker build -t <your-registry>/<image-name>:latest -f Dockerfile.buildkite-namespace .
docker push <your-registry>/<image-name>:latest
```

##### Configure Agent Stack for Kubernetes

Update Helm values to use the custom image:

```yaml
config:
  agent-config:
    shell: /bin/bash -e -c
  image: <your-registry>/<image-name>:latest
  tags:
    - queue=kubernetes
  pod-spec-patch:
    serviceAccountName: <service-account-name>
    containers: []
```

##### Using Namespace remote builders

Namespace integrates with Buildkite pipeline steps through the Namespace CLI. Select the authentication flow that matches the environment, then run the standard Docker Buildx commands against the remote builders.

###### Buildkite OIDC authentication (recommended)

Use this option when [support@namespace.so](mailto:support@namespace.so) has been contacted to register `https://agent.buildkite.com` as a trusted issuer.

```yaml
command: |
  # Authenticate using Buildkite OIDC
  OIDC_TOKEN=$$(buildkite-agent oidc request-token --audience federation.namespaceapis.com)
  /root/.ns/bin/nsc auth exchange-oidc-token \
    --token "$$OIDC_TOKEN" \
    --tenant_id <tenant-id>
```

###### AWS Cognito authentication

Use this option to use AWS Cognito federation for EKS clusters with IAM Roles for Service Accounts (IRSA).
The Buildkite agent pod authenticates using Cognito, then Namespace provisions the remote builders for the pipeline.

```yaml
command: |
  # Authenticate using AWS Cognito
  /root/.ns/bin/nsc auth exchange-aws-cognito-token \
    --aws_region <aws-region> \
    --identity_pool <identity-pool-id> \
    --tenant_id <tenant-id>
```

##### Pushing to external registries

Use Buildkite's registry plugins to handle authentication so the step from the [Complete pipeline example](#complete-pipeline-example) stays focused on the Namespace build. Add the relevant plugin block beneath the step's `agents` definition.

###### Docker Hub

Use the [Docker Login Buildkite plugin](https://github.com/buildkite-plugins/docker-login-buildkite-plugin) to authenticate with Docker Hub before pushing images.

```yaml
plugins:
  - docker-login#v2.1.0:
      registry: https://index.docker.io/v1/
      username: "${DOCKER_USERNAME}"
      password-env: DOCKER_PASSWORD
```

###### Amazon ECR

Use the [ECR Buildkite plugin](https://github.com/buildkite-plugins/ecr-buildkite-plugin) to authenticate with Amazon ECR before pushing images.

```yaml
plugins:
  - ecr#v3.3.0:
      login: true
      account-ids:
        - <aws-account-id>
      region: <aws-region>
```

##### Complete pipeline example

The following example shows a complete step with Namespace authentication, Buildx setup, and a registry plugin. Uncomment the authentication option and registry plugin that match the environment.

```yaml
agents:
  queue: kubernetes

steps:
  - label: ":docker: Build with Namespace"
    plugins:
      # Uncomment the registry plugin that matches your destination.
      # Docker Hub:
      # - docker-login#v2.1.0:
      #     registry: https://index.docker.io/v1/
      #     username: "${DOCKER_USERNAME}"
      #     password-env: DOCKER_PASSWORD
      # Amazon ECR:
      # - ecr#v3.3.0:
      #     login: true
      #     account-ids:
      #       - <aws-account-id>
      #     region: <aws-region>
    command: |
      # Option A: Authenticate using Buildkite OIDC (recommended)
      OIDC_TOKEN=$$(buildkite-agent oidc request-token --audience federation.namespaceapis.com)
      /root/.ns/bin/nsc auth exchange-oidc-token \
        --token "$$OIDC_TOKEN" \
        --tenant_id <tenant-id>

      # Option B: Authenticate using AWS Cognito
      # /root/.ns/bin/nsc auth exchange-aws-cognito-token \
      #   --aws_region <aws-region> \
      #   --identity_pool <identity-pool-id> \
      #   --tenant_id <tenant-id>

      # Configure Namespace Buildx builder and push multi-platform image
      /root/.ns/bin/nsc docker buildx setup --background --use
      /root/.ns/bin/nsc docker login
      docker buildx build \
        --builder nsc-remote \
        --platform linux/amd64,linux/arm64 \
        -t nscr.io/<workspace-id>/<image-name>:latest \
        --push \
        .

      # Alternative: push to another registry (the same one configured above with a plugin)
      # docker buildx build \
      #   --builder nsc-remote \
      #   --platform linux/amd64,linux/arm64 \
      #   -t <your-registry>/<image-name>:latest \
      #   --push \
      #   .
```

##### Troubleshooting

This section covers the possible issues that might arise when using Namespace remote builder container builds and how to fix them.

###### Authentication fails

- OIDC "nothing matched" error: contact [Namespace support](https://namespace.so/support) to register `https://agent.buildkite.com` as the OIDC issuer, or verify the AWS Cognito setup.
- Pod using node role: verify that the EKS OIDC provider is enabled and the service account has the IAM role annotation.
- Cognito permission denied: ensure that the IAM role policy includes `cognito-identity:GetOpenIdTokenForDeveloperIdentity`.

###### Registry authentication fails

Run `nsc docker login` before building.

###### Builder not found

Run `nsc docker buildx setup --background --use` before building.
###### Shell execution errors

Configure the agent to use bash in Helm values:

```yaml
config:
  agent-config:
    shell: /bin/bash -e -c
```

---

### Depot remote builders

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/depot-agent-kubernetes-container-builds

#### Container builds with Depot

[Depot](https://depot.dev/) provides remote builders that accelerate Docker builds by running them on dedicated build infrastructure. You can use Depot to build container images on agents that are auto-scaled by the [Buildkite Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s), offloading build workloads from your Kubernetes cluster to Depot's optimized build infrastructure.

> 🚧 Warning!
> The Depot installation method uses `curl | sh`, which executes scripts directly. Review the installation script before using it in production environments. Consider downloading and verifying the script separately, or installing the [Depot CLI](https://github.com/depot/cli) in your base agent image for better security control.

##### Special considerations regarding Agent Stack for Kubernetes

When using Depot with the Buildkite Agent Stack for Kubernetes, consider the following requirements and best practices for successful container builds.

###### Depot project configuration

Depot requires a project ID to route builds to the correct infrastructure. You can configure your Depot project in one of the following ways:

1. The `DEPOT_PROJECT_ID` environment variable.
1. A `depot.json` configuration file in your repository.
1. The `--project` command-line flag in `depot` commands.

###### Environment variable approach (recommended for Kubernetes)

Set `DEPOT_PROJECT_ID` in your Kubernetes pod specification.
This approach is recommended for Kubernetes environments as it's easier to manage via secrets and doesn't require repository changes:

```yaml
# values.yaml
config:
  pod-spec-patch:
    env:
      - name: DEPOT_PROJECT_ID
        value: "your-project-id"
```

###### Configuration file (depot.json) approach

Use `depot init` to create a `depot.json` file in your repository. You'll need to authenticate with Depot first to select from your available projects:

```bash
# Authenticate with Depot
depot login

# Initialize the project configuration
depot init
```

The `depot init` command creates a `depot.json` file in the current directory with the following format:

```json
{
  "id": "your-project-id"
}
```

This file is automatically detected by the Depot CLI when present in your repository root. The `depot.json` file should be committed to your repository.

###### Command-line flag approach

You can specify the project ID using the `--project` flag when using `depot` commands directly:

```yaml
steps:
  - label: ":docker: Build with depot command"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot build --project=your-project-id -t my-image .
```

Note that when you are using `depot configure-docker`, the project ID should be specified via the `DEPOT_PROJECT_ID` environment variable or a `depot.json` file, as this configures standard `docker build` commands to use Depot. For Kubernetes environments, the environment variable approach is recommended as it provides the most flexibility and doesn't require repository changes.

###### Depot CLI installation

Depot integrates with Docker via a CLI plugin. The [Depot CLI](https://github.com/depot/cli) must be installed in your build containers to enable remote builds. You can install it in your base agent image or as part of your build steps.
Install the Depot CLI in your agent image:

```dockerfile
FROM buildkite/agent:latest

# Install Depot CLI
# Note: Review the installation script before using in production
RUN curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
```

Alternatively, you can install it at runtime in your build steps:

```yaml
steps:
  - label: "Install Depot and build"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build -t my-image .
```

###### Authentication

Depot requires authentication to access your projects. Depot supports [OIDC trust relationships with Buildkite](/docs/pipelines/security/oidc), which is the recommended authentication method as it provides ephemeral tokens without managing static credentials.

###### OIDC trust relationships (recommended)

Configure an OIDC trust relationship between Buildkite and Depot to use ephemeral tokens automatically. This eliminates the need to manage static tokens and improves security. To do this, set up the OIDC trust relationship in your Depot project settings, then configure your Buildkite pipeline to use it:

```yaml
# values.yaml
config:
  pod-spec-patch:
    env:
      - name: DEPOT_PROJECT_ID
        value: "your-project-id"
      # OIDC authentication is handled automatically by Depot CLI
      # No DEPOT_TOKEN needed when using OIDC trust relationships
```

The Depot CLI automatically detects Buildkite's OIDC credentials and uses them for authentication when an OIDC trust relationship is configured.

###### Static token authentication (alternative)

For environments where OIDC is not available, you can use static project tokens. Store your Depot token as a Kubernetes secret and mount it as an environment variable in your build pods.
Create a Kubernetes secret for your Depot token:

```bash
kubectl create secret generic depot-token \
  --from-literal=token=<your-depot-token> \
  --namespace buildkite
```

Configure the Agent Stack to use the Depot token:

```yaml
# values.yaml
config:
  pod-spec-patch:
    env:
      - name: DEPOT_TOKEN
        valueFrom:
          secretKeyRef:
            name: depot-token
            key: token
      - name: DEPOT_PROJECT_ID
        value: "your-project-id"
```

> 🚧 Warning!
> Static tokens persist until rotated. OIDC trust relationships provide ephemeral tokens that automatically expire, reducing the risk of credential exposure. Use OIDC whenever possible.

###### Build context and file access

Depot builds require access to your build context, which is typically the checked-out repository in the pod's filesystem. Ensure your build context is accessible and includes all necessary files for the build.

For large build contexts, Depot efficiently handles context uploads and can optimize transfers. However, consider using `.dockerignore` files to exclude unnecessary files from the build context, which Depot respects when uploading the build context.

###### Resource allocation

Since builds run on Depot's infrastructure, your Kubernetes pods don't need to allocate resources for Docker daemons or build processes. This allows you to use smaller, more cost-effective pods that primarily handle:

- Repository checkout
- Build orchestration
- Artifact handling
- Post-build steps

##### Configuration approaches with Depot

Depot supports various workflow patterns for building container images, each suited to specific use cases in Kubernetes environments.

> 📘
> The examples below include `DEPOT_TOKEN` in the environment variables. If you're using OIDC trust relationships (recommended), you can omit `DEPOT_TOKEN` as authentication is handled automatically. Only include `DEPOT_TOKEN` when using static token authentication.

###### Basic Docker build with Depot

You can build images using Depot's remote builders with standard `docker build` commands.
Configure Depot before building:

```yaml
steps:
  - label: ":docker: Build with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

A sample Dockerfile would look like this:

```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```

###### Building and pushing with Depot

Build and push images using Depot's remote builders:

```yaml
steps:
  - label: ":docker: Build and push with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} .
      docker push your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER}
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

If you're using a private repository, authenticate before pushing:

```yaml
steps:
  - label: ":docker: Build and push with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      echo "${REGISTRY_PASSWORD}" | docker login your-registry.example.com -u "${REGISTRY_USERNAME}" --password-stdin
      docker build -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} .
      docker push your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER}
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
      REGISTRY_USERNAME: "${REGISTRY_USERNAME}"
      REGISTRY_PASSWORD: "${REGISTRY_PASSWORD}"
```

###### Using Depot with Docker Buildx

Depot integrates with Docker Buildx for advanced build features, including multi-platform builds:

```yaml
steps:
  - label: ":docker: Multi-platform build with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker buildx build \
        --platform linux/amd64,linux/arm64 \
        -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} \
        --push \
        .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

###### Using Depot with Docker Compose

Depot works seamlessly with Docker Compose builds. Configure Depot before running compose builds:

```yaml
steps:
  - label: ":docker: Build with Depot and Docker Compose"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker compose build
      docker compose push
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

Alternatively, you can use Depot's `bake` command for parallel Compose builds:

```yaml
steps:
  - label: ":docker: Build with Depot bake"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot bake --load -f ./docker-compose.yml
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

##### Customizing builds with Depot

You can customize your Depot builds by using Depot-specific features and configuration options.
###### Using build arguments

Pass build arguments to customize image builds at build time:

```yaml
steps:
  - label: ":docker: Build with arguments using Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build \
        --build-arg NODE_ENV=production \
        --build-arg BUILD_NUMBER=${BUILDKITE_BUILD_NUMBER} \
        --build-arg API_URL=${API_URL} \
        -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} \
        .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

###### Multi-platform builds

Build for multiple architectures using Depot's multi-platform support:

```yaml
steps:
  - label: ":docker: Multi-platform build"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker buildx build \
        --platform linux/amd64,linux/arm64 \
        -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} \
        -t your-registry.example.com/app:latest \
        --push \
        .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

###### Using Depot cache

Depot provides automatic caching for faster builds. Depot manages cache automatically using its own infrastructure, but you can also configure registry-based cache for additional control:

```yaml
steps:
  - label: ":docker: Build with Depot and registry cache"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker buildx build \
        --cache-from type=registry,ref=your-registry.example.com/app:cache \
        --cache-to type=registry,ref=your-registry.example.com/app:cache,mode=max \
        -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} \
        --push \
        .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

Depot provides native caching that works automatically when you use `depot configure-docker`, so no additional configuration is required.
Depot manages cache layers on its infrastructure, which persist across builds within the same project. The registry cache example above is optional and provides additional cache persistence across different build environments or projects.

##### Troubleshooting

This section can help you identify and solve issues that might arise when using Depot with Buildkite Pipelines on Kubernetes.

###### Depot authentication failures

Builds fail with authentication errors when Depot cannot access your project.

###### Missing or invalid authentication credentials or project ID

For OIDC trust relationships (recommended), ensure the trust relationship is configured in your Depot project settings and that `DEPOT_PROJECT_ID` is set in your pipeline:

```yaml
config:
  pod-spec-patch:
    env:
      - name: DEPOT_PROJECT_ID
        value: "your-project-id"
      # OIDC authentication handled automatically, no DEPOT_TOKEN needed
```

For static token authentication, ensure your Depot token and project ID are correctly configured:

```yaml
config:
  pod-spec-patch:
    env:
      - name: DEPOT_TOKEN
        valueFrom:
          secretKeyRef:
            name: depot-token
            key: token
      - name: DEPOT_PROJECT_ID
        value: "your-project-id"
```

Verify authentication by checking your Depot dashboard. For OIDC, ensure the trust relationship is active. For static tokens, verify the token has access to the specified project.

###### Depot CLI not found

Builds fail with "depot: command not found" errors.

###### Depot CLI is not installed in the build container

Install the Depot CLI before using it:

```yaml
steps:
  - label: "Install Depot CLI"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

Alternatively, include the Depot CLI installation in your base agent image.

###### Build context upload failures

Builds fail when uploading the build context to Depot.

###### Network issues or build context too large
To troubleshoot this issue:

- Check network connectivity from your Kubernetes pods to Depot.
- Verify firewall rules allow outbound HTTPS traffic to `depot.dev`.
- Use `.dockerignore` files to reduce build context size.
- Check Depot service status.

###### Docker not configured for Depot

Builds run locally instead of on Depot infrastructure.

###### Depot Docker plugin not configured

Run `depot configure-docker` before building:

```yaml
steps:
  - label: "Configure and build"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build -t my-image .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

You can confirm builds are using Depot by looking for `[depot]` prefixed log lines in the build output.

###### Registry push failures

Pushing images to registries fails after Depot builds.

###### Authentication or network issues when pushing from Depot infrastructure

Ensure registry credentials are properly configured. For private registries, authenticate before pushing:

```yaml
steps:
  - label: ":docker: Build and push with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      echo "${REGISTRY_PASSWORD}" | docker login your-registry.example.com -u "${REGISTRY_USERNAME}" --password-stdin
      docker build -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} .
      docker push your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER}
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
      REGISTRY_USERNAME: "${REGISTRY_USERNAME}"
      REGISTRY_PASSWORD: "${REGISTRY_PASSWORD}"
```

Note that Depot builds run on Depot infrastructure, so registry authentication must be configured to work from remote builders.

##### Debugging builds

When builds fail or behave unexpectedly with Depot, use these debugging approaches to diagnose issues.
###### Enable verbose output

Use Docker's build output to see detailed build information. Depot builds will show `[depot]` prefixed log lines indicating Depot is handling the build:

```yaml
steps:
  - label: ":docker: Debug build with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build --progress=plain -t my-image .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

The `--progress=plain` flag shows detailed build output, and you can verify Depot is being used by looking for `[depot]` prefixed lines in the build logs.

###### Check Depot build logs

View build logs in the Depot dashboard to see detailed information about build execution, including:

- Build context upload progress
- Layer build steps
- Cache hit/miss information
- Error details

Access your Depot dashboard to view build history and logs for troubleshooting.

###### Verify Depot configuration

Test Depot configuration before running builds:

```yaml
steps:
  - label: "Verify Depot setup"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      depot projects list
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

This verifies authentication and project access before attempting builds.

###### Test builds locally

Test your Dockerfile and build configuration locally before running on Kubernetes:

```bash
# Install Depot CLI locally
curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh

# Configure Depot
depot configure-docker

# Test build
docker build -t my-image .

# Verify build uses Depot (look for [depot] in output)
```

This helps identify issues with build configuration before running in Kubernetes environments.
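The `[depot]` marker described above can also be checked mechanically. The following is a minimal sketch that greps captured build output for `[depot]` prefixed lines to decide whether the build actually ran remotely; the sample log lines are illustrative, not real Depot output:

```shell
#!/bin/sh
# Decide whether a captured build log shows Depot handling the build.
# Depot-handled builds include "[depot]" prefixed lines in their output.
log_file=$(mktemp)

# Illustrative sample of captured `docker build --progress=plain` output.
cat >"$log_file" <<'EOF'
#1 [internal] load build definition from Dockerfile
[depot] launching remote builder
#2 [1/4] FROM node:18-alpine
EOF

if grep -q '^\[depot\]' "$log_file"; then
  result="remote"
  echo "Build ran on Depot's remote builders."
else
  result="local"
  echo "No [depot] lines found. Run 'depot configure-docker' and rebuild."
fi
rm -f "$log_file"
```

In a pipeline step, the same check could run against real output captured with `docker build --progress=plain ... 2>&1 | tee build.log`.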
---

### Troubleshooting

URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/troubleshooting

#### Troubleshooting

If you're experiencing issues with the Buildkite Agent Stack for Kubernetes controller, enable debug mode and log collection to gain better visibility and insight into those issues or any related problems.

##### Enable debug mode

To increase the verbosity of the Buildkite Agent Stack for Kubernetes controller's logs, enable debug mode. Once enabled, the logs will emit the individual, detailed actions performed by the controller while obtaining jobs from Buildkite's API, processing configurations to generate a Kubernetes PodSpec, and creating a new Kubernetes Job. Debug mode can help to identify processing delays or incorrect job processing issues.

Debug mode can be enabled during the [installation](/docs/agent/self-hosted/agent-stack-k8s/installation) (Helm chart deployment) of the Buildkite Agent Stack for Kubernetes controller via the command line:

```bash
helm upgrade --install agent-stack-k8s oci://ghcr.io/buildkite/helm/agent-stack-k8s \
  --namespace buildkite \
  --create-namespace \
  --set config.debug=true \
  --values values.yml
```

Or within the controller's configuration values YAML file:

```yaml
# values.yaml
...
config:
  debug: true
...
```

##### Kubernetes log collection

To enable log collection for the Buildkite Agent Stack for Kubernetes controller, use the [`utils/log-collector`](https://github.com/buildkite/agent-stack-k8s/blob/main/utils/log-collector) script in the controller repository.

###### Prerequisites

- The `kubectl` binary.
- `kubectl` set up and authenticated against the correct Kubernetes cluster.

###### Inputs to the script

When executing the `log-collector` script, you will be prompted for:

- The Kubernetes namespace where the Buildkite Agent Stack for Kubernetes controller is deployed.
- The Buildkite job ID to collect Job and Pod logs for.
###### Gathering of data and logs

The `log-collector` script will gather the following information:

- Kubernetes Job and Pod resource details for the Buildkite Agent Stack for Kubernetes controller.
- Kubernetes Pod logs for the Buildkite Agent Stack for Kubernetes controller.
- Kubernetes Job and Pod resource details for the Buildkite job ID (if provided).
- Kubernetes Pod logs that executed the Buildkite job ID (if provided).

The logs will be archived in a tarball named `logs.tar.gz` in the current directory. If requested, these logs may be provided via email to Buildkite Support (`support@buildkite.com`).

##### Common issues and fixes

Below are some common issues that users may experience when using the Buildkite Agent Stack for Kubernetes controller to process Buildkite jobs.

###### Jobs are being created, but not processed by controller

The primary requirement for the Buildkite Agent Stack for Kubernetes controller to acquire and process a Buildkite job is a matching `queue` tag. If the controller is configured to process scheduled jobs with the tag `"queue=kubernetes"`, you will need to ensure that your pipeline YAML is [targeting the same queue](https://buildkite.com/docs/agent/queues#targeting-a-queue-from-a-pipeline) at either the pipeline level or at each step level. If a job is created without a queue target, the [default queue](https://buildkite.com/docs/agent/queues#assigning-a-self-hosted-agent-to-a-queue-the-default-self-hosted-queue) will be applied.

The Buildkite Agent Stack for Kubernetes controller expects all jobs to have a `queue` tag explicitly defined, even for "default" cluster queues. Any job missing a `queue` tag will be skipped by the controller during processing, and the controller emits the following log:

```
job missing 'queue' tag, skipping...
```

To view the agent tags applied to your job(s), the following GraphQL query can be executed (be sure to substitute your organization's slug and cluster ID):

```graphql
query getClusterScheduledJobs {
  organization(slug: "<organization-slug>") {
    jobs(
      state: [SCHEDULED]
      type: [COMMAND]
      order: RECENTLY_CREATED
      first: 100
      clustered: true
      cluster: "<cluster-id>"
    ) {
      count
      edges {
        node {
          ... on JobTypeCommand {
            url
            uuid
            agentQueryRules
          }
        }
      }
    }
  }
}
```

This will return the `100` newest created jobs for the `<cluster-id>` cluster in the `<organization-slug>` organization that are in a `scheduled` state and waiting for the controller to convert each of them to a Kubernetes Job. Each Buildkite job's agent tags will be defined under `agentQueryRules`.

###### Controller stops accepting new jobs from a cluster queue

Sometimes the count of jobs in the `waiting` state in the Buildkite Pipelines UI may increase, yet no new pods are created. Reviewing the logs may reveal a `max-in-flight reached` error, for example:

```
DEBUG   limiter scheduler/limiter.go:77 max-in-flight reached   {"in-flight": 25}
```

###### Initial troubleshooting steps

1. Enable the debug log and look for errors related to `max-in-flight reached`.
1. Confirm that no new Kubernetes jobs are created while the UI displays the jobs as `waiting`.

###### Workaround

Execute the `kubectl -n buildkite rollout restart deployment agent-stack-k8s` command to restart the controller pod and clear the `max-in-flight reached` condition, as this will allow scheduling to resume.

###### Fix

If you are using any version of the controller older than [v0.27.0](https://github.com/buildkite/agent-stack-k8s/releases/tag/v0.27.0), [upgrade](https://github.com/buildkite/agent-stack-k8s/releases) to the latest version.

###### Wrong exit code affects auto job retries

The exit code from the Kubernetes pods may not be passed through the agent, preventing the use of [exit-based retries](/docs/pipelines/configure/retry).
This is what the error could look like: ``` The following init containers failed: CONTAINER EXIT CODE SIGNAL REASON MESSAGE My-agent 137 0 ContainerStatusUnknown The container could not be located when the pod was terminated ``` This scenario can occur when the Buildkite Pipelines UI shows exit code `137`, but the exit code emitted from the container was `1`. As a result, automatic retries configured for exit code `1` will not be triggered. ###### Workaround Add a retry rule for all stack-level failures. An example of such a configuration would look like this: ``` retry: automatic: - signal_reason: "stack_error" limit: 3 ``` ###### Fix Upgrading to version [v0.29.0](https://github.com/buildkite/agent-stack-k8s/releases/tag/v0.29.0) is the recommended fix, as this release added a `stack_error` exit reason to the agent, providing better visibility into stack-level errors. --- ### Amazon ECR authentication URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/migrate-from-elastic-ci-stack-for-aws/ecr #### Amazon ECR authentication The [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) pre-configures the [`ecr` plugin](https://github.com/buildkite-plugins/ecr-buildkite-plugin) to run automatically as a local [agent hook](/docs/agent/self-hosted/agent-stack-k8s/agent-hooks-and-plugins) through the `pre-command` hook. This provides automatic authentication to Amazon ECR before each job runs, with no configuration required in your pipeline YAML. When using Agent Stack for Kubernetes, you need to add the `ecr` plugin to each [pipeline step](/docs/pipelines/configure/defining-steps) that needs ECR access and ensure AWS credentials are available to your jobs. > 📘 Other Docker registries > For Docker Hub, Google Container Registry, or other Docker registries, see [Docker registry authentication](/docs/agent/self-hosted/agent-stack-k8s/migrate-from-elastic-ci-stack-for-aws/docker-login) instead.
The `docker-login` plugin provides authentication for non-ECR registries. ##### Migrating to Agent Stack for Kubernetes When migrating to Agent Stack for Kubernetes, you need to explicitly configure the `ecr` plugin in your pipeline YAML for each step that needs ECR access. The plugin handles `docker login` before each step, using AWS credentials that are refreshed automatically. ###### Provide AWS credentials to your Pods The `ecr` plugin requires AWS credentials to be available in your job Pods. You can provide these credentials using IAM Roles for Service Accounts (recommended for EKS clusters), AWS credentials stored as Kubernetes Secrets, or the [`aws-assume-role-with-web-identity` plugin](https://buildkite.com/resources/plugins/buildkite-plugins/aws-assume-role-with-web-identity-buildkite-plugin/) with [Buildkite OIDC](/docs/pipelines/security/oidc) tokens. To learn more about all available configuration options for the `ecr` plugin, see the plugin's [Options section of its README](https://github.com/buildkite-plugins/ecr-buildkite-plugin#options). ###### Using IRSA IAM Roles for Service Accounts (IRSA) is the recommended approach for EKS clusters. IRSA allows your Kubernetes Pods to assume AWS IAM roles automatically. AWS handles credential rotation, so you don't need to manage tokens manually. For more information, see the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) on IAM Roles for Service Accounts.
To start using IRSA, first create a Kubernetes [service account](https://kubernetes.io/docs/concepts/security/service-accounts/) with the IAM role annotation: ```yaml #### serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: buildkite-agent namespace: buildkite annotations: eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/buildkite-agent-ecr-role ``` Then configure the controller to use this service account: ```yaml #### values.yaml config: pod-spec-patch: serviceAccountName: buildkite-agent ``` ###### Using AWS credentials as Kubernetes Secrets Alternatively, you can store AWS credentials as a Kubernetes Secret: ```bash kubectl create secret generic aws-credentials \ --from-literal=AWS_ACCESS_KEY_ID='your-access-key' \ --from-literal=AWS_SECRET_ACCESS_KEY='your-secret-key' \ -n buildkite ``` Then configure the controller to mount these credentials: ```yaml #### values.yaml config: default-command-params: envFrom: - secretRef: name: aws-credentials ``` With the credentials configured at the controller level, they are automatically available to all job containers. Add the `ecr` plugin to your pipeline steps: ```yaml #### pipeline.yaml steps: - label: ":docker: Build and push to ECR" commands: | docker build -t myimage:latest . docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:latest agents: queue: kubernetes plugins: - ecr#v2.11.0: region: us-east-1 - kubernetes: podSpec: containers: - image: my-custom-image:latest ``` > 📘 Container image requirements > The `ecr` plugin requires both the AWS CLI and Docker to be available in your container. You'll need a custom image that includes both tools. ###### Using the AWS assume-role-with-web-identity plugin The [AWS assume-role-with-web-identity plugin](https://github.com/buildkite-plugins/aws-assume-role-with-web-identity-buildkite-plugin) uses Buildkite OIDC tokens to assume an AWS IAM role without storing AWS credentials.
You won't need to manage long-lived credentials in Kubernetes Secrets. Before using this plugin, you must configure an OIDC identity provider in AWS with a provider URL of `https://agent.buildkite.com` and an audience of `sts.amazonaws.com`. See the plugin's [AWS configuration documentation](https://buildkite.com/resources/plugins/buildkite-plugins/aws-assume-role-with-web-identity-buildkite-plugin/) for detailed setup instructions. Add the plugin before the `ecr` plugin in your pipeline: ```yaml #### pipeline.yaml steps: - label: ":docker: Build and push to ECR" commands: | docker build -t myimage:latest . docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:latest agents: queue: kubernetes plugins: - aws-assume-role-with-web-identity#v1.4.0: role-arn: arn:aws:iam::123456789012:role/ecr-access-role - ecr#v2.11.0: region: us-east-1 - kubernetes: podSpec: containers: - image: my-custom-image:latest ``` > 📘 Container image requirements > Both the `aws-assume-role-with-web-identity` and `ecr` plugins require the AWS CLI to be available in your container, and the commands require Docker. You'll need a custom image that includes both the AWS CLI and Docker. ##### Using imagePullSecrets for pulling container images If you need Kubernetes to be able to authenticate when pulling private container images from ECR for your job Pods, configure authentication for the [kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/). This is separate from the `ecr` plugin, which handles authentication for Docker commands that run inside your job containers. Kubernetes provides two approaches for kubelet authentication to ECR. You can use the kubelet credential provider, which dynamically retrieves credentials without storing them in your cluster (recommended), or create a Docker registry secret with static credentials that expire after 12 hours.
###### Using kubelet credential provider The kubelet credential provider is the recommended approach. It dynamically retrieves ECR credentials without storing them as secrets in your cluster. This eliminates the 12-hour token expiry issue and reduces credential management overhead. This approach requires Kubernetes 1.26 or later and cluster-level configuration access to install the credential provider plugin on all nodes. For setup instructions, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/kubelet-credential-provider/) on kubelet credential providers. Configure the credential provider on your cluster nodes with a configuration file: ```yaml #### /etc/kubernetes/credentialproviders/config.yaml apiVersion: kubelet.config.k8s.io/v1 kind: CredentialProviderConfig providers: - name: ecr-credential-provider matchImages: - "*.dkr.ecr.*.amazonaws.com" - "*.dkr.ecr.*.amazonaws.com.cn" - "*.dkr.ecr-fips.*.amazonaws.com" defaultCacheDuration: "12h" apiVersion: credentialprovider.kubelet.k8s.io/v1 ``` Once configured at the cluster level, the kubelet automatically authenticates to ECR when pulling images. No pipeline configuration changes are required. ###### Using Docker registry secrets If you cannot configure the kubelet credential provider, you can create a Kubernetes secret with ECR credentials: ```bash kubectl create secret docker-registry ecr-credentials \ --docker-server=123456789012.dkr.ecr.us-east-1.amazonaws.com \ --docker-username=AWS \ --docker-password="$(aws ecr get-login-password --region us-east-1)" \ -n buildkite ``` > 📘 Token expiry > ECR tokens expire after 12 hours. You'll need to refresh this secret periodically using a Kubernetes CronJob that runs every few hours to fetch a new token and update the secret. For more information about CronJobs, see the [Kubernetes documentation on CronJobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/). 
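The periodic refresh described in the note above can be sketched as a Kubernetes CronJob. This is a minimal sketch, not an official manifest: the `ecr-token-refresher` service account, the `my-registry/aws-kubectl` image, and the schedule are assumptions — the service account needs RBAC permission to manage Secrets in the `buildkite` namespace, and the image must contain both the AWS CLI and `kubectl`:

```yaml
#### ecr-refresh-cronjob.yaml (sketch)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ecr-token-refresh
  namespace: buildkite
spec:
  schedule: "0 */8 * * *"   # every 8 hours, well inside the 12-hour token expiry
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ecr-token-refresher   # hypothetical; needs RBAC to update Secrets
          restartPolicy: OnFailure
          containers:
            - name: refresh
              image: my-registry/aws-kubectl:latest  # any image with the AWS CLI and kubectl
              command:
                - /bin/sh
                - -c
                - |
                  # Recreate the registry secret with a fresh ECR token.
                  kubectl create secret docker-registry ecr-credentials \
                    --docker-server=123456789012.dkr.ecr.us-east-1.amazonaws.com \
                    --docker-username=AWS \
                    --docker-password="$(aws ecr get-login-password --region us-east-1)" \
                    -n buildkite \
                    --dry-run=client -o yaml | kubectl apply -f -
```

The `--dry-run=client -o yaml | kubectl apply` pattern updates the existing secret in place instead of failing because it already exists.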
###### Configure imagePullSecrets in your pipeline When using Docker registry secrets, add the `imagePullSecrets` configuration to your pipeline using the Kubernetes plugin: ```yaml #### pipeline.yaml steps: - label: ":docker: Run private ECR image" command: echo "Running from private ECR image" agents: queue: kubernetes plugins: - kubernetes: podSpec: imagePullSecrets: - name: ecr-credentials containers: - image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-private-image:latest ``` ###### Configure imagePullSecrets at the controller level When using Docker registry secrets, you can configure `imagePullSecrets` at the controller level to apply them to all jobs in your cluster: ```yaml #### values.yaml config: pod-spec-patch: imagePullSecrets: - name: ecr-credentials ``` This configuration automatically adds the image pull secret to all job Pods without requiring per-pipeline configuration. --- ### Docker registry authentication URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/migrate-from-elastic-ci-stack-for-aws/docker-login #### Docker registry authentication The [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) pre-configures the [`docker-login` plugin](https://github.com/buildkite-plugins/docker-login-buildkite-plugin) to run automatically as a local [agent hook](/docs/agent/self-hosted/agent-stack-k8s/agent-hooks-and-plugins) through the `pre-command` hook. This provides automatic authentication to Docker registries before each job runs, with no configuration required in your pipeline YAML. The Agent Stack for Kubernetes requires explicit configuration in your pipeline YAML. The `docker-login` plugin must be added to each [pipeline step](/docs/pipelines/configure/defining-steps) that needs registry access, and credentials must be managed as Kubernetes Secrets.
> 📘 Amazon ECR registries > For Amazon ECR registries, see [Amazon ECR authentication](/docs/agent/self-hosted/agent-stack-k8s/migrate-from-elastic-ci-stack-for-aws/ecr) instead. The `ecr` plugin provides a better experience for ECR by automatically handling authentication and credential refresh. ##### Migrating to Agent Stack for Kubernetes To learn more about all available configuration options for the `docker-login` plugin, see the plugin's [Configurations section of its README](https://github.com/buildkite-plugins/docker-login-buildkite-plugin#configurations). ###### Store credentials as a generic secret Create a Kubernetes Secret containing your Docker registry password: ```bash kubectl create secret generic docker-login-credentials \ --from-literal=DOCKER_LOGIN_PASSWORD='your-password-here' \ -n buildkite ``` ###### Configure the plugin in your pipeline Add the `docker-login` plugin to each step that requires Docker registry access: ```yaml #### pipeline.yaml steps: - label: ":docker: Build and push" commands: | docker build -t myimage:latest . docker push myimage:latest agents: queue: kubernetes plugins: - docker-login#v3.0.0: username: myusername password-env: DOCKER_LOGIN_PASSWORD server: docker.io # optional, defaults to Docker Hub - kubernetes: podSpec: containers: - image: docker:latest env: - name: DOCKER_LOGIN_PASSWORD valueFrom: secretKeyRef: name: docker-login-credentials key: DOCKER_LOGIN_PASSWORD ``` ###### Using controller configuration for all jobs If all jobs in your cluster need to authenticate to the same Docker registry, you can configure the credentials at the controller level instead of per-pipeline: ```yaml #### values.yaml config: default-command-params: envFrom: - secretRef: name: docker-login-credentials ``` You'll still need to add the `docker-login` plugin to your pipeline steps, but the credentials will be automatically available to all containers.
##### Using imagePullSecrets for pulling container images If you need Kubernetes to authenticate when pulling private container images for your job pods, use `imagePullSecrets`. This is a Kubernetes-native feature separate from the `docker-login` plugin. For more information about `imagePullSecrets`, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/). ###### Create a Docker registry secret Use the `kubectl create secret docker-registry` command to create a Kubernetes secret specifically for pulling images: ```bash kubectl create secret docker-registry my-registry-credentials \ --docker-server=docker.io \ --docker-username=myusername \ --docker-password=mypassword \ --docker-email=my@email.com \ -n buildkite ``` ###### Configure imagePullSecrets in your pipeline Add the `imagePullSecrets` configuration to your pipeline using the Kubernetes plugin: ```yaml #### pipeline.yaml steps: - label: ":docker: Run private image" command: echo "Running from private image" agents: queue: kubernetes plugins: - kubernetes: podSpec: imagePullSecrets: - name: my-registry-credentials containers: - image: myusername/my-private-image:latest ``` ###### Configure imagePullSecrets at the controller level To use the same registry credentials for all jobs in your cluster, configure `imagePullSecrets` in your controller values file: ```yaml #### values.yaml config: pod-spec-patch: imagePullSecrets: - name: my-registry-credentials ``` This automatically adds the image pull secret to all job pods without requiring per-pipeline configuration. --- ### Pre-installed packages URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/migrate-from-elastic-ci-stack-for-aws/packages #### Pre-installed packages The [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) AMIs include pre-installed system packages and tools that your builds may depend on.
When migrating to Agent Stack for Kubernetes, you need to ensure the required tools are available in your container images. This guide covers the differences between the pre-installed packages in the Elastic CI Stack for AWS and the default Buildkite agent container image, and how to handle missing packages. ##### Package comparison The Elastic CI Stack for AWS AMI includes the following packages: | Package | Available in `buildkite/agent:latest` | Notes | | ------- | ------------------------------------- | ----- | | `git` | Yes | Core functionality | | `git-lfs` | No | Required for repositories using Git LFS | | `jq` | Yes | JSON processing | | `python` | Yes | Python runtime | | `unzip` | Yes | Archive extraction | | `wget` | Yes | File downloads | | `lsof` | Yes | Process diagnostics | | `docker` | Yes | Container builds | | `zip` | No | Archive creation | | `pigz` | No | Parallel compression | | `aws-cli` | No | AWS operations | | `amazon-ecr-credential-helper` | No | ECR authentication | | `amazon-cloudwatch-agent` | No | AWS-specific monitoring | | `amazon-ssm-agent` | No | AWS-specific management | | `aws-cfn-bootstrap` | No | AWS CloudFormation | | `ec2-instance-connect` | No | AWS-specific SSH | | `mdadm` | No | RAID management | | `nvme-cli` | No | NVMe disk management | | `python-pip` | No | Python package management | | `python-setuptools` | No | Python package building | | `bind-utils` | No | DNS utilities (`dig`, `nslookup`) | | `rsyslog` | No | System logging | | `gnupg2` | No | GPG signing and verification | ##### Handling missing packages When a package your builds require is not available in the default agent image, you have three options: - Use a Buildkite [plugin](#handling-missing-packages-using-plugins) that provides the functionality. - Create a [custom container image](#handling-missing-packages-using-custom-container-images) with the required packages. 
- Install packages at runtime using an [agent hook](#handling-missing-packages-using-agent-hooks). ###### Using plugins Plugins can provide tool functionality without modifying your container image. This approach works well for tools with existing plugin support. For AWS CLI operations, use the [`aws-assume-role-with-web-identity` plugin](https://github.com/buildkite-plugins/aws-assume-role-with-web-identity-buildkite-plugin) with OIDC, or provide AWS credentials to a container that includes the AWS CLI. Browse the [plugins directory](/docs/pipelines/integrations/plugins/directory) for plugins that may provide the functionality you need. ###### Using custom container images For packages used frequently across many pipelines, create a custom container image based on the Buildkite agent image or another base image. Create a Dockerfile with the additional packages: ```dockerfile FROM buildkite/agent:latest USER root RUN apt-get update && apt-get install -y \ git-lfs \ zip \ pigz \ python3-pip \ dnsutils \ gnupg2 \ && rm -rf /var/lib/apt/lists/* USER buildkite-agent ``` For AWS CLI, install using pip or download the official installer: ```dockerfile FROM buildkite/agent:latest USER root RUN apt-get update && apt-get install -y \ curl \ unzip \ && curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \ && unzip awscliv2.zip \ && ./aws/install \ && rm -rf awscliv2.zip aws \ && rm -rf /var/lib/apt/lists/* USER buildkite-agent ``` Build and push the image to your container registry: ```bash docker build -t my-registry/buildkite-agent-custom:latest . docker push my-registry/buildkite-agent-custom:latest ``` Use the custom image in your pipeline: ```yaml steps: - label: "Build" command: "make build" agents: queue: kubernetes image: my-registry/buildkite-agent-custom:latest ``` ###### Using agent hooks For packages needed occasionally or for testing, install them at runtime using an agent hook. 
This approach adds latency to job startup but avoids maintaining custom images. Create a `pre-command` hook that installs the required packages: ```bash #!/bin/bash set -euo pipefail if ! command -v zip &> /dev/null; then apt-get update && apt-get install -y zip fi ``` Create a ConfigMap with the hook: ```bash kubectl create configmap buildkite-hooks \ --from-file=pre-command=pre-command \ --namespace buildkite ``` Configure the controller to use the hook: ```yaml config: agent-config: hooks-path: /buildkite/hooks hooksVolume: name: buildkite-hooks configMap: name: buildkite-hooks defaultMode: 493 ``` > 🚧 Runtime installation limitations > Installing packages at runtime requires root access in your container and adds latency to every job. This approach works for testing but is not recommended for production workloads. ##### AWS-specific packages Several packages in the Elastic CI Stack for AWS are AWS-specific and may not be needed when running on Kubernetes: - `amazon-ssm-agent`: Provides AWS Systems Manager access. Not applicable in Kubernetes. - `aws-cfn-bootstrap`: Used for CloudFormation stack signaling. Not applicable in Kubernetes. - `ec2-instance-connect`: Provides SSH access to EC2 instances. Use `kubectl exec` for pod access instead. - `amazon-cloudwatch-agent`: For CloudWatch metrics and logs. Use Kubernetes-native observability tools or configure container logging to forward to CloudWatch if required. - `mdadm` and `nvme-cli`: Low-level disk management tools. Kubernetes manages storage through PersistentVolumes. If your builds use the AWS CLI for operations like S3 uploads or ECR authentication, include it in a custom container image or use the appropriate Buildkite plugins. See [Amazon ECR authentication](/docs/agent/self-hosted/agent-stack-k8s/migrate-from-elastic-ci-stack-for-aws/ecr) for ECR-specific guidance. 
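Whichever approach you choose, it can help to fail fast when a required tool is missing rather than partway through a build. A minimal sketch that could run from a `pre-command` hook — the `check_tools` helper and its tool list are illustrative, not part of the stack:

```shell
#!/bin/bash
set -euo pipefail

# check_tools: report any of the named commands missing from PATH and fail,
# so the job stops before a half-configured build begins.
check_tools() {
  local missing=""
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
  done
  if [ -n "$missing" ]; then
    echo "Missing required tools:$missing" >&2
    return 1
  fi
}

# The tool list is illustrative -- adjust it to whatever your builds use,
# for example: check_tools git git-lfs jq zip aws
check_tools sh uname
```

Running this early in the job surfaces a missing-package problem as a clear one-line error instead of an obscure failure deep in the build.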
--- ### Hook execution differences URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/migrate-from-elastic-ci-stack-for-aws/hook-execution-differences #### Hook execution differences There is a difference in how agent hooks execute in [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) and [Buildkite Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s). On the Elastic CI Stack for AWS instances, all hooks run in a single agent process. On the Agent Stack for Kubernetes instances, checkout and command phases run in _separate containers_. This separation significantly impacts how hooks can share state and communicate with each other. ##### Separate container execution Hooks are categorized by their lifecycle phase (see [job lifecycle hooks](/docs/agent/hooks#job-lifecycle-hooks)). On the Agent Stack for Kubernetes, they can be categorized as `checkout`, `command` and `environment` hooks. These phases execute in separate containers. ###### Checkout phase hooks Checkout phase hooks (`pre-checkout`, `checkout`, `post-checkout`) run only in the `checkout` container. They have access to checkout-phase environment variables and can modify files in the workspace (which is shared with command containers). However, environment variables set during this phase cannot be directly passed to command containers. ###### Command phase hooks Command phase hooks (`pre-command`, `command`, `post-command`) run only in the user-defined `command` container(s). They do not have access to environment variables set during the checkout hooks, but can access files created by checkout hooks via the shared workspace. ###### Environment hook The environment hook runs _multiple times_ per job (once per container). It executes in the checkout container first, then executes again in each command container. Each execution is isolated. 
> 🚧 Critical difference from the Elastic CI Stack for AWS > Environment variables set during the checkout phase (`pre-checkout`, `checkout`, `post-checkout` hooks) will _not_ be available during the command phase (`pre-command`, `command`, `post-command` hooks). This is operationally different from how hooks are [sourced](/docs/agent/hooks#hook-scopes) on EC2-based Elastic CI Stack agents. ##### Migration strategies When migrating hooks that worked on the Elastic CI Stack for AWS, consider these approaches: ###### 1. Sharing environment variables between phases On the Elastic CI Stack for AWS, this approach works: ```bash #### .buildkite/hooks/post-checkout export MY_CUSTOM_VAR="value" #### .buildkite/hooks/pre-command echo $MY_CUSTOM_VAR # ✅ Available on EC2 ``` On the Agent Stack for Kubernetes, it does not work as expected: ```bash #### .buildkite/hooks/post-checkout export MY_CUSTOM_VAR="value" # Only available in checkout container #### .buildkite/hooks/pre-command echo $MY_CUSTOM_VAR # ❌ Not available in command container ``` **Solution:** Use pipeline-level environment variables or shared files: ```bash #### .buildkite/hooks/post-checkout echo "value" > /workspace/my_custom_var #### .buildkite/hooks/pre-command MY_CUSTOM_VAR=$(cat /workspace/my_custom_var) echo $MY_CUSTOM_VAR # ✅ Works on Kubernetes ``` Or set the variable at pipeline level: ```yaml steps: - label: "My step" env: MY_CUSTOM_VAR: "value" command: echo $MY_CUSTOM_VAR ``` ###### 2. 
Environment hook runs once per container On the Elastic CI Stack for AWS, the hook runs once per job: ```bash #### .buildkite/hooks/environment echo "Running environment hook" # Prints once #### Logs: #### Running environment hook ``` On the Agent Stack for Kubernetes, the hook runs once per container: ```bash #### .buildkite/hooks/environment echo "Running environment hook" # Prints multiple times #### Logs: #### Running environment hook # <-- checkout container #### Running environment hook # <-- command container ``` **Solution:** Add guards for operations that should only happen once: ```bash #### .buildkite/hooks/environment if [[ "$BUILDKITE_BOOTSTRAP_PHASES" == *"checkout"* ]]; then # Only run in checkout container echo "Running once in checkout container" fi if [[ "$BUILDKITE_BOOTSTRAP_PHASES" == *"command"* ]]; then # Only run in command container echo "Running hook in command container" fi ``` ###### 3. Checkout skip behavior When using `checkout: skip: true`: On Elastic CI Stack for AWS agents, hooks still run in the agent process, even when checkout is skipped. On Agent Stack for Kubernetes, the checkout container is not created, so: - Checkout-related hooks do not execute at all - Only the `environment` hook and command-related hooks run in the command container(s) **Solution:** If your jobs depend on checkout-phase hooks running, don't rely on them when checkout is skipped; instead, move the logic to command-phase hooks. ###### 4. Plugin permission issues with non-root users On the Elastic CI Stack for AWS, plugins run with consistent user permissions throughout the job lifecycle. Command hooks have access to plugin resources without permission conflicts since the agent process runs with the same user context. On the Agent Stack for Kubernetes, plugins owned by root can cause permission issues when command containers run as non-root users.
This results in plugin access failures when command-phase hooks attempt to execute or read plugin files. **Solution:** Adjust file permissions on plugin files to allow non-root users to access them. Set appropriate read and execute permissions on the files in the command container (for example, `chmod 755` for directories, `chmod 644` for files, or `chmod 755` for executable scripts). ##### Testing your migration When migrating from the Elastic CI Stack to the Agent Stack for Kubernetes: 1. **Audit your hooks:** Review all agent hooks and repository hooks for cross-phase dependencies. 1. **Test in isolation:** Set up a test cluster with the Agent Stack for Kubernetes. 1. **Verify environment variables:** Ensure critical environment variables are set at the pipeline level, not in hooks. 1. **Check side effects:** If your `environment` hook has side effects (logging, API calls, counters), ensure they work correctly when run multiple times. 1. **Monitor build logs:** Compare build output between Elastic CI Stack and Agent Stack for Kubernetes to identify unexpected behavior. ##### Additional resources - [Agent hooks and plugins on Kubernetes](/docs/agent/self-hosted/agent-stack-k8s/agent-hooks-and-plugins) - [Agent hook execution differences](/docs/agent/self-hosted/agent-stack-k8s/agent-hooks-and-plugins#agent-hook-execution-differences) - [Buildkite agent hooks reference](/docs/agent/hooks) - [Environment variables](/docs/pipelines/configure/environment-variables) --- ### Secrets URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/migrate-from-elastic-ci-stack-for-aws/secrets #### Migrating secrets When migrating from the [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) to the Buildkite Agent Stack for Kubernetes ([agent-stack-k8s](https://github.com/buildkite/agent-stack-k8s)), you need to establish a new approach for managing secrets that were previously stored in S3 buckets.
The Elastic CI Stack for AWS automatically retrieves secrets from S3 and makes them available to jobs. This functionality needs to be replaced when moving to Kubernetes. This guide covers three approaches for migrating secrets when moving to Kubernetes and provides detailed examples for each. ##### S3 secrets in Elastic CI Stack for AWS The Elastic CI Stack for AWS uses an S3 bucket to store secrets that are automatically retrieved by agents and made available to your builds. The stack supports several types of secrets stored at specific paths: - SSH private keys for repository access (`/private_ssh_key`) - Environment variable files (`/env` or `/environment`) - Git credentials for HTTPS cloning (`/git-credentials`) - Individual secret files (`/secret-files/*`) - Pipeline-specific variants of the above (`/{pipeline-slug}/...`) For complete details about S3 secrets in the Elastic CI Stack for AWS, refer to the [S3 secrets bucket](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/security#s3-secrets-bucket) documentation. 
##### Migration approaches When migrating to the Buildkite Agent Stack for Kubernetes, here are three approaches to consider for handling secrets: - Keeping your existing S3 bucket and using the `elastic-ci-stack-s3-secrets-hooks` repository to retrieve secrets - Moving secrets into [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) and exposing them through controller configuration - Moving secrets into [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) and referencing them in your pipeline YAML or through the [agent CLI](/docs/agent/cli/reference) Each approach has different characteristics: | Consideration | S3 with Hooks | Kubernetes Secrets | Buildkite secrets | |--------------|---------------|-------------------|-------------------| | **Migration effort** | Low (reuse existing S3 bucket) | Medium (requires secret extraction and creation) | Medium (requires secret migration to Buildkite Pipelines) | | **Operational complexity** | Medium (requires AWS credentials, hook configuration) | Low (native Kubernetes) | Low (managed by Buildkite Pipelines) | | **Access control** | AWS IAM policies | Kubernetes RBAC | Buildkite access policies | | **Cross-platform** | AWS-specific | Kubernetes-specific | Platform-agnostic | | **Cost** | S3 storage + data transfer | Included with Kubernetes | Included with Buildkite | ##### Continue using S3 secrets bucket This approach uses the [`elastic-ci-stack-s3-secrets-hooks`](https://github.com/buildkite/elastic-ci-stack-s3-secrets-hooks) repository to continue retrieving secrets from your existing S3 bucket. The hooks run in the checkout and command containers to fetch secrets from S3 during job execution. This minimizes migration effort because your secrets remain in S3. 
###### Prerequisites - Existing S3 secrets bucket from Elastic CI Stack for AWS - AWS credentials with read access to the S3 bucket - Kubernetes cluster with [Agent Stack for Kubernetes](https://github.com/buildkite/agent-stack-k8s) version 0.16.0 or later installed (for earlier versions, see the [agent hooks documentation](/docs/agent/self-hosted/agent-stack-k8s/agent-hooks-and-plugins#agent-hooks-in-earlier-versions) for alternative configuration) ###### Implementation The hooks depend on the `s3secrets-helper` binary and the `git-credential-s3-secrets` script. You will need to obtain the required files: ```bash #### Download the hooks repository git clone https://github.com/buildkite/elastic-ci-stack-s3-secrets-hooks.git cd elastic-ci-stack-s3-secrets-hooks #### Option 1: Download pre-built binary from GitHub releases RELEASE_VERSION="v2.8.0" # Check https://github.com/buildkite/elastic-ci-stack-s3-secrets-hooks/releases for latest version curl -Lo s3secrets-helper \ "https://github.com/buildkite/elastic-ci-stack-s3-secrets-hooks/releases/download/${RELEASE_VERSION}/s3secrets-helper-linux-amd64" chmod +x s3secrets-helper #### Option 2: Build the binary from source (requires Go) #### cd s3secrets-helper #### go build -o ../s3secrets-helper #### cd ..
``` Create a ConfigMap for the hook scripts: ```bash kubectl create configmap buildkite-agent-hooks \ --from-file=environment=hooks/environment \ --from-file=pre-exit=hooks/pre-exit \ --namespace buildkite ``` Create a separate ConfigMap for the helper binary and git credential script: ```bash kubectl create configmap s3-secrets-helpers \ --from-file=git-credential-s3-secrets=git-credential-s3-secrets \ --from-file=s3secrets-helper=s3secrets-helper \ --namespace buildkite ``` Create a Kubernetes Secret with AWS credentials: ```bash kubectl create secret generic aws-credentials \ --from-literal=AWS_ACCESS_KEY_ID='YOUR_AWS_ACCESS_KEY' \ --from-literal=AWS_SECRET_ACCESS_KEY='YOUR_AWS_SECRET_KEY' \ --from-literal=AWS_DEFAULT_REGION='us-east-1' \ --namespace buildkite ``` Configure the Agent Stack for Kubernetes controller to mount the hooks, binaries, and provide AWS credentials. Add this to your `values.yaml`: > 📘 Version requirement > The `agent-config` configuration requires Agent Stack for Kubernetes version 0.16.0 or later. For earlier versions, see the [agent hooks documentation](/docs/agent/self-hosted/agent-stack-k8s/agent-hooks-and-plugins#agent-hooks-in-earlier-versions). 
```yaml #### values.yaml config: agent-config: hooks-path: /buildkite/hooks hooksVolume: name: buildkite-hooks configMap: defaultMode: 493 # This is 0755 in octal name: buildkite-agent-hooks default-checkout-params: extraVolumeMounts: - name: s3-helpers mountPath: /usr/local/bin/s3secrets-helper subPath: s3secrets-helper - name: s3-helpers mountPath: /usr/local/bin/git-credential-s3-secrets subPath: git-credential-s3-secrets default-command-params: extraVolumeMounts: - name: s3-helpers mountPath: /usr/local/bin/s3secrets-helper subPath: s3secrets-helper - name: s3-helpers mountPath: /usr/local/bin/git-credential-s3-secrets subPath: git-credential-s3-secrets pod-spec-patch: containers: - name: checkout env: - name: BUILDKITE_PLUGIN_S3_SECRETS_BUCKET value: "example-secrets-bucket" envFrom: - secretRef: name: aws-credentials - name: container-0 env: - name: BUILDKITE_PLUGIN_S3_SECRETS_BUCKET value: "example-secrets-bucket" envFrom: - secretRef: name: aws-credentials volumes: - name: s3-helpers configMap: name: s3-secrets-helpers defaultMode: 0755 ``` Apply the configuration: ```bash helm upgrade agent-stack-k8s oci://ghcr.io/buildkite/helm/agent-stack-k8s \ --namespace buildkite \ --values values.yaml ``` ###### Considerations This approach maintains your existing S3 secret management but requires: - Agent Stack for Kubernetes version 0.16.0 or newer (for the `agent-config` configuration method explained above) - AWS credentials accessible from Kubernetes pods - Network connectivity to AWS S3 - The `s3secrets-helper` binary and `git-credential-s3-secrets` script - Maintenance of hook scripts, binaries, and AWS credential lifecycle - Potential latency from S3 API calls during job startup - Regular updates to binaries when new versions are released This approach works well as a temporary migration step or when you need to maintain consistency with remaining Elastic CI Stack for AWS deployments. 
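Because the hooks export secrets during the job's environment phase, existing pipelines usually keep working without changes. If you want to confirm the hooks are wired up after applying the configuration, a throwaway step along these lines can help (the `MY_SECRET` variable name is a placeholder for a variable in your bucket's `env` file):

```yaml
# pipeline.yaml (illustrative smoke test)
steps:
  - label: "Verify S3 secrets hook"
    command: |
      # The environment hook should have exported variables from the
      # bucket's env file; confirm one is present without printing it.
      if [ -n "${MY_SECRET:-}" ]; then
        echo "MY_SECRET is set"
      else
        echo "MY_SECRET is missing" && exit 1
      fi
    agents:
      queue: kubernetes
```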
##### Migrate to Kubernetes secrets This approach provides a Kubernetes-native secrets management solution as it migrates secrets from S3 into native Kubernetes Secrets and exposes them to jobs using controller configuration or the [`kubernetes` plugin](/docs/agent/self-hosted/agent-stack-k8s/running-builds#defining-steps-kubernetes-plugin). ###### Prerequisites - Access to existing S3 secrets bucket - `kubectl` configured for your Kubernetes cluster - AWS CLI (for downloading secrets from S3) - [Agent Stack for Kubernetes](https://github.com/buildkite/agent-stack-k8s) installed ###### Migrating SSH keys SSH keys stored in S3 at `/private_ssh_key` can be migrated to Kubernetes Secrets. Start the migration with downloading the SSH key from S3: ```bash #### Download the SSH key from S3 aws s3 cp "s3://${SECRETS_BUCKET}/private_ssh_key" ./id_rsa chmod 600 ./id_rsa ``` Create a Kubernetes Secret: ```bash kubectl create secret generic git-ssh-credentials \ --from-file=SSH_PRIVATE_RSA_KEY=./id_rsa \ --namespace buildkite #### Clean up the local key file rm ./id_rsa ``` Configure the controller to mount the SSH key in the checkout container. Add to your `values.yaml`: ```yaml #### values.yaml config: default-checkout-params: envFrom: - secretRef: name: git-ssh-credentials ``` Alternatively, configure it per-pipeline using the `kubernetes` plugin: ```yaml #### pipeline.yaml steps: - label: "Build" command: "make build" agents: queue: kubernetes plugins: - kubernetes: gitEnvFrom: - secretRef: name: git-ssh-credentials ``` For complete details on Git credentials, refer to the [Git credentials](/docs/agent/self-hosted/agent-stack-k8s/git-credentials) documentation. ###### Migrating environment variables Environment variable files stored in S3 at `/env` or `/environment` can be migrated to Kubernetes Secrets. 
Start the migration with downloading the environment file from S3: ```bash #### Download the environment file aws s3 cp "s3://${SECRETS_BUCKET}/env" ./env #### View the contents (format: KEY=VALUE) cat ./env ``` Create a Kubernetes Secret from the environment file: ```bash kubectl create secret generic build-env-vars \ --from-env-file=./env \ --namespace buildkite #### Clean up the local file rm ./env ``` Expose environment variables to all containers: ```yaml #### values.yaml config: default-checkout-params: envFrom: - secretRef: name: build-env-vars default-command-params: envFrom: - secretRef: name: build-env-vars ``` Or configure per-pipeline: ```yaml #### pipeline.yaml steps: - label: "Deploy" command: "deploy.sh" agents: queue: kubernetes plugins: - kubernetes: podSpecPatch: containers: - name: checkout envFrom: - secretRef: name: build-env-vars - name: container-0 envFrom: - secretRef: name: build-env-vars ``` To expose specific environment variables individually: ```yaml #### values.yaml config: pod-spec-patch: containers: - name: container-0 env: - name: API_KEY valueFrom: secretKeyRef: name: build-env-vars key: API_KEY - name: DATABASE_URL valueFrom: secretKeyRef: name: build-env-vars key: DATABASE_URL ``` ###### Migrating Git credentials Git credentials files stored in S3 at `/git-credentials` can be migrated to Kubernetes Secrets for HTTPS repository cloning. 
Start the migration with downloading the Git credentials file from S3: ```bash #### Download the git-credentials file aws s3 cp "s3://${SECRETS_BUCKET}/git-credentials" ./.git-credentials chmod 600 ./.git-credentials ``` Create a Kubernetes Secret: ```bash kubectl create secret generic git-https-credentials \ --from-file=.git-credentials=./.git-credentials \ --namespace buildkite #### Clean up the local file rm ./.git-credentials ``` Configure the controller to use Git credentials: ```yaml #### values.yaml config: default-checkout-params: gitCredentialsSecret: secretName: git-https-credentials ``` Or configure per-pipeline: ```yaml #### pipeline.yaml steps: - label: "Build" command: "make build" agents: queue: kubernetes plugins: - kubernetes: checkout: gitCredentialsSecret: secretName: git-https-credentials ``` ###### Migrating individual secret files Individual secret files stored in S3 at `/secret-files/*` can be migrated to Kubernetes Secrets. These files become environment variables with names derived from their filenames. 
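The mapping is one-to-one: a file named `DATABASE_PASSWORD` becomes an environment variable `DATABASE_PASSWORD` whose value is the file's contents. A minimal sketch of that convention in plain bash (the directory and sample file here are stand-ins, not part of the migration itself):

```shell
# Demonstrate the filename -> environment variable convention used for
# secret files: each file's name becomes the variable name, and its
# contents become the value. The directory and file are illustrative.
mkdir -p ./secret-files
printf 's3cr3t' > ./secret-files/DATABASE_PASSWORD

for f in ./secret-files/*; do
  name="$(basename "$f")"
  export "$name=$(cat "$f")"
done

echo "$DATABASE_PASSWORD"   # prints: s3cr3t
```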
Start the migration with downloading secret files from S3: ```bash #### Download all secret files aws s3 sync "s3://${SECRETS_BUCKET}/secret-files/" ./secret-files/ #### View downloaded files ls ./secret-files/ ``` Create Kubernetes Secrets for each file: ```bash #### Create a secret for DATABASE_PASSWORD kubectl create secret generic database-password \ --from-file=DATABASE_PASSWORD=./secret-files/DATABASE_PASSWORD \ --namespace buildkite #### Create a secret for API_TOKEN kubectl create secret generic api-token \ --from-file=API_TOKEN=./secret-files/API_TOKEN \ --namespace buildkite #### Clean up local files rm -rf ./secret-files ``` Expose individual secrets as environment variables: ```yaml #### values.yaml config: default-checkout-params: envFrom: - secretRef: name: database-password - secretRef: name: api-token default-command-params: envFrom: - secretRef: name: database-password - secretRef: name: api-token ``` Alternatively, create a single Secret containing multiple files: ```bash kubectl create secret generic app-secrets \ --from-file=./secret-files/ \ --namespace buildkite #### Clean up local files rm -rf ./secret-files ``` Expose all secrets from the single Secret: ```yaml #### values.yaml config: default-checkout-params: envFrom: - secretRef: name: app-secrets default-command-params: envFrom: - secretRef: name: app-secrets ``` ###### Migrating pipeline-specific secrets Pipeline-specific secrets stored in S3 at `/{pipeline-slug}/...` can be migrated to pipeline-specific Kubernetes Secrets. 
Start the migration with downloading pipeline-specific secrets: ```bash #### Download pipeline-specific environment file aws s3 cp "s3://${SECRETS_BUCKET}/my-pipeline/env" ./my-pipeline-env #### Download pipeline-specific SSH key aws s3 cp "s3://${SECRETS_BUCKET}/my-pipeline/private_ssh_key" ./my-pipeline-key ``` Create pipeline-specific Kubernetes Secrets: ```bash #### Create Secret for pipeline environment variables kubectl create secret generic my-pipeline-env-vars \ --from-env-file=./my-pipeline-env \ --namespace buildkite #### Create Secret for pipeline SSH key kubectl create secret generic my-pipeline-ssh-key \ --from-file=SSH_PRIVATE_RSA_KEY=./my-pipeline-key \ --namespace buildkite #### Clean up local files rm ./my-pipeline-env ./my-pipeline-key ``` Configure pipeline-specific secrets in pipeline YAML: ```yaml #### pipeline.yaml steps: - label: "Build my-pipeline" command: "make build" agents: queue: kubernetes plugins: - kubernetes: gitEnvFrom: - secretRef: name: my-pipeline-ssh-key podSpecPatch: containers: - name: container-0 envFrom: - secretRef: name: my-pipeline-env-vars ``` ###### Considerations Kubernetes Secrets provide native integration with your cluster but require: - Initial migration effort to extract and create all secrets - Kubernetes RBAC configuration for secret access control - Process for secret rotation and updates in Kubernetes - Separate secrets management for each Kubernetes cluster This approach works well when committing fully to Kubernetes-native tooling and when secrets are environment-specific. ##### Migrate to Buildkite secrets This approach migrates S3 secrets to [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets), which provides centralized secrets storage accessible across different agent platforms. 
###### Prerequisites - Buildkite organization with Secrets feature enabled - Cluster configured with Buildkite secrets access - [Agent Stack for Kubernetes](https://github.com/buildkite/agent-stack-k8s) installed ###### Migrating secrets to Buildkite For each secret in S3, create a corresponding Buildkite Secret. Learn how to [create a secret](/docs/pipelines/security/secrets/buildkite-secrets#create-a-secret) in the Buildkite secrets documentation. ###### Using secrets in pipeline YAML Reference Buildkite secrets directly in your pipeline YAML using the `secrets` key: ```yaml #### pipeline.yaml steps: - label: "Deploy" command: "deploy.sh" agents: queue: kubernetes secrets: - API_KEY - DATABASE_PASSWORD ``` The secrets are injected as environment variables with the same name as the secret key. You can also specify custom environment variable names: ```yaml #### pipeline.yaml steps: - label: "Deploy" command: "deploy.sh" agents: queue: kubernetes secrets: MY_API_KEY: API_KEY MY_DB_PASSWORD: DATABASE_PASSWORD ``` ###### Using secrets with the agent CLI Retrieve secrets using the `buildkite-agent secret` [CLI command](/docs/agent/cli/reference/secret) within your build steps: ```yaml #### pipeline.yaml steps: - label: "Deploy with CLI" command: | # Retrieve secret and use it API_KEY=$(buildkite-agent secret get api-key) DATABASE_PASSWORD=$(buildkite-agent secret get database-password) # Use secrets in deployment deploy.sh --api-key="$$API_KEY" --db-password="$$DATABASE_PASSWORD" agents: queue: kubernetes ``` ###### Migrating SSH keys For SSH keys, store the private key content as a Buildkite Secret, and configure it using an agent hook. 
Create a Kubernetes ConfigMap with a `pre-checkout` hook:

```bash
cat > pre-checkout << 'EOF'
#!/bin/bash
set -euo pipefail

# Fetch the private key from Buildkite secrets. The secret key name
# "ssh-private-key" is an example; use the name you created earlier.
SSH_KEY="$(buildkite-agent secret get ssh-private-key || true)"
if [[ -n "${SSH_KEY}" ]]; then
  mkdir -p ~/.ssh
  echo "${SSH_KEY}" > ~/.ssh/id_rsa
  chmod 600 ~/.ssh/id_rsa
  eval $(ssh-agent -s)
  ssh-add ~/.ssh/id_rsa
fi
EOF

kubectl create configmap buildkite-hooks \
  --from-file=pre-checkout=pre-checkout \
  --namespace buildkite
```

Configure the hook in your controller:

```yaml
# values.yaml
config:
  agent-config:
    hooks-path: /buildkite/hooks
    hooksVolume:
      name: buildkite-hooks
      configMap:
        name: buildkite-hooks
        defaultMode: 493 # This is 0755 in octal
```

###### Considerations

Buildkite secrets provide centralized management but require:

- The Buildkite secrets feature enabled for your organization
- Migration of all secrets to the Buildkite platform
- Configuration of access policies for secret access control
- Pipeline updates to reference secrets using the `secrets:` key or the `buildkite-agent secret get` command

This approach works well when using multiple agent platforms (Kubernetes, AWS, on-premises) and when centralized secrets management is preferred.
##### Related resources - [S3 secrets bucket in Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/security#s3-secrets-bucket) - [Git credentials in agent-stack-k8s](/docs/agent/self-hosted/agent-stack-k8s/git-credentials) - [Kubernetes PodSpec in agent-stack-k8s](/docs/agent/self-hosted/agent-stack-k8s/podspec) - [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) - [Using secrets in jobs](/docs/pipelines/security/secrets/buildkite-secrets#use-a-buildkite-secret-in-a-job) - [`buildkite-agent secret` CLI](/docs/agent/cli/reference/secret) - [`elastic-ci-stack-s3-secrets-hooks` repository](https://github.com/buildkite/elastic-ci-stack-s3-secrets-hooks) --- ### Docker daemon access URL: https://buildkite.com/docs/agent/self-hosted/agent-stack-k8s/migrate-from-elastic-ci-stack-for-aws/docker-daemon #### Docker daemon access The [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) includes Docker pre-installed in the instance images. Jobs can execute Docker commands directly using the local Docker daemon at `/var/run/docker.sock` without additional configuration. When migrating to [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s), Docker is not available by default. Kubernetes does not provide a Docker daemon on cluster nodes, so you need to configure Docker access explicitly for jobs that require Docker commands like `docker build` or `docker push`. This guide covers two approaches for providing Docker daemon access in Kubernetes and helps you choose the right approach for your migration scenario. ##### Docker access approaches in Kubernetes When migrating to Kubernetes, you can run a Docker daemon using Docker-in-Docker (DinD) as either a sidecar container for each job [Pod](https://kubernetes.io/docs/concepts/workloads/pods/) or as a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) across cluster nodes. 
Each approach has different characteristics that affect your migration planning: | Consideration | Sidecar Container | DaemonSet | |--------------|-------------------|-----------| | Setup complexity | Low (configured per-pipeline or at controller level) | Medium (requires cluster-level DaemonSet configuration) | | Resource usage | Higher (new daemon per job) | Lower (shared daemon across jobs on the same node) | | Isolation | High (dedicated daemon per job) | Lower (shared daemon on each node) | | Startup time | Slower (daemon starts with each job) | Faster (daemon already running) | | Cluster impact | Minimal (only affects job Pods) | Moderate (runs on all or selected nodes) | | Build cache | Ephemeral (lost after job completes) | Persistent (shared across jobs on the same node) | ##### Using a Docker daemon sidecar container The sidecar approach runs a dedicated Docker daemon container alongside your main job container in the same Pod. This provides complete isolation between jobs, as each job gets its own daemon that is destroyed when the job completes. The [official Docker image](https://hub.docker.com/_/docker) provides a Docker-in-Docker (DinD) variant that runs the Docker daemon. Your main container connects to this daemon over TCP using the `DOCKER_HOST` environment variable. ###### Implementation Add the Docker daemon sidecar to your pipeline using the `kubernetes` plugin: ```yaml #### pipeline.yaml steps: - label: "\:docker\: Build with DinD sidecar" command: | docker build -t myimage:latest . 
docker push myregistry.com/myimage:latest env: DOCKER_HOST: tcp://localhost:2375 agents: queue: kubernetes image: docker:cli plugins: - kubernetes: sidecars: - image: docker:dind command: ["dockerd-entrypoint.sh"] securityContext: privileged: true env: - name: DOCKER_TLS_CERTDIR value: "" ``` ###### Understanding the configuration The sidecar configuration requires several key components: - `DOCKER_HOST` tells the Docker CLI to connect to the daemon at `tcp://localhost:2375` - The `docker:dind` image provides the Docker daemon in the sidecar container - `privileged: true` grants the sidecar elevated privileges needed to run the daemon and create containers - `DOCKER_TLS_CERTDIR` set to an empty string disables TLS authentication between containers in the same Pod ###### Controller-level configuration You can also configure the Docker daemon sidecar at the controller level to apply it to all jobs without modifying individual pipelines: ```yaml #### values.yaml config: pod-spec-patch: containers: - name: container-0 env: - name: DOCKER_HOST value: tcp://localhost:2375 initContainers: - name: dind-sidecar image: docker:dind command: ["dockerd-entrypoint.sh"] restartPolicy: Always securityContext: privileged: true env: - name: DOCKER_TLS_CERTDIR value: "" ``` With this controller-level configuration, all jobs processed by the controller automatically have access to Docker without per-pipeline configuration changes. ###### Using a Unix socket instead of TCP Instead of connecting over TCP, you can configure the daemon to use a Unix socket in a shared volume. This approach provides better security as the socket is not exposed over the network. Use the following configuration: ```yaml #### pipeline.yaml steps: - label: "\:docker\: Build with Unix socket" command: | docker build -t myimage:latest . 
docker push myregistry.com/myimage:latest env: DOCKER_HOST: unix:///var/run/docker.sock agents: queue: kubernetes image: docker:cli plugins: - kubernetes: podSpec: containers: - image: docker:cli volumeMounts: - name: docker-socket mountPath: /var/run volumes: - name: docker-socket emptyDir: {} # Shared volume between containers sidecars: - image: docker:dind command: ["dockerd-entrypoint.sh"] securityContext: privileged: true volumeMounts: - name: docker-socket mountPath: /var/run env: - name: DOCKER_TLS_CERTDIR value: "" ``` This configuration creates a shared [emptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume between the main container and the sidecar, allowing both to access the same Unix socket at `/var/run/docker.sock`. ###### Controller-level configuration with Unix socket You can configure the Unix socket approach at the controller level: ```yaml #### values.yaml config: pod-spec-patch: containers: - name: container-0 env: - name: DOCKER_HOST value: unix:///var/run/docker.sock volumeMounts: - name: docker-socket mountPath: /var/run initContainers: - name: dind-sidecar image: docker:dind command: ["dockerd-entrypoint.sh"] restartPolicy: Always securityContext: privileged: true volumeMounts: - name: docker-socket mountPath: /var/run env: - name: DOCKER_TLS_CERTDIR value: "" volumes: - name: docker-socket emptyDir: {} ``` ###### Considerations for the sidecar approach The sidecar approach maximizes job isolation by running a dedicated Docker daemon for each job. This increases startup time and resource usage per job. Build caches and images are ephemeral and are discarded when jobs complete. Each daemon requires privileged container permissions. This approach works well when strong isolation between jobs is required or when you want to minimize cluster-level configuration changes during migration. 
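Since every job brings up its own daemon, it can also be worth capping the sidecar's resources so a heavy build cannot starve the node. Assuming the sidecar entries accept standard Kubernetes container fields (as in the examples above), a sketch might look like the following; the specific requests and limits are arbitrary examples:

```yaml
# pipeline.yaml (illustrative; resource values are arbitrary examples)
steps:
  - label: "\:docker\: Build with resource-limited DinD sidecar"
    command: docker build -t myimage:latest .
    env:
      DOCKER_HOST: tcp://localhost:2375
    agents:
      queue: kubernetes
    image: docker:cli
    plugins:
      - kubernetes:
          sidecars:
            - image: docker:dind
              command: ["dockerd-entrypoint.sh"]
              securityContext:
                privileged: true
              env:
                - name: DOCKER_TLS_CERTDIR
                  value: ""
              resources:
                requests:
                  cpu: "1"
                  memory: 2Gi
                limits:
                  cpu: "2"
                  memory: 4Gi
```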
For more details about configuring Docker-in-Docker with sidecars, see [Docker-in-Docker container builds](/docs/agent/self-hosted/agent-stack-k8s/dind-container-builds). ##### Using a Docker daemon DaemonSet The DaemonSet approach runs a single Docker daemon on each cluster node, similar to how Docker runs in [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack). Multiple jobs on the same node share the same daemon, which provides better resource efficiency and persistent build caches. ###### Implementation Create a DaemonSet that runs the Docker daemon on each node. This example uses the `buildkite` namespace, but you can use any namespace where your Buildkite jobs run: ```yaml #### docker-dind-daemonset.yaml apiVersion: apps/v1 kind: DaemonSet metadata: name: docker-dind namespace: buildkite # Use the namespace where your jobs run spec: selector: matchLabels: app: docker-dind template: metadata: labels: app: docker-dind spec: containers: - name: dind image: docker:dind command: ["dockerd-entrypoint.sh"] securityContext: privileged: true env: - name: DOCKER_TLS_CERTDIR value: "" - name: DOCKER_HOST value: tcp://0.0.0.0:2375 ports: - containerPort: 2375 protocol: TCP volumeMounts: - name: docker-storage mountPath: /var/lib/docker volumes: - name: docker-storage emptyDir: {} ``` Apply the DaemonSet to your cluster: ```bash kubectl apply -f docker-dind-daemonset.yaml ``` Create a [Service](https://kubernetes.io/docs/concepts/services-networking/service/) to expose the Docker daemon to job Pods: ```yaml #### docker-dind-service.yaml apiVersion: v1 kind: Service metadata: name: docker-dind namespace: buildkite # Must match the DaemonSet namespace spec: selector: app: docker-dind ports: - protocol: TCP port: 2375 targetPort: 2375 type: ClusterIP ``` Apply the Service: ```bash kubectl apply -f docker-dind-service.yaml ``` Configure jobs to connect to the DaemonSet daemon. 
The Service DNS name follows the Kubernetes format `<service-name>.<namespace>.svc.cluster.local`:

```yaml
# pipeline.yaml
steps:
  - label: "\:docker\: Build with DaemonSet"
    command: |
      docker build -t myimage:latest .
      docker push myregistry.com/myimage:latest
    env:
      DOCKER_HOST: tcp://docker-dind.buildkite.svc.cluster.local:2375 # docker-dind service in buildkite namespace
    agents:
      queue: kubernetes
    image: docker:cli
```

###### Controller-level configuration

Configure the Docker daemon connection at the controller level. Update the Service DNS name if you used a different namespace or service name:

```yaml
# values.yaml
config:
  pod-spec-patch:
    containers:
      - name: container-0
        env:
          - name: DOCKER_HOST
            value: tcp://docker-dind.buildkite.svc.cluster.local:2375
```

###### Persistent storage for build caches

To preserve build caches and images across daemon restarts, configure persistent storage for the DaemonSet:

```yaml
# docker-dind-daemonset.yaml (storage section)
spec:
  template:
    spec:
      containers:
        - name: dind
          # ... other configuration ...
          volumeMounts:
            - name: docker-storage
              mountPath: /var/lib/docker
      volumes:
        - name: docker-storage
          hostPath:
            path: /var/lib/docker-dind
            type: DirectoryOrCreate
```

> 📘 Build cache storage
> When using [hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) for persistent storage, each node maintains its own separate Docker cache. Jobs scheduled on different nodes will not share cached layers.

###### Considerations for the DaemonSet approach

The DaemonSet approach shares a single Docker daemon across all jobs on each node, providing better resource efficiency. Jobs have lower isolation since they share the daemon. Deploying DaemonSets requires cluster-level permissions. Persistent caches can improve performance but need storage management. Daemons run continuously, consuming resources even when idle, and network configuration is more complex than the sidecar approach.
This approach works well when you need to optimize resource usage across many concurrent builds or want to maintain persistent build caches similar to the Elastic CI Stack for AWS. ##### Alternatives to running a Docker daemon If your use case allows, consider alternatives that do not require privileged containers: - [BuildKit](/docs/agent/self-hosted/agent-stack-k8s/buildkit-container-builds) provides enhanced security and performance for building container images. - [Kaniko](/docs/agent/self-hosted/agent-stack-k8s/kaniko-container-builds) builds container images without requiring privileged access. - [Buildah](/docs/agent/self-hosted/agent-stack-k8s/buildah-container-builds) builds OCI-compliant images without a daemon. These alternatives provide better security posture in Kubernetes environments where privileged containers are restricted or discouraged. ##### Security considerations Both approaches require privileged containers to run the Docker daemon. Privileged containers have elevated access to the host system and can pose security risks if compromised. Consider these security practices when running a Docker daemon: - Limit privileged container usage to trusted workloads and environments - Use [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) to restrict daemon access to authorized Pods only - Implement [resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) to prevent resource exhaustion - Regularly update Docker images to include security patches - Consider alternatives like BuildKit or Kaniko for better security For production environments, evaluate whether the Docker CLI compatibility requirement justifies the security implications of privileged containers. 
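For the DaemonSet approach in particular, a Kubernetes NetworkPolicy can implement the "restrict daemon access" practice above. The sketch below assumes your job Pods carry a label you can select on — the `app: buildkite-job` label is hypothetical, so match it to the labels your controller actually applies:

```yaml
# docker-dind-networkpolicy.yaml (illustrative)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: docker-dind-access
  namespace: buildkite
spec:
  # Applies to the DinD DaemonSet Pods created earlier.
  podSelector:
    matchLabels:
      app: docker-dind
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: buildkite-job # hypothetical job Pod label
      ports:
        - protocol: TCP
          port: 2375
```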
##### Related resources - [Docker-in-Docker container builds](/docs/agent/self-hosted/agent-stack-k8s/dind-container-builds) - [BuildKit container builds](/docs/agent/self-hosted/agent-stack-k8s/buildkit-container-builds) - [Kaniko container builds](/docs/agent/self-hosted/agent-stack-k8s/kaniko-container-builds) - [Buildah container builds](/docs/agent/self-hosted/agent-stack-k8s/buildah-container-builds) - [Sidecars in Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s/sidecars) - [Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws) --- ### Overview URL: https://buildkite.com/docs/agent/self-hosted/aws #### Buildkite agents in AWS The Buildkite agent can be run on AWS using Buildkite's Elastic CI Stack for AWS, using a Kubernetes cluster or by installing the agent on your self-managed EC2 instances. On this page, common installation and setup recommendations for different scenarios of using the Buildkite agent on AWS are covered. ##### Using the Elastic CI Stack for AWS The [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) is an autoscaling Buildkite agent cluster that includes Docker, S3, and CloudWatch integration. You can use the Elastic CI Stack for AWS to test Linux or Windows projects, parallelize large test suites, run Docker containers or `docker-compose` integration tests, or perform any AWS ops related tasks. ###### Setup with CloudFormation You can launch the Elastic CI Stack for AWS directly in your AWS account using a CloudFormation template. For setup instructions, see [Setup with CloudFormation](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/setup). ###### Setup with Terraform In addition to using CloudFormation, the Elastic CI Stack for AWS can also be deployed and managed using the Terraform module. For setup instructions, see [Setup with Terraform](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/terraform). 
##### Using the Buildkite Agent Stack for Kubernetes on AWS The Buildkite agent's jobs can be run within a Kubernetes cluster on AWS. Before you start, you will require your own Kubernetes cluster running on AWS. Learn more about this from [Kubernetes on AWS](https://aws.amazon.com/kubernetes/). Once your Kubernetes cluster is running in AWS, you can then set up the [Buildkite Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s) to run in this cluster. Learn more about how to set up the Agent Stack for Kubernetes on the [Installation](/docs/agent/self-hosted/agent-stack-k8s/installation) page of this documentation. ##### Installing the agent on your own AWS instances To run the Buildkite agent on your own AWS EC2 instances, use the installer that matches your EC2 instance operating system: * For Amazon Linux 2 or later, use the [Red Hat/CentOS installer](/docs/agent/self-hosted/install/redhat) * For macOS, use [installing the agent on your own AWS EC2 Mac instances](/docs/agent/self-hosted/aws/self-serve-install/ec2-mac) ##### Using the Elastic CI Stack for AWS for EC2 Mac CloudFormation template [Elastic CI Stack for AWS for EC2 Mac](https://github.com/buildkite/elastic-ci-stack-for-ec2-mac) is an experimental CloudFormation template for an autoscaling macOS Buildkite agent cluster. You can use an Elastic CI Stack for AWS for EC2 Mac deployment to build and test macOS, iOS, iPadOS, tvOS, and watchOS projects. Read the [Auto Scaling EC2 Mac instances](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-mac/setup) documentation for instructions on preparing and deploying this template. --- ### VPC design URL: https://buildkite.com/docs/agent/self-hosted/aws/architecture/vpc #### VPC design for the Elastic CI Stack for AWS Agent orchestration deployments on AWS require a virtual private cloud (VPC) network. 
Your VPC needs to provide routable access to the buildkite.com service so that `buildkite-agent` processes can connect and retrieve the jobs assigned to them. The options are:

* a public subnet, with a route table that has a default route pointing to an [internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html)
* a private subnet, with a route table that has a default route pointing to a [NAT device](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat.html)

Auxiliary services used by the agent or your jobs, such as S3, ECR, or SSM, can be routed over the public internet or through a [VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html).

The [AWS VPC quick start](https://aws.amazon.com/quickstart/architecture/vpc/) provides a template for deploying a 2, 3, or 4 Availability Zone VPC with parameters for whether to create public and private subnets. Once deployed, these subnets can be provided as parameters to agent orchestration templates such as the [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack).

Use your organization's threat model to guide the selection of a solution that balances operational complexity against acceptable risk for your workload.

##### Public subnets only

The most basic VPC subnet design uses only public subnets whose route table's default route points to an internet gateway. Under this design, your EC2 instances or ECS tasks are given a public IPv4 address in order to access the internet directly. You can use [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) to limit traffic and block inbound network connections to your instances.

##### Using private subnets for added security

For an added layer of defence against unwanted inbound connectivity, you can place your instances in a private subnet.
A private subnet provides the greatest level of control when seeking to restrict the inbound and outbound network connections of your agent instances. A private subnet's route table does not grant direct routable access to or from the internet. Instead, a private subnet's default route is pointed to a [NAT instance](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html) or a [NAT gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html). A NAT device lives in the public subnet, and rewrites the private source IP address of any outbound connections to its own public IP address. NAT devices statefully limit response traffic to known outbound network connections, similar to a security group. To diagnose agent instance performance and behaviors, it is common to remotely access an interactive prompt. There are a number of options available for remote access to instances in a private subnet, described in the following sections. ###### AWS Systems Manager Session Manager Installing the [AWS SSM Agent](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html) allows you to initiate sessions on private instances without requiring publicly routable SSH, or adding a VPN gateway to your VPC. > Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager also allows you to comply with corporate policies that require controlled access to instances, strict security practices, and fully auditable logs with instance access details, while still providing end users with one-click cross-platform access to your managed instances. See the [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) documentation for more details. 
###### Bastion instance

The bastion or jump host pattern involves deploying an instance to a public subnet with a publicly routable IP address and a security group that allows external inbound SSH connections. An additional security group restricts SSH access to the private subnet agent instances so that only connections from the bastion instances are allowed. This limits the public surface area of your VPC, but still requires exposing an unmanaged instance to public traffic. Public-facing instances should be patched and updated regularly. The [Linux Bastion Hosts on AWS Quick Start](https://aws.amazon.com/quickstart/architecture/linux-bastion/) provides an example of this pattern.

###### VPN

A client VPN can be used to provide hosts outside your VPC with access to your otherwise non-internet-routable private subnets. [AWS Client VPN](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html) provides a managed client-based VPN. You can control which resources can be accessed through your Client VPN endpoint using [Client VPN authorization](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/client-authorization.html). A VPN can also be combined with bastion instances to provide additional defence in depth, if appropriate or required for your use case.

###### S3 VPC endpoint

A [gateway VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/vpce-gateway.html) can be used to route traffic directly to regional AWS services. Gateway VPC endpoints can also be used to [control access to services](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html), for example restricting which S3 buckets your VPC resources are allowed to access. The AWS VPC Quick Start creates and configures a gateway VPC endpoint for AWS S3. The private subnet route tables are configured to forward traffic for the endpoint's IP prefix list to the endpoint, instead of the NAT gateway.
In-region S3 access from the private subnets will be routed directly over the VPC endpoint, bypassing the NAT gateway. By default, the VPC endpoint has a permissive "Full Access" policy. Should you wish to customize this, or the security group that the endpoint belongs to, create a fork of the CloudFormation template.

---

### Securing your setup

URL: https://buildkite.com/docs/agent/self-hosted/aws/architecture/securing-your-setup

#### Securing your setup

Security is paramount when running CI/CD infrastructure in the cloud. This section outlines essential security practices for Buildkite agent deployments in Elastic CI Stack for AWS, focusing on preventing unauthorized access to AWS resources and implementing proper permission boundaries. These configurations help protect your infrastructure from potential security risks while maintaining the functionality your build processes require.

##### Preventing builds from accessing Amazon EC2 metadata

If you provision infrastructure like databases, Redis, or Amazon SQS using sandboxed AWS permissions, you might want to prevent your builds from accessing the roles that manage that infrastructure. Builds running on an EC2 instance can reach the EC2 metadata API, which means they can obtain credentials for the instance profile role and exercise the same permissions as the instance itself. To avoid this, you need to either prevent builds from accessing the EC2 metadata API, or provide sandboxed AWS credentials for each build and restrict their permissions. There are two main ways to do it:

* Compartmentalizing your Buildkite agents
* Downgrading an instance profile role

If you run all the build steps in Docker containers, take a look at [compartmentalizing your agents](#preventing-builds-from-accessing-amazon-ec2-metadata-restricting-permissions-using-compartmentalization-of-agents).
If you are using Kubernetes for your Buildkite CI, use the [same approach](#preventing-builds-from-accessing-amazon-ec2-metadata-restricting-permissions-using-compartmentalization-of-agents), and also check out [this article](https://github.com/blakestoddard/scaledkite) for more information and inspiration.

###### Restricting permissions using compartmentalization of agents

This approach assumes the use of Elastic CI Stack for AWS. However, these instructions can also be followed using hooks or scripts.

You can divide your Buildkite agents by responsibility. For example, agents building for development environments or release, and agents deploying to staging or production. This helps reflect multiple AWS environments in your Buildkite organization.

To divide the responsibilities and permissions of Buildkite agents and provide the relevant teams with sandboxed IAM permissions for their own microservices, you will need to use a [third-party AWS AssumeRole Buildkite Plugin](https://github.com/cultureamp/aws-assume-role-buildkite-plugin/) for each pipeline. This plugin also takes care of injecting AWS credentials.

To verify that the agent in charge of a job, build, or pipeline is allowed to run it, and assumes only the role it has permission to, you can use a [pre-checkout hook](/docs/agent/hooks) on the agent.

###### Restricting permissions by downgrading an instance profile role

This approach is [suggested by Amazon](https://docs.aws.amazon.com/cli/latest/reference/ec2/replace-iam-instance-profile-association.html) and is helpful if you are not using Elastic CI Stack for AWS. To restrict the permissions of an instance, you can permanently downgrade the instance's profile from a high-permission bootstrap role to a low-permission steady-state role. The high-permission role has a policy that allows replacing the instance profile with the low-permission role, but the low-permission role has no such policy, so the downgrade cannot be reversed from the instance.
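The downgrade itself can be performed with the AWS CLI while the instance still holds the high-permission bootstrap role. A sketch, with placeholder instance, association, and role names:

```shell
# Look up the current instance profile association for this instance
# (the instance ID is a placeholder).
aws ec2 describe-iam-instance-profile-associations \
  --filters Name=instance-id,Values=i-0123456789abcdef0

# Replace the association with the low-permission steady-state profile
# (the association ID and profile name are placeholders).
aws ec2 replace-iam-instance-profile-association \
  --association-id iip-assoc-0123456789abcdef0 \
  --iam-instance-profile Name=steady-state-role
```

Because the steady-state role's policy does not include `ec2:ReplaceIamInstanceProfileAssociation`, the instance cannot re-escalate itself afterwards.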
##### Further tightening the security around EC2 permissions

For added security, you can expire agents after each job. For example, you can:

1. Create a new agent for a pending job
1. Transition the agent to a sandbox role
1. Terminate the agent instance when the agent completes the job

Starting a new EC2 instance for every job trades a small amount of speed in favor of security. However, the Elastic CI Stack for AWS uses a Lambda to start new EC2 instances on demand, and it usually takes around one minute for a typical Linux instance. A larger trade-off here is the need to keep discarding the cache on the machine (for example, pre-fetched and pre-built Docker images) and start anew every time.

If you're less concerned about CI spend, EC2 instance start-up time, and other resources, you can specify a minimum stack size large enough to keep a pool of agents ready to go. This way, you can quickly replace any terminated agent instance with a clean instance. Buildkite uses this approach to secure open-source agent instances, as they could be running untrusted code.

For more information on AWS security practices regarding restricting access to the API in EKS, see [Amazon EKS security best practices](https://docs.aws.amazon.com/eks/latest/userguide/best-practices-security.html).

---

### Recommendations

URL: https://buildkite.com/docs/agent/self-hosted/aws/architecture/recommendations

#### Recommendations

Optimizing your Buildkite agent infrastructure requires balancing performance, cost, and availability based on your team's specific needs and usage patterns. This section provides guidance on sizing your agent pools effectively, helping you avoid both resource waste from over-provisioning and delays from under-provisioning your CI/CD capacity.

##### A note on recommended pool size

There is no exact recommended quantity of agents in a pool. An optimal pool size is the minimum number of available agents you would want to have ready to run jobs instantly.
You can start with one or two extra instances that are always available for running lightweight jobs (for example, pipeline uploads), and you can increase the number of agents per machine so that they can run in parallel.

For organizations where at any given moment there are engineers working (for example, shift-based 24/7 schedules or globally distributed teams), having a large pool of build agents always available makes sense. Otherwise, idly running agents overnight might be a waste of resources.

---

### Overview

URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack

#### Elastic CI Stack for AWS overview

The Buildkite Elastic CI Stack for AWS gives you a private, autoscaling [Buildkite agent](/docs/agent) cluster. You can use the Buildkite Elastic CI Stack for AWS to parallelize large test suites across hundreds of nodes, and to run tests, app deployments, or AWS ops tasks. Each Buildkite Elastic CI Stack for AWS deployment contains an Auto Scaling group and a launch template.

##### Architecture

For an overview of the architecture of the Elastic CI Stack for AWS, see [Architecture](/docs/agent/self-hosted/aws/elastic-ci-stack/architecture).

##### Features

The Buildkite Elastic CI Stack for AWS supports:

* All AWS regions (except China and US GovCloud)
* Linux and Windows operating systems
* Configurable instance size
* Configurable number of Buildkite agents per instance
* Configurable spot instance bid price
* Configurable auto-scaling based on build activity
* Docker and Docker Compose
* Per-pipeline S3 secret storage (with SSE encryption support)
* Docker registry push/pull
* CloudWatch Logs for system and Buildkite agent events
* CloudWatch metrics from the Buildkite API
* Support for stable, beta, or edge Buildkite agent releases
* Multiple stacks in the same AWS Account
* Rolling updates to stack instances to reduce interruption

Most features are supported across both Linux and Windows.
The following table provides details of which features are supported by these operating systems:

Feature | Linux | Windows
--- | --- | ---
Docker | ✅ | ✅
Docker Compose | ✅ | ✅
AWS CLI | ✅ | ✅
S3 Secrets Bucket | ✅ | ✅
ECR Login | ✅ | ✅
Docker Login | ✅ | ✅
CloudWatch Logs Agent | ✅ | ✅
Per-Instance Bootstrap Script | ✅ | ✅
🧑‍🔬 git-mirrors experiment | ✅ | ✅
SSM Access | ✅ | ✅
Instance Storage (NVMe) | ✅ |
SSH Access | ✅ |
Periodic `authorized_keys` Refresh | ✅ |
Periodic Instance Health Check | ✅ |
Git LFS | ✅ |
Additional sudo Permissions | ✅ |
RDP Access | | ✅
Pipeline Signing | ✅ | ✅

###### Required and recommended skills

The Elastic CI Stack for AWS does not require familiarity with the underlying AWS services to deploy it. However, to run builds, some familiarity with the following services is required:

* [AWS CloudFormation](https://aws.amazon.com/cloudformation/) if using the AWS CloudFormation deployment method
* [Terraform](https://developer.hashicorp.com/terraform) if using the Terraform deployment method
* [Amazon EC2](https://aws.amazon.com/ec2/) (to select an EC2 `InstanceTypes` stack parameter appropriate for your workload)
* [Amazon S3](https://aws.amazon.com/s3/) (to copy your git clone secret for cloning and building private repositories)

Elastic CI Stack for AWS provides defaults and pre-configurations suited to most use cases without the need for additional customization. Still, you'll benefit from familiarity with VPCs, availability zones, subnets, and security groups for custom instance networking. For post-deployment diagnostics, deeper familiarity with EC2 is recommended so you can access the instances launched to execute Buildkite Pipelines jobs over SSH or [AWS Systems Manager Sessions](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html).

###### Billable services

Elastic CI Stack for AWS creates its own VPC (virtual private cloud) by default.
Best practice is to set up a separate development AWS account and use role switching and consolidated billing. You can check out this external tutorial for more information on how to ["Delegate Access Across AWS Accounts"](http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html).

The Elastic CI Stack for AWS deploys several billable Amazon services that do not require upfront payment and operate on a pay-as-you-go principle, with the bill proportional to usage.

Buildkite services are billed according to your [plan](https://buildkite.com/pricing).

###### Supported builds

This stack is designed to run your builds in a shared-nothing pattern similar to the [Twelve-Factor App methodology](http://12factor.net):

* Each project should encapsulate its dependencies through Docker and Docker Compose.
* Build pipeline steps should assume no state on the machine (and instead rely on [build meta-data](/docs/pipelines/configure/build-meta-data), [build artifacts](/docs/pipelines/configure/artifacts), or S3).
* Secrets are configured using environment variables exposed using the S3 secrets bucket.

By following these conventions, you get a scalable, repeatable, and source-controlled CI environment that any team within your organization can use.

##### Running your first build

You can use the [bash-parallel-example sample pipeline](https://github.com/buildkite/bash-parallel-example) to test your new autoscaling stack. Click the **Add to Buildkite** button below (or on the [GitHub README](https://github.com/buildkite/bash-parallel-example)):

[](https://buildkite.com/new?template=https://github.com/buildkite/bash-parallel-example)

Click **Create Pipeline**. Depending on your organization's settings, the next step will vary slightly:

* If your organization uses the web-based steps editor (default), your pipeline is now ready for its first build. You can skip to the next step.
* If your organization has been upgraded to the [YAML steps editor](/docs/pipelines/tutorials/pipeline-upgrade), you should see a **Choose a Starting Point** wizard. Select **Pipeline Upload** from the list.

Click **New Build** in the top right and choose a build message (perhaps a little party `:partyparrot:`?).

Once your build is created, head back to the [AWS EC2 Auto Scaling Groups](https://console.aws.amazon.com/ec2/v2/home?#AutoScalingGroups) to watch the Elastic CI Stack for AWS creating new EC2 instances.

Select the **buildkite-AgentAutoScaleGroup-xxxxxxxxxxxx** group and then the **Instances** tab. You'll see instances starting up to run your new build, and after a few minutes they'll transition from **Pending** to **InService**.

Once the instances are ready, they will appear on your Buildkite agents page. Then your build will start running on your new agents.

Congratulations on running your first Elastic CI Stack for AWS build on Buildkite! :tada:

##### Get started with the Elastic CI Stack for AWS

Get started with Buildkite Elastic CI Stack for AWS for:

* Linux and Windows
  - [Setup with CloudFormation](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/setup)
  - [Setup with Terraform](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/terraform)
* Mac
  - [Setup with CloudFormation](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-mac/setup)

---

### Architecture

URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/architecture

#### Architecture of the Elastic CI Stack for AWS

The Elastic CI Stack for AWS provisions and manages the infrastructure required to run a scalable Buildkite agent cluster. This page explains the internal components, resources, and mechanisms that make up the stack.

This diagram illustrates a standard deployment of Elastic CI Stack for AWS.

The primary layout of the stack is built around AWS autoscaling components, with an Auto Scaling group (ASG) as the centerpiece.
The ASG manages the lifecycle of EC2 instances, ensuring that the cluster scales out to meet demand and scales in to save costs. The instances within the ASG are managed via a launch template, which defines the configuration for EC2 instances launched by the ASG, such as the AMI used, the instance type(s) available, security groups, and user data scripts. User data scripts run at boot time on the instance to propagate environment variables and to install any additional tools via bootstrap scripts (which are user provided via input configuration). Once the user data scripts complete, the instance is moved into a healthy state. If they fail, the instance is marked as unhealthy in the ASG and subsequently terminated.

Now that the core architecture has been laid out, let's look into the specifics of the stack.

##### Software stack

The EC2 instances provisioned by the stack run a pre-configured Amazon Machine Image (AMI) based on Amazon Linux 2023. The image comes with a suite of software to support your builds and manage the instance, which can be broken down into four subsections.

###### Core components

- The Buildkite agent - the main component.
- Docker - pre-installed to ensure that any containerized workflows function as intended, such as the [Docker-Compose](https://github.com/buildkite-plugins/docker-compose-buildkite-plugin) and [Docker](https://github.com/buildkite-plugins/docker-buildkite-plugin) Buildkite plugins.
- Git - the Buildkite agent uses Git to check out codebases ahead of builds.

###### AWS integration

- Amazon SSM Agent - enables remote management of instances; the Agent Scaler uses it to stop Buildkite agent processes.
- CloudWatch Agent - for streaming logs to log groups.
- AWS CLI - for interacting with AWS resources during build time; can be used within a pipeline.
- EC2 Instance Connect - can be used to connect to an instance via the AWS Console.
- cfn-bootstrap - helper scripts (`cfn-init`, `cfn-signal`) used within CloudFormation to provision the instance.

###### Helper utilities

- lifecycled - a daemon that listens for Auto Scaling lifecycle hook events on the instance, which trigger the graceful shutdown of the Buildkite agent when an instance is scheduled for termination.
- s3secrets-helper - used to fetch and decrypt secrets from the stack's S3 bucket.
- jq - used throughout scripts within the stack to parse JSON responses efficiently.

###### Buildkite plugins

- docker-login - used for authentication with Docker registries such as ECR.
- ecr - used for streamlining ECR operations.
- secrets - used for setting secrets as environment variables using the aforementioned `s3secrets-helper`.

###### Bootstrap scripts

The stack uses EC2 user data to perform final configuration at boot time. The script for this is constantly evolving, so you will benefit from looking at the [UserData Scripts used in our Terraform Module](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-aws/tree/main/scripts) to get a better idea of what is happening under the hood. For the most part, the user data script is used to pass input configuration from the deployment method, whether that be AWS CloudFormation or Terraform, directly to the runtime of the instance. When a bootstrap script is defined within input configuration, it is run after the initial user data scripts have run, using the [bk-install-elastic-stack.sh](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/main/packer/linux/stack/conf/bin/bk-install-elastic-stack.sh) script.

##### IAM and security

The stack creates several IAM roles to grant access to resources required for the stack to function as intended.
For a detailed breakdown of the specific permissions and JSON policy examples, see [IAM policy examples](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/managing-elastic-ci-stack#using-custom-iam-roles-iam-policy-examples).

Custom IAM roles can be used, depending on how the stack is deployed. For Terraform, all roles created by the stack can be skipped in favour of a custom role. For AWS CloudFormation, an instance role can be provided to allow a shared role across all clusters created. See [Using custom IAM roles](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/managing-elastic-ci-stack#using-custom-iam-roles) for more information.

###### KMS keys

The stack optionally creates an AWS KMS key when the `PipelineSigningKMSKey` (AWS CloudFormation) or `pipeline_signing_kms_key` (Terraform) parameter is selected, to support [pipeline signing](/docs/agent/self-hosted/security/signed-pipelines).

##### Networking

The stack creates its own VPC to handle networking, ensuring agents can reach Buildkite, AWS services, and external services such as GitHub.

###### VPC and subnets

By default, the stack creates a new Virtual Private Cloud (VPC) with the CIDR block `10.0.0.0/16` and two subnets: one using `10.0.1.0/24` and the other using `10.0.2.0/24`. You can also deploy the stack into an existing VPC by providing your own `VpcId` (AWS CloudFormation) or `vpc_id` (Terraform) and `Subnets` (AWS CloudFormation) or `subnets` (Terraform).

###### Security groups

A security group is created and used by the agent instances. By default, it allows all outbound traffic (0.0.0.0/0) and restricts all inbound traffic, which can optionally be set to allow port 22 for SSH access.

###### VPC endpoints

The stack creates VPC endpoints for AWS Systems Manager (SSM) and S3. This allows instances to communicate with these services within the boundary of the VPC, removing the requirement for outbound access to reach them.
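As an example of deploying into an existing VPC, you might pass the `VpcId` and `Subnets` stack parameters when creating the stack with the AWS CLI. This is a sketch; the stack name, VPC ID, and subnet IDs are placeholders, and the comma in the `Subnets` value needs the backslash escape that the CLI's shorthand syntax requires:

```shell
# Deploy the stack into an existing VPC (all IDs are placeholders).
aws cloudformation create-stack \
  --stack-name buildkite \
  --template-url "https://s3.amazonaws.com/buildkite-aws-stack/latest/aws-stack.yml" \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \
  --parameters \
    ParameterKey=VpcId,ParameterValue=vpc-0123456789abcdef0 \
    "ParameterKey=Subnets,ParameterValue=subnet-0aaaaaaaaaaaaaaaa\,subnet-0bbbbbbbbbbbbbbbb"
```

The subnets you provide should satisfy the routing requirements described earlier: either public subnets with an internet gateway route, or private subnets routed through a NAT device.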
##### Scaling mechanism

The stack uses a Lambda-based scaling approach rather than standard AWS target tracking policies. This results in quicker scaling based on Buildkite-specific metrics, as opposed to resource usage.

###### Agent scaler lambda

The `AgentScaler` Lambda function is the main part of the autoscaling logic. It runs on a schedule (every minute with default settings) and adjusts the Auto Scaling group's capacity based on real-time demand from Buildkite. How it works:

1. The Lambda polls the Buildkite API to retrieve the number of scheduled jobs waiting to run and the number of busy agents currently running jobs.
1. Based on these metrics and your stack configuration (minimum size, maximum size, scale-out factor), it calculates the desired number of instances needed.
1. If the desired capacity differs from the current capacity, it updates the Auto Scaling group to scale up or down accordingly.

The polling interval can be configured using the `ScaleInIdlePeriod` (CloudFormation) or `scale_in_idle_period` (Terraform) parameter. A shorter interval means faster response to demand, but may result in more frequent scaling operations. We recommend being careful with this setting, as it could result in instance thrashing when there's a large number of jobs that complete quickly.

###### Scheduled scaling

You can configure scheduled scaling actions to adjust the minimum size of the cluster based on time of day. This is useful for predictable workload patterns, such as scaling up during business hours when builds are most frequent, and scaling down at night or on weekends to reduce costs.
Scheduled scaling is implemented using AWS Auto Scaling Scheduled Actions, which allow you to define:

- A target minimum size for the Auto Scaling group at specific times
- Recurring schedules using cron expressions
- Time zone specifications to ensure schedules match your team's working hours

For example, you might configure a schedule that sets the minimum size to 5 instances at 8:00 AM on weekdays and back to 0 at 6:00 PM. The Agent Scaler Lambda will still handle demand-based scaling above the minimum, but scheduled scaling ensures you have a baseline number of instances ready when you need them.

This works alongside the demand-based scaling provided by the Agent Scaler Lambda. The scheduled actions set the minimum capacity floor, while the Lambda handles real-time scaling based on actual job demand.

##### Lifecycle hooks

The stack uses Auto Scaling lifecycle hooks to ensure graceful termination of agents. Without lifecycle hooks, AWS would immediately terminate instances when scaling in or rebalancing, which would interrupt any running builds and potentially cause failures or data loss. Lifecycle hooks pause the termination process, giving the Buildkite agent time to complete its current job before the instance is destroyed. This is critical for maintaining build reliability and ensuring that your CI/CD pipelines don't experience unexpected interruptions.

###### Instance terminating hook

When an instance is scheduled for termination (due to scaling in or spot instance reclamation), the `instance_terminating` hook pauses the termination process on the `autoscaling:EC2_INSTANCE_TERMINATING` transition. This gives the Buildkite agent time to finish its current job and gracefully shut down before the EC2 instance is destroyed.

The `lifecycled` daemon running on the instance polls for this hook. When detected, it stops the Buildkite agent service, waiting for any running jobs to finish, and then signals the Auto Scaling group to proceed with termination.
The default timeout for this process is 3600 seconds (1 hour), but this is configurable using the `InstanceTerminationGracePeriod` (CloudFormation) or `instance_termination_grace_period` (Terraform) parameter.

##### Lambda functions

The stack deploys several Lambda functions to manage automation and lifecycle events:

###### Agent scaler

The `AgentScaler` Lambda function calculates and applies scaling adjustments to the Auto Scaling group. It's triggered by an EventBridge Schedule that runs every minute (by default), polling the Buildkite API to determine how many instances are needed based on queued jobs and busy agents. This Lambda ensures that the instance count scales based on jobs waiting, as opposed to instances only scaling when resources hit a scaling threshold.

###### Availability zone rebalancing suspender

The `AzRebalancingSuspender` Lambda function disables the `AZRebalance` process on the Auto Scaling group. AWS Auto Scaling normally attempts to balance instances evenly across Availability Zones, which can cause instances to be terminated while running builds. This function prevents that behavior by suspending the rebalancing process, ensuring that instances are only terminated when scaling in or when they become unhealthy. This Lambda is triggered during stack creation or update events.

###### StopBuildkiteAgents

The `StopBuildkiteAgents` Lambda function gracefully stops agents during stack updates or replacements. When the stack is updated, this function scales the old Auto Scaling group to zero and sends an SSM Run Command to running instances, instructing them to stop the `buildkite-agent` service gracefully. This allows current jobs to finish (within a configurable timeout) before the instance is terminated, preventing build interruptions during infrastructure updates. This Lambda is triggered during stack update events.
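To make the agent scaler's demand-based calculation concrete, here is a simplified sketch. This is illustrative only, not the stack's actual algorithm (which also accounts for the scale-out factor, minimum and maximum size, and other configuration); the job counts and agents-per-instance value are made up:

```shell
# Illustrative only: a simplified demand-based capacity calculation.
scheduled_jobs=12    # jobs waiting to run, from the Buildkite API
running_jobs=8       # jobs currently executing
agents_per_instance=4

required_agents=$((scheduled_jobs + running_jobs))
# Round up to a whole number of instances.
desired_capacity=$(( (required_agents + agents_per_instance - 1) / agents_per_instance ))
echo "$desired_capacity"   # prints 5
```

If the desired capacity differs from the Auto Scaling group's current capacity, the Lambda updates the group accordingly.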
##### Storage

The stack creates and manages several S3 buckets for different purposes, from storing secrets to providing audit logs.

###### Secrets bucket

The stack creates a dedicated S3 bucket to store encrypted secrets (such as SSH keys and environment variables) used by the agents. Access to this bucket is restricted using IAM policies, ensuring that only authorized instances can retrieve secrets. The `s3secrets-helper` utility running on agent instances fetches and decrypts secrets from this bucket at run time, making them available to your builds without exposing them in your infrastructure as code.

###### Secrets logging bucket

The stack also creates a bucket for storing access logs from the secrets bucket. This provides an audit trail of all access to your secrets, which supports security compliance and enables troubleshooting. The logs capture details about who accessed the secrets bucket, when they accessed it, and what operations were performed.

###### Lambda bucket

The Lambda bucket handling differs between deployment methods. When using AWS CloudFormation, the stack creates a Lambda bucket to store the Lambda function source code. This is necessary because AWS CloudFormation requires the Lambda code to be stored in an S3 bucket in the same region where you're deploying the stack. When using Terraform, the stack does not create a Lambda bucket. Instead, it retrieves the Lambda function source code directly from a public S3 bucket managed by Buildkite.

###### Artifacts bucket

The stack does not create a bucket for build artifacts by default. You can optionally provide the name of an existing S3 bucket to be used for storing build artifacts. This allows you to use an existing bucket that may already have specific lifecycle policies, versioning, or replication configured according to your organization's requirements.

##### Systems manager parameter store

The stack uses AWS Systems Manager Parameter Store to securely manage agent tokens.
This provides a centralized, encrypted location for sensitive information that instances need at boot time. The Buildkite agent token is stored as a SecureString parameter, which encrypts the token at rest using AWS KMS. When EC2 instances launch, they retrieve this token from Parameter Store and use it to register with Buildkite.

##### Monitoring

The stack provides monitoring through AWS CloudWatch, capturing logs and optionally publishing metrics to help you understand cluster behavior and troubleshoot issues.

###### CloudWatch Logs

The CloudWatch Agent running on each EC2 instance streams logs to Amazon CloudWatch Logs, creating separate log groups for different types of output. This centralized logging approach means you can view agent activity and system events without needing to SSH into instances.

The stack creates several log groups to organize different types of logs, all prefixed with `/buildkite/`. The main log groups include `/buildkite/buildkite-agent` for agent process output (job execution, plugin output, and errors), `/buildkite/system` for operating system messages, `/buildkite/docker-daemon` for Docker-related logs, `/buildkite/lifecycled` for graceful shutdown events, and several others for bootstrap and initialization processes like `/buildkite/cfn-init` and `/buildkite/cloud-init`.

Each EC2 instance creates its own log stream within these log groups, identified by the instance ID. This makes it easy to filter logs for a specific instance when investigating issues. By default, logs are retained indefinitely, but you can configure a retention policy (such as 7, 30, or 90 days) to automatically delete older logs and reduce storage costs. You can search across all logs using CloudWatch Logs Insights to identify patterns or specific error messages.

###### CloudWatch metrics

The `AgentScaler` Lambda publishes custom CloudWatch metrics to the `Buildkite` namespace when enabled.
These metrics track the queue's job counts that the scaling Lambda uses to make scaling decisions: `ScheduledJobsCount` (jobs waiting to be assigned to an agent), `RunningJobsCount` (jobs currently executing), and `WaitingJobsCount` (jobs waiting in the queue). These metrics are published each time the Lambda runs (by default, every minute), giving you visibility into the queue activity that drives scaling decisions.

You can use these metrics to create custom CloudWatch dashboards that visualize your queue's behavior over time, or set up alarms to notify you when certain thresholds are exceeded. For example, you might create an alarm that triggers when `ScheduledJobsCount` remains high for an extended period, indicating that your cluster may not be scaling up quickly enough to meet demand.

---

### Setup with AWS CloudFormation

URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/setup

#### Linux and Windows setup for the Elastic CI Stack for AWS with AWS CloudFormation

This guide leads you through getting started with the [Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws) for Linux and Windows using [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html).

> 📘 Prefer Terraform?
> This guide uses AWS CloudFormation. For the Terraform setup instructions, see the [Terraform setup guide](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/terraform).

The Elastic CI Stack for AWS lets you launch a private, autoscaling [Buildkite agent cluster](/docs/pipelines/security/clusters) in your own AWS account.

> 📘 Get hands-on
> Read on for detailed instructions, or jump straight in:
> [](https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=buildkite&templateURL=https://s3.amazonaws.com/buildkite-aws-stack/latest/aws-stack.yml)

##### Before you start

Most Elastic CI Stack for AWS features are supported on both Linux and Windows.
The following [Amazon Machine Images (AMIs)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) are available by default in all supported regions. The operating system and architecture are selected based on the values provided for the `InstanceOperatingSystem` and `InstanceTypes` parameters: - Amazon Linux 2023 (64-bit x86) - Amazon Linux 2023 (64-bit ARM, Graviton) - Windows Server 2022 (64-bit x86) If you want to use the [AWS CLI](https://aws.amazon.com/cli/) instead, download [`config.json.example`](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/-/config.json.example), rename it to `config.json`, add your Buildkite agent token (and any [other config values](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/main/templates/aws-stack.yml)), and then run the following command: ```bash aws cloudformation create-stack \ --output text \ --stack-name buildkite \ --template-url "https://s3.amazonaws.com/buildkite-aws-stack/latest/aws-stack.yml" \ --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \ --parameters "$(cat config.json)" ``` ##### Launching the stack Go to the [Agents page](https://buildkite.com/organizations/-/agents) on Buildkite and select the **AWS** tab: Select **Launch Stack** :red_button: After selecting **Next**, configure the stack using your Buildkite agent token: Copy the value for the [agent token](/docs/agent/self-hosted/tokens) you'd previously configured for your Buildkite cluster and paste it into the required field on this page. > 📘 > If you don't have your agent token's value, you'll need to [create a new one](/docs/agent/self-hosted/tokens#create-a-token), which you can do from the [**Agents** > **Clusters** > your specific cluster page](https://buildkite.com/organizations/-/agents). Once created, don't forget to copy the agent token's value and save it somewhere secure, as you won't be able to see its value from Buildkite again.
By default, the stack uses a job queue of `default`, but you can specify any other queue name you like. For example, to target a dedicated Windows agent, add the following to your `pipeline.yml` after you've set up your Windows stack: ```yaml steps: - command: echo "hello from windows" agents: queue: "windows" ``` For more information, see the [Queues overview](/docs/agent/queues) page, specifically [Targeting a queue from a pipeline](/docs/agent/queues#targeting-a-queue-from-a-pipeline). Review the parameters; see [Elastic CI Stack for AWS parameters](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/configuration-parameters) for more details. Once you're ready, check these three checkboxes: - I acknowledge that AWS CloudFormation might create IAM resources. - I acknowledge that AWS CloudFormation might create IAM resources with custom names. - I acknowledge that AWS CloudFormation might require the following capability: `CAPABILITY_AUTO_EXPAND` Then select **Create stack**: After creating the stack, Buildkite takes you to the [CloudFormation console](https://console.aws.amazon.com/cloudformation/home). Select the **Refresh** icon in the top right-hand corner of the screen until the stack status is `CREATE_COMPLETE`. You now have a working Elastic CI Stack for AWS ready to run builds! :tada: ##### CloudFormation service role If you want to explicitly specify the actions AWS CloudFormation can perform on your behalf when deploying the Elastic CI Stack for AWS, you can create your stack using an IAM User or Role that has been granted limited permissions, or use an [AWS CloudFormation service role](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-servicerole.html). The Elastic CI Stack for AWS repository contains an experimental [service role template](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/-/templates/service-role.yml).
This template creates an IAM Role and a set of IAM Policies with the IAM Actions necessary to create, update, and delete a CloudFormation Stack created with the Elastic CI Stack for AWS template. The IAM role created by this template is used to create and delete AWS CloudFormation stacks in the test suite, but it is likely that the permissions needed for some stack parameter permutations are missing. This template can be deployed as-is, or used as the basis for your own CloudFormation service role. ###### Deploying the service role template With a copy of the Elastic CI Stack for AWS repository, the service role template can be deployed using the [AWS CLI](https://aws.amazon.com/cli/): ```bash aws cloudformation deploy \ --template-file templates/service-role.yml \ --stack-name buildkite-elastic-ci-stack-service-role \ --capabilities CAPABILITY_IAM ``` Once the stack has been created, the role ARN (Amazon Resource Name) can be retrieved using: ```bash aws cloudformation describe-stacks \ --stack-name buildkite-elastic-ci-stack-service-role \ --query "Stacks[0].Outputs[?OutputKey=='RoleArn'].OutputValue" \ --output text ``` This role ARN can be passed to an `aws cloudformation create-stack` invocation as a value for the `--role-arn` flag.
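Putting these together, here is a sketch of launching the stack with the service role, assuming the `config.json` parameter file from the AWS CLI setup instructions:

```bash
# Look up the service role ARN from the stack created above
ROLE_ARN="$(aws cloudformation describe-stacks \
  --stack-name buildkite-elastic-ci-stack-service-role \
  --query "Stacks[0].Outputs[?OutputKey=='RoleArn'].OutputValue" \
  --output text)"

# Create the Elastic CI Stack, letting CloudFormation assume the limited role
aws cloudformation create-stack \
  --stack-name buildkite \
  --template-url "https://s3.amazonaws.com/buildkite-aws-stack/latest/aws-stack.yml" \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \
  --parameters "$(cat config.json)" \
  --role-arn "$ROLE_ARN"
```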
##### Related content To gain a better understanding of how Elastic CI Stack for AWS works and how to use it most effectively and securely, check out the following resources: - [Running Buildkite agent on AWS](/docs/agent/aws) - [GitHub repo for Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws) - [Configuration parameters for Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/configuration-parameters) - [Using AWS Secrets Manager](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/security#using-aws-secrets-manager-in-the-elastic-ci-stack-for-aws) --- ### Setup with Terraform URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/terraform #### Terraform deployment for the Elastic CI Stack for AWS The Elastic CI Stack for AWS can be deployed using Terraform instead of AWS CloudFormation. > 📘 Prefer AWS CloudFormation? > This guide uses Terraform. For AWS CloudFormation instructions, see the [AWS CloudFormation setup guide](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/setup). ##### Before you start Deploying the Elastic CI Stack for AWS with Terraform requires [Terraform](https://www.terraform.io/downloads) version 1.0 or later and a Buildkite [Agent token](/docs/agent/self-hosted/tokens). For information on getting started with Terraform, see HashiCorp's [Get Started with Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started) tutorial and the [AWS Provider documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) for configuring AWS credentials. The module creates its own VPC by default. To deploy into an existing VPC, set the `vpc_id` and `subnets` variables.
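For example, a sketch of setting these variables in a module block; the VPC and subnet IDs shown are placeholders:

```terraform
module "buildkite_stack" {
  source  = "buildkite/elastic-ci-stack-for-aws/buildkite"
  version = "~> 0.1.0"

  stack_name            = "buildkite"
  buildkite_agent_token = "your-agent-token-here"

  # Deploy into an existing VPC instead of creating a new one
  vpc_id  = "vpc-0123456789abcdef0"
  subnets = ["subnet-0a1b2c3d", "subnet-4e5f6a7b"]
}
```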
##### Deploying the stack Create a `main.tf` file with the following configuration: ```terraform terraform { required_version = ">= 1.0" } module "buildkite_stack" { source = "buildkite/elastic-ci-stack-for-aws/buildkite" version = "~> 0.1.0" stack_name = "buildkite" buildkite_agent_token = "your-agent-token-here" min_size = 0 max_size = 10 } ``` Next, run the following commands to deploy the stack: ```bash terraform init terraform plan terraform apply ``` ##### Configuration The only required variable is `buildkite_agent_token`. For information on creating and managing agent tokens, see [Agent tokens](/docs/agent/self-hosted/tokens). For the complete list of variables and their descriptions, see the [module documentation](https://registry.terraform.io/modules/buildkite/elastic-ci-stack-for-aws/buildkite) on the Terraform Registry or the [configuration parameters](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/configuration-parameters) reference. ##### Example configurations The Terraform module repository includes several example configurations. You can check out the following examples in the [examples directory](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-aws/tree/main/examples): - [Basic](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-aws/tree/main/examples/basic) - [Spot instances](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-aws/tree/main/examples/spot-instances) - [Scheduled scaling](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-aws/tree/main/examples/scheduled-scaling) - [Existing VPC](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-aws/tree/main/examples/existing-vpc) ##### Updating the stack To update to a newer version of the module, update the `version` constraint in your `main.tf`: ```terraform module "buildkite_stack" { source = "buildkite/elastic-ci-stack-for-aws/buildkite" version = "0.1.0" # ... 
your configuration } ``` Then run the following commands: ```bash terraform init -upgrade terraform plan terraform apply ``` The Auto Scaling group will replace instances gradually during the update. Existing builds will complete before instances are terminated, using the [Buildkite agent Scaler](https://github.com/buildkite/buildkite-agent-scaler). ##### Related documentation For more information on configuring and managing the Elastic CI Stack for AWS, see: - [Using AWS Secrets Manager](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/security#using-aws-secrets-manager-in-the-elastic-ci-stack-for-aws) to configure secrets - [Managing the Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/managing-elastic-ci-stack) for operational tasks - [Troubleshooting](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/troubleshooting) for resolving common issues - [Terraform module reference](https://registry.terraform.io/modules/buildkite/elastic-ci-stack-for-aws/buildkite/latest) on the Terraform Registry - [GitHub repository](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-aws) for the module source code --- ### Security URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/security #### Security in the Elastic CI Stack for AWS The [Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws/) repository hasn't been reviewed by security researchers, so exercise caution with what credentials you make available to your builds. The S3 buckets that the Buildkite agent creates for secrets don't allow public access. The stack's default VPC configuration does provide EC2 instances with a public IPv4 address.
If you wish to customize this, the best practice is to create your own VPC and provide values for the [Network Configuration](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/configuration-parameters#network-configuration) parameters: * `VpcId` * `Subnets` * `AvailabilityZones` * `SecurityGroupIds` Anyone with commit access to your codebase (including third-party pull requests if you've enabled them in Buildkite) also has access to your secrets bucket files. Keep in mind the EC2 HTTP metadata server is available from within builds, which means builds act with the same IAM permissions as the instance. ##### Network configuration An Elastic CI Stack for AWS deployment contains an Auto Scaling group and a launch template. Together they boot instances in the default templated public subnet, or, if you have configured them, into a set of VPC subnets. After booting, the Elastic CI Stack for AWS instances require network access to [buildkite.com](https://buildkite.com/buildkite). This access can be provided by booting them in a VPC subnet with a routing table that has Internet connectivity, either directly using an Internet Gateway or indirectly using a NAT Instance or NAT Gateway. By default, the template creates a public subnet VPC for your EC2 instances. The VPC in which your stack's instances are booted can be customized using the `VpcId` and `Subnets` template parameters. If you choose to use a VPC with split public/private subnets, the `AssociatePublicIpAddress` parameter can be used to turn off public IP association for your instances. See the [VPC](/docs/agent/self-hosted/aws/architecture/vpc) documentation for guidance on choosing a VPC layout suitable for your use case. ###### Limiting CloudFormation permissions By default, CloudFormation will operate using the permissions granted to the identity (AWS IAM User or Role) used to create or update a stack.
See [CloudFormation service role](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/setup#cloudformation-service-role) for a listing of the IAM actions required to create, update, and delete a stack with the Elastic CI Stack for AWS template. ###### Default IAM policies You're not required to create any special IAM roles or policies, though the deployment template creates several of these on your behalf. Some optional features depend on additional IAM permissions, should you choose to enable them. For more information, see: * [`buildkite-agent artifact` IAM Permissions](/docs/agent/cli/reference/artifact#using-your-private-aws-s3-bucket-iam-permissions), a policy to allow the Buildkite agent to read/write artifacts to a custom S3 artifact storage location * [`BootstrapScriptUrl` IAM Policy](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/managing-elastic-ci-stack#customizing-instances-with-a-bootstrap-script), a policy to allow the EC2 instances to read an S3-stored `BootstrapScriptUrl` object * Using AWS Secrets Manager to store your Buildkite agent token depends on a resource policy to grant read access to the Elastic CI Stack for AWS roles (the scaling Lambda and EC2 Instance Profile) ###### Key creation You don't need to create keys for the default deployment of Elastic CI Stack for AWS, but you can additionally create: * A KMS key to encrypt the AWS SSM Parameter that stores your Buildkite agent token * A KMS key for S3 SSE protection of secrets and artifacts * An SSH key or other Git credentials to clone private repositories, stored in the S3 secrets bucket and optionally encrypted using S3 SSE Remember that such keys are not intended to be public, and you must not grant public access to them. ##### Sensitive data The following types of sensitive data are present in Elastic CI Stack for AWS: * **Buildkite agent token credential** (`BuildkiteAgentToken`) retrieved from your Buildkite account.
When provided to the deployment template, it is stored in plaintext in AWS SSM Parameter Store (there is no support for creating an encrypted SSM Parameter from CloudFormation). If you need to store it in encrypted form, you can create your own SSM Parameter and provide the `BuildkiteAgentTokenParameterStorePath` value along with `BuildkiteAgentTokenParameterStoreKMSKey` for decrypting it. * **Secrets and artifacts** stored in S3. You can use server-side encryption (SSE) to control access to these objects. * **Instance Storage working data** stored by EC2 instances (git checkouts or any other private resources you decide to retrieve) either on their EBS root disk or on the Instance Storage NVMe drives. The Elastic CI Stack for AWS deployment template does not support configuring EBS encryption. EC2 instance log data is forwarded to CloudWatch Logs, but these logs don't contain sensitive information. ##### Using AWS Secrets Manager in the Elastic CI Stack for AWS The Elastic CI Stack for AWS supports reading a Buildkite agent token from the AWS Systems Manager Parameter Store. The token can be stored in a plaintext parameter, or encrypted with a KMS Key for access control purposes. You can also store your Buildkite agent token using AWS Secrets Manager if you need the advanced functionality it offers over the Parameter Store. For example, AWS Secrets Manager can automatically rotate and revoke secrets using Lambda functions, and replicate secrets across multiple regions in your account. ###### Storing agent tokens To store your Buildkite agent token as an AWS Secrets Manager secret, configure the Elastic CI Stack for AWS's `BuildkiteAgentTokenParameterStorePath` parameter to reference your secret with the special parameter path `/aws/reference/secretsmanager/your_Secrets_Manager_secret_ID`. Parameter Store will transparently fetch the token from AWS Secrets Manager when this parameter is read.
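As a sketch, assuming a hypothetical secret named `buildkite-agent-token`:

```bash
# Store the agent token as a Secrets Manager secret (the name is an example)
aws secretsmanager create-secret \
  --name buildkite-agent-token \
  --secret-string "your-agent-token-here"
```

You would then set `BuildkiteAgentTokenParameterStorePath` to `/aws/reference/secretsmanager/buildkite-agent-token`.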
See the AWS documentation on [Referencing AWS Secrets Manager secrets from Parameter Store parameters](https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-ps-secretsmanager.html) for more details. To ensure your Elastic CI Stack for AWS has access to the secret: * Provide the Key ID (not the alias) used to encrypt the Secrets Manager secret to the `BuildkiteAgentTokenParameterStoreKMSKey` parameter. An IAM policy with `kms:Decrypt` permission for this key is included in the CloudFormation template. * Use the CloudFormation stack's *Resources* tab to find the `AutoscalingLambdaExecutionRole` and `IAMRole` roles, and use their Amazon Resource Names (ARNs) in the policy below. * Secrets Manager will capture a role's Unique ID when saving the resource policy; if you re-create the IAM role, you must save the resource policy again to grant access. * Use the Secrets Manager secret's resource policy to grant `secretsmanager:GetSecretValue` permission to both the instance IAM role and the scaling Lambda IAM Role. ```json { "Version" : "2012-10-17", "Statement" : [ { "Effect" : "Allow", "Principal" : { "AWS" : [ "arn\:aws\:iam::[redacted]:role/buildkite-stack-AutoscalingLambdaExecutionRole", "arn\:aws\:iam::[redacted]:role/buildkite-stack-Role" ] }, "Action" : "secretsmanager:GetSecretValue", "Resource" : "*" } ] } ``` > 📘 Single instance Elastic CI Stacks > If you have set `MinSize` and `MaxSize` parameters equal to 1 in your [Elastic CI Stack parameters](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/configuration-parameters), you only need to set the `IAMRole` in your Secrets Manager secret's resource policy as above. The `AutoscalingLambdaExecutionRole` IAM role and corresponding autoscaling resources are not created in this Elastic CI Stack setup.
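One way to look up the two role names from the command line; a sketch, assuming the stack is named `buildkite-stack`:

```bash
# Print the physical IDs of the two IAM roles referenced in the resource policy
aws cloudformation describe-stack-resources \
  --stack-name buildkite-stack \
  --query "StackResources[?LogicalResourceId=='IAMRole' || LogicalResourceId=='AutoscalingLambdaExecutionRole'].PhysicalResourceId" \
  --output text
```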
###### Multi-region replication It is also possible to replicate your Buildkite agent token to multiple regions using AWS Secrets Manager's [multi-region replication](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create-manage-multi-region-secrets.html). You can then deploy an Elastic CI Stack for AWS to each region and use the Parameter Store reference path to read the secret from the regionally replicated secret. Some additional points to keep in mind when using multi-region replication: * Ensure each region's IAM role has `ssm:GetParameter` permission for the region it will be retrieving the secret from. + By default, the template will grant permission to only the region it is deployed to, limiting the role's utility to the stack's region. This isn't a problem but a caveat to be aware of. Don't expect to use the same role in multiple regions. * Ensure each region's IAM role has `kms:Decrypt` permission for the key used to encrypt the secret in that region. + You can do this by looking up the underlying CMK ID of the Secrets Manager key alias in each region the stack template is deployed to, and providing that value for the `BuildkiteAgentTokenParameterStoreKMSKey` parameter of the stack in that region. * Apply a resource policy to the primary Secrets Manager secret that grants `secretsmanager:GetSecretValue` for each region's IAM role and wait for that to be replicated. Now, changes to the agent token secret (either made by hand or using Automatic Secret Rotation) will be replicated from the primary region to each replica region. The Elastic CI Stack for AWS will only retrieve the Buildkite agent token once, when the instance boots. You should [refresh your Auto Scaling Group instances](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html) after rotating and replicating the secret, and before revoking the old token.
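After rotating and replicating the secret, an instance refresh can be started from the AWS CLI; the Auto Scaling group name below is a placeholder, so find yours in the stack's *Resources* tab:

```bash
# Roll the Auto Scaling group so new instances fetch the rotated token at boot
aws autoscaling start-instance-refresh \
  --auto-scaling-group-name buildkite-stack-AgentAutoScaleGroup
```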
##### S3 secrets bucket The Elastic CI Stack for AWS creates an S3 bucket for you (or uses the one you provide as the `SecretsBucket` parameter). This is where the agent fetches your private SSH keys for source control and environment variables that provide other secrets to your builds. ###### S3 secret paths The following S3 objects are downloaded and processed: * `/env` or `/environment` - a file that contains environment variables in the format `KEY=VALUE` * `/private_ssh_key` - a private SSH key that is added to ssh-agent for your builds * `/git-credentials` - a [git-credentials](https://git-scm.com/docs/git-credential-store#_storage_format) file for git over HTTPS * `/secret-files/*` - individual secret files that are loaded as environment variables ([Individual secret files](#s3-secrets-bucket-individual-secret-files)) * `/{pipeline-slug}/env` or `/{pipeline-slug}/environment` - a file that contains environment variables specific to a pipeline, in the format `KEY=VALUE` * `/{pipeline-slug}/private_ssh_key` - a private SSH key that is added to ssh-agent for your builds, specific to the pipeline * `/{pipeline-slug}/git-credentials` - a [git-credentials](https://git-scm.com/docs/git-credential-store#_storage_format) file for git over HTTPS, specific to a pipeline * `/{pipeline-slug}/secret-files/*` - individual secret files that are loaded as environment variables, specific to a pipeline ([Individual secret files](#s3-secrets-bucket-individual-secret-files)) * When provided, the environment variable `BUILDKITE_PLUGIN_S3_SECRETS_BUCKET_PREFIX` overrides `{pipeline-slug}` These files are encrypted using [AWS KMS](https://aws.amazon.com/kms/). > 🚧 Sourcing of environment variable files > The agent sources files such as `/env` or `/{pipeline-slug}/environment`. It is possible to include a shell script in these files that will be executed by the agent. However, include shell scripts in these files with caution, as they can lead to unexpected behavior.
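For example, to upload a pipeline-specific environment file, assuming a hypothetical pipeline slug `my-app`:

```bash
# Create an environment file in KEY=VALUE format
cat > environment <<'EOF'
DEPLOY_ENV=staging
EOF

# Upload it with KMS encryption, scoped to the my-app pipeline
aws s3 cp --acl private --sse aws:kms environment "s3://${SecretsBucket}/my-app/environment"
rm environment
```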
###### Using your own S3 bucket By default, the Elastic CI Stack for AWS creates a new S3 bucket for secrets. To use an existing S3 bucket instead, specify the following parameters when creating or updating your CloudFormation stack: * `SecretsBucket` - the name of your existing S3 bucket * `SecretsBucketRegion` - the AWS region where your bucket is located (for example, `us-east-1`) When using your own bucket, the Elastic CI Stack for AWS uses it as-is without modifying encryption settings. Your bucket must allow the stack's IAM role to read objects. The Elastic CI Stack for AWS automatically configures the necessary permissions for agents to access the bucket. The `SecretsBucketEncryption` parameter only applies when the Elastic CI Stack for AWS creates a new bucket. When set to `true`, it enforces encryption at rest and in transit on the created bucket. ###### Uploading secrets To generate a private SSH key and upload it, along with an environment file, with KMS encryption to an S3 bucket:

```bash
# generate a deploy key for your project
ssh-keygen -t rsa -b 4096 -f id_rsa_buildkite
pbcopy < id_rsa_buildkite.pub # paste this into your repository's deploy key settings

# upload the private key, encrypted with KMS
aws s3 cp --acl private --sse aws:kms id_rsa_buildkite "s3://${SecretsBucket}/private_ssh_key"
rm id_rsa_buildkite id_rsa_buildkite.pub

# upload an environment file of secrets, encrypted with KMS
aws s3 cp --acl private --sse aws:kms myenv "s3://${SecretsBucket}/env"
rm myenv
```

> 📘 > Currently only the default KMS key for S3 is supported. ###### Individual secret files You can store individual secrets as separate S3 objects under the `/secret-files/` prefix. This approach helps you manage multiple secrets independently from the environment variable files (`/env` or `/environment`). Individual secret files must have a filename that ends with one of the following suffixes: * `_SECRET` * `_SECRET_KEY` * `_PASSWORD` * `_TOKEN` * `_ACCESS_KEY` The filename (without the path) becomes the environment variable name, and the file contents become the environment variable value.
To upload a secret that will be available as `DATABASE_PASSWORD`: ```bash echo "my-database-password" > DATABASE_PASSWORD aws s3 cp --acl private --sse aws:kms DATABASE_PASSWORD "s3://${SecretsBucket}/secret-files/DATABASE_PASSWORD" rm DATABASE_PASSWORD ``` To use pipeline-specific secret files, include the pipeline slug in the path. Replace `{pipeline-slug}` with your actual pipeline slug: ```bash aws s3 cp --acl private --sse aws:kms API_TOKEN "s3://${SecretsBucket}/{pipeline-slug}/secret-files/API_TOKEN" ``` ###### Configuration options for suppressing SSH key warnings By default, if your repository uses SSH for transport (the repository URL starts with `git@`) and no SSH key is found in the secrets bucket, the agent will display a warning message. You can suppress this warning using one of the following methods. Use them when managing SSH keys through alternative means, such as agent hooks or container images. ###### Using a CloudFormation parameter Set the `SecretsPluginSkipSSHKeyNotFoundWarning` parameter to `true` when creating or updating your CloudFormation stack. This configures the warning suppression for all agents in the stack. ###### Using an environment variable Set the `BUILDKITE_PLUGIN_S3_SECRETS_SKIP_SSH_KEY_NOT_FOUND_WARNING` environment variable to `true` in your pipeline configuration or agent environment hook: ```bash BUILDKITE_PLUGIN_S3_SECRETS_SKIP_SSH_KEY_NOT_FOUND_WARNING=true ``` --- ### Managing the Stack URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/managing-elastic-ci-stack #### Managing the Elastic CI Stack for AWS This page describes common tasks for managing the Elastic CI Stack for AWS.
##### Docker registry support If you want to push or pull from registries such as [Docker Hub](https://hub.docker.com/) or [Quay](https://quay.io/), you can use the `environment` hook in your secrets bucket to export the following environment variables: * `DOCKER_LOGIN_USER="the-user-name"` * `DOCKER_LOGIN_PASSWORD="the-password"` * `DOCKER_LOGIN_SERVER=""` - optional. By default, it logs in to Docker Hub. Setting these performs a `docker login` before each pipeline step runs, allowing you to `docker push` to these registries from within your build scripts. If you use [Amazon ECR](https://aws.amazon.com/ecr/), you can set the `ECRAccessPolicy` parameter for the stack to either `readonly`, `poweruser`, or `full` depending on the [access level](http://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr_managed_policies.html) you want your builds to have. You can disable this in individual pipelines by setting `AWS_ECR_LOGIN=false`. If you want to log in to an ECR server on another AWS account, you can set `AWS_ECR_LOGIN_REGISTRY_IDS="id1,id2,id3"`. The AWS ECR options are powered by an embedded version of the [ECR plugin](https://github.com/buildkite-plugins/ecr-buildkite-plugin), so if you require options that aren't listed here, you can disable the embedded version as above and call the plugin directly. See [its README](https://github.com/buildkite-plugins/ecr-buildkite-plugin) for more examples (requires Agent v3.x). ##### Optimizing for slow Docker builds For large legacy applications, the Docker build process might take a long time on new instances. For these cases, it's recommended to create an optimized "builder" stack that doesn't scale down, keeps a warm Docker cache, and is responsible for building and pushing the application to Docker Hub before running the parallel build jobs across your normal CI stack. An example of how to set this up: 1. Create a Docker Hub repository for pushing images to 1.
Update the pipeline's [`environment` hook](/docs/agent/hooks#job-lifecycle-hooks) in your secrets bucket to perform a `docker login` 1. Create a builder stack with its own queue (for example, `elastic-builders`) Here is an example build pipeline based on a production Rails application: ```yaml steps: - name: "\:docker\: :package:" plugins: docker-compose: build: app image-repository: my-docker-org/my-repo agents: queue: elastic-builders - wait - name: ":hammer:" command: ".buildkite/steps/tests" plugins: docker-compose: run: app agents: queue: elastic parallelism: 75 ``` ##### Multiple instances If you need different instance sizes and scaling characteristics for different pipelines, you can create multiple stacks. Each can run on a different [Agent queue](/docs/agent/queues), with its own configuration, or even in a different AWS account. Examples: * A `docker-builders` stack that provides always-on workers with hot Docker caches (see [Optimizing for slow Docker builds](#optimizing-for-slow-docker-builds)) * A `pipeline-uploaders` stack with tiny, always-on instances for lightning-fast `buildkite-agent pipeline upload` jobs. * A `deploy` stack with added credentials and permissions specifically for deployment. ##### Autoscaling If you configure different values for the `MinSize` and `MaxSize` parameters, the stack automatically scales the number of agent instances up and down based on the number of scheduled jobs for its queue. ##### Elastic CI Stack for AWS releases > 📘 Versions prior to v6.0.0 > Per-commit builds for versions prior to v6.0.0, in particular for commits that are ancestors of [419f271](https://github.com/buildkite/elastic-ci-stack-for-aws/commit/419f271b54802c4c8301730bc35b34ed379074c4), were published to: > > ```text > https://s3.amazonaws.com/buildkite-aws-stack/master/${COMMIT}.aws-stack.yml > ``` A main branch release can also be deployed to any of our supported AWS Regions. GitHub branches are also automatically published to a per-branch URL `https://s3.amazonaws.com/buildkite-aws-stack/${BRANCH}/aws-stack.yml`. Branch releases can only be deployed to `us-east-1`.
##### Updating your stack To update your stack to the latest version, use CloudFormation's stack update tools with one of the URLs from the [Elastic CI Stack for AWS releases](#elastic-ci-stack-for-aws-releases) section. To preview changes to your stack before executing them, use a [CloudFormation Change Set](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html). ###### Pause Auto Scaling The CloudFormation template supports zero-downtime deployment when updating. If you are concerned about causing a service interruption during the template update, use the AWS Console to temporarily pause auto scaling. Open the CloudFormation console and select your stack instance. Using the Resources tab, find the `AutoscalingFunction`. Then use the Lambda console to find the function's triggers and disable the trigger rule. Next, find the stack's `AgentAutoScaleGroup` and set the `DesiredCount` to `0`. Once the remaining instances have terminated, deploy the updated stack and undo the manual changes to resume instance auto scaling. ##### Using custom IAM roles You can use an existing IAM role instead of letting the stack create one. This is useful for sharing a role across multiple stacks, or managing IAM roles outside of the stack. To use a custom role, pass a pre-existing role's ARN to the Terraform variable `instance_role_arn`, or the CloudFormation Parameter `InstanceRoleARN`. For the Agent Scaler Lambda, the ASG Process Suspender Lambda, or the Stop Buildkite agents Lambda, you can also provide custom roles using the Terraform variables `scaler_lambda_role_arn`, `asg_process_suspender_role_arn`, and `stop_buildkite_agents_role_arn`. Custom Lambda roles are currently only supported when using Terraform. ###### IAM policy requirements As a baseline, a custom IAM role needs the same permissions the stack would normally create.
At minimum, Buildkite agents need access to: * SSM for agent tokens and instance management * Auto Scaling for instance lifecycle management * AWS CloudWatch for logs and metrics * AWS CloudFormation for stack resource information (AWS CloudFormation-specific) * EC2 for instance metadata The following additional policies may also apply if using additional features: * Amazon S3 access for AWS S3 secrets and custom artifact buckets * KMS for encrypted parameters or pipeline signing * ECR for accessing container images ###### IAM policy examples To get started, we've included the policies that are created via the AWS CloudFormation and Terraform stacks. Some of the resources are generated dynamically when running either of the infrastructure-as-code solutions, so you will need to update them accordingly. ###### Core agent policy The following policy is the minimum requirement for the Elastic CI Stack for AWS: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingInstances", "cloudwatch:PutMetricData", "cloudformation:DescribeStackResource", "ec2:DescribeTags" ], "Resource": "*" }, { "Sid": "TerminateInstance", "Effect": "Allow", "Action": [ "autoscaling:SetInstanceHealth", "autoscaling:TerminateInstanceInAutoScalingGroup" ], "Resource": "arn\:aws\:autoscaling:*:*:autoScalingGroup:*:autoScalingGroupName/YOUR_STACK_NAME-AgentAutoScaleGroup-*" }, { "Sid": "Logging", "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents", "logs:DescribeLogGroups", "logs:DescribeLogStreams", "logs:PutRetentionPolicy" ], "Resource": "*" }, { "Sid": "Ssm", "Effect": "Allow", "Action": [ "ssm:DescribeInstanceProperties", "ssm:ListAssociations", "ssm:PutInventory", "ssm:UpdateInstanceInformation", "ssmmessages:CreateControlChannel", "ssmmessages:CreateDataChannel", "ssmmessages:OpenControlChannel", "ssmmessages:OpenDataChannel", "ec2messages:AcknowledgeMessage", 
"ec2messages:DeleteMessage", "ec2messages:FailMessage", "ec2messages:GetEndpoint", "ec2messages:GetMessages", "ec2messages:SendReply" ], "Resource": "*" }, { "Effect": "Allow", "Action": "ssm:GetParameter", "Resource": "arn\:aws\:ssm:*:*:parameter/YOUR_AGENT_TOKEN_PARAMETER_PATH" } ] } ``` ###### S3 secrets bucket When the [S3 secrets bucket](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/security#s3-secrets-bucket) is enabled, the following statement is required: ```json { "Version": "2012-10-17", "Statement": [ { "Sid": "SecretsBucket", "Effect": "Allow", "Action": [ "s3:Get*", "s3:List*" ], "Resource": [ "arn\:aws\:s3:::YOUR_SECRETS_BUCKET", "arn\:aws\:s3:::YOUR_SECRETS_BUCKET/*" ] } ] } ``` ###### S3 artifacts bucket When using the custom Artifacts Storage in S3, the following statement is required: ```json { "Version": "2012-10-17", "Statement": [ { "Sid": "ArtifactsBucket", "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectVersion", "s3:GetObjectVersionAcl", "s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl", "s3:PutObjectVersionAcl" ], "Resource": [ "arn\:aws\:s3:::YOUR_ARTIFACTS_BUCKET", "arn\:aws\:s3:::YOUR_ARTIFACTS_BUCKET/*" ] } ] } ``` ###### KMS When using KMS keys for signed pipelines or encrypted parameters, the following statement is required: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "kms:Decrypt" ], "Resource": "arn\:aws\:kms:*:*:key/YOUR_KMS_KEY_ID" } ] } ``` ###### Lambda roles When using custom IAM roles for the Agent Scaler Lambda, the ASG Process Suspender Lambda, or the Stop Buildkite agents Lambda, the following additional permissions are required beyond the core agent policy: ```json { "Version": "2012-10-17", "Statement": [ { "Sid": "ScalerLambdaAutoScaling", "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingGroups", "autoscaling:DescribeScalingActivities", "autoscaling:SetDesiredCapacity" ], "Resource": "*" }, { "Sid": 
"ScalerLambdaSSMToken", "Effect": "Allow", "Action": [ "ssm:GetParameter" ], "Resource": "arn\:aws\:ssm:*:*:parameter/YOUR_AGENT_TOKEN_PARAMETER_PATH" }, { "Sid": "AsgProcessSuspender", "Effect": "Allow", "Action": [ "autoscaling:SuspendProcesses" ], "Resource": "*" }, { "Sid": "StopBuildkiteAgentsDescribeAsg", "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingGroups" ], "Resource": "*" }, { "Sid": "StopBuildkiteAgentsModifyAsg", "Effect": "Allow", "Action": [ "autoscaling:UpdateAutoScalingGroup" ], "Resource": "arn\:aws\:autoscaling:*:*:autoScalingGroup:*:autoScalingGroupName/YOUR_STACK_NAME-*" }, { "Sid": "StopBuildkiteAgentsSSMDocument", "Effect": "Allow", "Action": [ "ssm:SendCommand" ], "Resource": "arn\:aws\:ssm:*::document/AWS-RunShellScript" }, { "Sid": "StopBuildkiteAgentsSSMInstances", "Effect": "Allow", "Action": [ "ssm:SendCommand" ], "Resource": "arn\:aws\:ec2:*:*:instance/*", "Condition": { "StringEquals": { "aws:ResourceTag/aws:autoscaling:groupName": "YOUR_ASG_NAME" } } }, { "Sid": "LambdaLogging", "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "arn\:aws\:logs:*:*:log-group:/aws/lambda/YOUR_STACK_NAME-*" } ] } ``` When using Elastic CI mode for the Scaler Lambda, the following additional permissions are also required: ```json { "Version": "2012-10-17", "Statement": [ { "Sid": "ElasticCIModeEC2", "Effect": "Allow", "Action": [ "ec2:DescribeInstances" ], "Resource": "*" }, { "Sid": "ElasticCIModeSSM", "Effect": "Allow", "Action": [ "ssm:SendCommand", "ssm:GetCommandInvocation" ], "Resource": [ "arn\:aws\:ssm:*::document/AWS-RunShellScript", "arn\:aws\:ec2:*:*:instance/*" ] }, { "Sid": "ElasticCIModeTerminate", "Effect": "Allow", "Action": [ "ec2:TerminateInstances" ], "Resource": "arn\:aws\:ec2:*:*:instance/*", "Condition": { "StringEquals": { "ec2:ResourceTag/aws:autoscaling:groupName": "YOUR_ASG_NAME" } } } ] } ``` ###### Trust policy The following is the trust policy 
that is created for all the Elastic CI Stack for AWS instance roles: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "autoscaling.amazonaws.com", "ec2.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] } ``` When using custom IAM roles for the Agent Scaler Lambda, the ASG Process Suspender Lambda, or the Stop Buildkite agents Lambda, the trust policy must also include `lambda.amazonaws.com`: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "autoscaling.amazonaws.com", "ec2.amazonaws.com", "lambda.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] } ``` ###### ECR managed policies For ECR access, the most straightforward approach is to use one of the pre-existing managed policies provided by AWS: * `arn\:aws\:iam:\:aws\:policy/AmazonEC2ContainerRegistryReadOnly` * `arn\:aws\:iam:\:aws\:policy/AmazonEC2ContainerRegistryPowerUser` * `arn\:aws\:iam:\:aws\:policy/AmazonEC2ContainerRegistryFullAccess` ###### CloudFormation configuration When creating a stack with AWS CloudFormation, a role can be passed as an ARN, for example: ```yaml Parameters: InstanceRoleARN: "arn\:aws\:iam::123456789012:role/MyBuildkiteRole" ``` In AWS CloudFormation, IAM role ARNs are limited to a maximum of 10 path segments, for example: ```yaml Parameters: InstanceRoleARN: "arn\:aws\:iam::123456789012:role/a/b/c/d/e/f/g/h/i/j/MyBuildkiteRole" ``` ###### Terraform configuration When using Terraform, there is no limit on the number of path segments that can be used within an ARN. You can pass the value of your IAM role's ARN to `var.instance_role_arn` and get started. 
For Lambda functions, you can provide custom role Amazon Resource Names (ARNs) in `terraform.tfvars`: ```hcl instance_role_arn = "arn\:aws\:iam::123456789012:role/MyBuildkiteRole" scaler_lambda_role_arn = "arn\:aws\:iam::123456789012:role/MyBuildkiteRole" asg_process_suspender_role_arn = "arn\:aws\:iam::123456789012:role/MyBuildkiteRole" stop_buildkite_agents_role_arn = "arn\:aws\:iam::123456789012:role/MyBuildkiteRole" ``` You can use the same role for all resources, or provide different roles for each Lambda function and the EC2 instances. ##### CloudWatch metrics Metrics are calculated every minute from the Buildkite API using a Lambda function. You can view the stack's metrics under **Custom Namespaces** > **Buildkite** within CloudWatch. ##### Reading instance and agent logs Each instance streams file system logs such as `/var/log/messages` and `/var/log/docker` into namespaced AWS log groups. A full list of files and log groups can be found in the relevant [Linux](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/main/packer/linux/stack/conf/cloudwatch-agent/amazon-cloudwatch-agent.json) CloudWatch agent `config.json` file. Within each stream the logs are grouped by instance ID. To debug an agent: 1. Find the instance ID from the agent in Buildkite 2. Go to your **CloudWatch Logs Dashboard** 3. Choose the desired log group 4. Search for the instance ID in the list of log streams ##### Customizing instances with a bootstrap script You can customize your stack's instances by using the `BootstrapScriptUrl` stack parameter to run a script on instance boot. The script executes before the Buildkite agent starts and runs with elevated privileges, making it useful for installing software, configuring settings, or performing other customizations. The stack parameter accepts a URI that specifies the location and retrieval method for your bootstrap script. 
Supported URI schemes include: * S3 object URI (for example, `s3://my-bucket-name/my-bootstrap.sh`) retrieves the script from an S3 bucket using the AWS S3 API. The instance's IAM role must have `s3:GetObject` permission for the specified object. * HTTPS URL (for example, `https://www.example.com/config/bootstrap.sh`) downloads the script using the `curl` command on Linux or `Invoke-WebRequest` on Windows. The URL must be publicly accessible. * Local file path (for example, `file:///usr/local/bin/my-bootstrap.sh`) references a script already present on the instance's filesystem. This is particularly useful when customizing the AMI to include bootstrap scripts. For private S3 objects, you need to create an IAM policy to allow the instances to read the file. The policy should include: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": ["arn\:aws\:s3:::my-bucket-name/my-bootstrap.sh"] } ] } ``` After creating the policy, you must specify the policy's ARN in the `ManagedPolicyARNs` stack parameter. ##### Configuring agent environment variables You can configure environment variables for the Buildkite agent process by using the `AgentEnvFileUrl` stack parameter. These environment variables apply to the agent process itself and are useful for configuring proxy settings, debugging options, or other agent-specific configuration. These variables are _not_ the same as build environment variables, which should be configured in your pipeline. The parameter accepts a URI that specifies the location and retrieval method for an environment file. Supported URI schemes include: * S3 object URI (for example, `s3://my-bucket-name/agent.env`) retrieves the environment file from an S3 bucket using the AWS S3 API. The instance's IAM role must have `s3:GetObject` permission for the specified object. 
* SSM parameter path (for example, `ssm:/buildkite/agent/config`) retrieves environment variables from AWS Systems Manager Parameter Store. The instance's IAM role must have `ssm:GetParameter` permission. All parameters under the specified path are retrieved recursively with decryption enabled for `SecureString` parameters. The last segment of each parameter path becomes the environment variable name in uppercase (for example, `/buildkite/agent/config/http_proxy` becomes `HTTP_PROXY`). * HTTPS URL (for example, `https://www.example.com/config/agent.env`) downloads the environment file using the `curl` command on Linux or `Invoke-WebRequest` on Windows. The URL must be publicly accessible. * Local file path (for example, `file:///etc/buildkite/agent.env`) references an environment file already present on the instance's filesystem. This is useful when customizing the AMI to include environment configuration. The environment file must contain variables in the format `KEY="value"`, with one variable per line. For private S3 objects, you must create an IAM policy to allow the instances to read the file. For SSM parameters, the IAM policy should include `ssm:GetParameter` permission for the specified parameter path. After creating the policy, you must specify the policy's ARN in the `ManagedPolicyARNs` stack parameter. ##### Health monitoring You can assess and monitor the health and proper functioning of the Elastic CI Stack for AWS using a combination of the following tools: * **Auto Scaling group Activity logs** found on the EC2 Auto Scaling dashboard. They display the actions taken by the Auto Scaling group (failures, scale in/out, etc.). * **CloudWatch Metrics**: the Buildkite namespace contains `ScheduledJobsCount`, `RunningJobsCount`, and `WaitingJobsCount` measurements for the Buildkite Queue your Elastic CI Stack for AWS was configured to poll. These numbers are fed to the Auto Scaling group by the scaling Lambda. 
* **CloudWatch Logs** log streams for the Buildkite agent and EC2 Instance system console. --- ### Configuration parameters URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/configuration-parameters #### Configuration parameters The Elastic CI Stack for AWS can be configured using parameters in AWS CloudFormation or variables in Terraform. This page provides a complete reference of all available configuration options. > 📘 Deployment method > If you're using AWS CloudFormation, see the [AWS CloudFormation setup guide](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/setup). If you're using Terraform, see the [Terraform deployment guide](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/terraform). The following tables list all of the available configuration parameters. For CloudFormation deployments, these are parameters in the [`aws-stack.yml` template](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/-/templates/aws-stack.yml). For Terraform deployments, these are variables in the [Terraform module](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-aws). Note that you must provide a value for the Buildkite agent token (CloudFormation: [`BuildkiteAgentTokenParameterStorePath`](#BuildkiteAgentTokenParameterStorePath) or [`BuildkiteAgentToken`](#BuildkiteAgentToken); Terraform: `agent_token_parameter_store_path` or `agent_token`) to use the stack. All other parameters are optional. 
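For example, a minimal CloudFormation parameter configuration (following the `Parameters` examples used elsewhere in this guide; the SSM path shown is a hypothetical placeholder for wherever you store your agent token) might be:

```yaml
Parameters:
  # Hypothetical SSM Parameter Store path containing your Buildkite agent token
  BuildkiteAgentTokenParameterStorePath: "/buildkite/agent-token"
```

Every other parameter can be left at its default.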
##### Available parameters Each entry in the parameter tables pairs a CloudFormation parameter with its corresponding Terraform variable and a description, and records the allowed values, default value, allowed pattern, and minimum and maximum length or value constraints where applicable. --- ### Creating custom AMIs URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/creating-custom-amis #### Creating custom AMIs Custom AMIs help teams ensure that their agents have all required tools and configurations before instance launch. This prevents instances from reverting to the base image state when agents restart, which would lose any manual changes made during run time. Custom [AMIs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) can be used with the Elastic CI Stack for AWS by specifying the `ImageId` parameter. You can use any AMI available to your AWS account. For best results, start with Buildkite's base [Packer](https://developer.hashicorp.com/packer) templates. The Packer templates used to create the default stack images are available in the [packer directory](https://github.com/buildkite/elastic-ci-stack-for-aws/tree/main/packer) of the [Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws) repository. 
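As a sketch, pointing the stack at a custom AMI only requires setting the `ImageId` parameter (the AMI ID below is a hypothetical placeholder):

```yaml
Parameters:
  # Replace with the ID of your custom AMI
  ImageId: "ami-0123456789abcdef0"
```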
##### Requirements To use the Packer templates provided, you will need the following installed on your system: - Docker - Make - AWS CLI The following AWS IAM permissions are required to build custom AMIs using the provided packer templates: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:AttachVolume", "ec2:AuthorizeSecurityGroupIngress", "ec2:CopyImage", "ec2:CreateImage", "ec2:CreateKeyPair", "ec2:CreateSecurityGroup", "ec2:CreateSnapshot", "ec2:CreateTags", "ec2:CreateVolume", "ec2:DeleteKeyPair", "ec2:DeleteSecurityGroup", "ec2:DeleteSnapshot", "ec2:DeleteVolume", "ec2:DeregisterImage", "ec2:DescribeImageAttribute", "ec2:DescribeImages", "ec2:DescribeInstances", "ec2:DescribeInstanceStatus", "ec2:DescribeRegions", "ec2:DescribeSecurityGroups", "ec2:DescribeSnapshots", "ec2:DescribeSubnets", "ec2:DescribeTags", "ec2:DescribeVolumes", "ec2:DetachVolume", "ec2:GetPasswordData", "ec2:ModifyImageAttribute", "ec2:ModifyInstanceAttribute", "ec2:ModifySnapshotAttribute", "ec2:RegisterImage", "ec2:RunInstances", "ec2:StopInstances", "ec2:TerminateInstances" ], "Resource": "*" } ] } ``` You'll also benefit from familiarity with: - [Packer](https://developer.hashicorp.com/packer/docs/intro) - [HashiCorp configuration language (HCL)](https://github.com/hashicorp/hcl?tab=readme-ov-file#hcl) - Bash or PowerShell (depending on the operating system) ##### Creating an image To create a custom AMI, use the provided Packer templates to build new images with your modifications. First, make your changes to the Packer templates, then run the [`Makefile`](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/main/Makefile) in the root directory to begin the build process. 
This [`Makefile`](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/main/Makefile) provides several build targets, each running Packer in a Docker container. By default, all builds target the `us-east-1` region and use your default AWS profile. The `make` command can be prefixed with environment variables (such as `AWS_REGION`, `AWS_PROFILE`, and instance type overrides like `AMD64_INSTANCE_TYPE`) to change the behavior of the build. For example, you could build an AMD64 Linux image in the `eu-west-1` region using a smaller instance type and a specific AWS profile by running: ```bash AMD64_INSTANCE_TYPE="t3.medium" \ AWS_REGION="eu-west-1" \ AWS_PROFILE="assets-profile" \ make packer-linux-amd64.output ``` Once your image build completes, the AMI is stored in your AWS account and the AMI ID is displayed in your terminal output. You can also find the AMI ID in the corresponding output file (such as `packer-linux-amd64.output`). --- ### BuildKit URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/buildkit-container-builds #### BuildKit container builds [BuildKit](https://docs.docker.com/build/buildkit/) is Docker's next-generation build system that provides improved performance, better caching, parallel build execution, and efficient layer management. The Elastic CI Stack for AWS includes [Docker Buildx](https://docs.docker.com/build/concepts/overview/#buildx) ([BuildKit](https://docs.docker.com/build/buildkit/)) pre-installed on recent Linux AMI versions. BuildKit runs as part of the Docker daemon on EC2 instances. > 📘 > Docker Buildx comes pre-installed on recent Elastic CI Stack for AWS AMI versions (starting with version `5.4.0`). If you're using an older AMI version and Buildx is not available, you can either upgrade to the latest AMI version or manually install Buildx following the [Docker Buildx installation documentation](https://docs.docker.com/build/install-buildx/). 
##### Using BuildKit with Elastic CI Stack for AWS BuildKit is available through the `docker buildx build` command, which provides the same interface as `docker build` while leveraging BuildKit's advanced features. The Elastic CI Stack for AWS supports multiple build configurations to match your security and performance requirements. ###### Basic BuildKit build You can use BuildKit through Docker Buildx with default settings, without any additional configuration. For example: ```yaml steps: - label: "\:docker\: BuildKit container build" agents: queue: elastic command: | docker buildx build \ --progress=plain \ --file Dockerfile \ . ``` ###### BuildKit with build cache BuildKit supports efficient layer caching to speed up subsequent builds. By default, BuildKit caches layers in the Docker daemon's data directory (`/var/lib/docker` or `/mnt/ephemeral/docker` if instance storage is enabled). For explicit local cache management within a single job or on long-running agents, you can use the local cache type: ```yaml steps: - label: "\:docker\: BuildKit build with cache" agents: queue: elastic command: | docker buildx build \ --progress=plain \ --file Dockerfile \ --cache-from type=local,src=/tmp/buildkit-cache \ --cache-to type=local,dest=/tmp/buildkit-cache,mode=max \ . ``` The `mode=max` setting exports all build layers to the cache, providing maximum cache reuse for subsequent builds. > 📘 > Local cache directories like `/tmp/buildkit-cache` do not persist across instance terminations in autoscaling environments. For persistent cache across builds on different instances, use the AWS S3 or registry-based [remote cache backends](#customizing-builds-using-remote-cache-backends) instead. 
###### BuildKit with instance storage When [instance storage is enabled](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/configuration-parameters) through the `EnableInstanceStorage` parameter in AWS CloudFormation, Docker stores images and build cache in high-performance NVMe storage at `/mnt/ephemeral/docker`. This significantly improves build performance for I/O-intensive operations. No pipeline YAML configuration changes are required to use BuildKit with instance storage. BuildKit automatically uses the configured Docker data directory. ###### BuildKit with multi-platform builds The Elastic CI Stack for AWS supports building container images for multiple architectures. BuildKit can build images for platforms different from the host architecture (through QEMU emulation). As a result, you can build ARM64 images on x86 instances and vice versa without additional setup. Here is an example of a multi-platform build configuration: ```yaml steps: - label: "\:docker\: Multi-platform build" agents: queue: elastic command: | docker buildx build \ --platform linux/amd64,linux/arm64 \ --progress=plain \ --file Dockerfile \ . ``` For production multi-platform builds, consider using separate agents with native architecture support to avoid emulation overhead, which can significantly impact build performance. ##### Building and pushing to Buildkite Package Registries Buildkite Package Registries provide secure OCI-compliant container image storage integrated with your Buildkite organization. The registry URL format is `packages.buildkite.com/{org.slug}/{registry.slug}`. ###### Authentication with OIDC The recommended authentication method for CI/CD pipelines is [OpenID Connect (OIDC) tokens](/docs/pipelines/security/oidc). OIDC tokens are short-lived, automatically issued by the Buildkite agent, and more secure than static API tokens. 
```yaml steps: - label: "\:docker\: Build and push to Package Registries" agents: queue: elastic env: REGISTRY: "packages.buildkite.com/my-org/my-registry" command: | # Authenticate using OIDC buildkite-agent oidc request-token \ --audience "https://${REGISTRY}" \ --lifetime 300 | docker login ${REGISTRY} \ --username buildkite \ --password-stdin # Build and push docker buildx build \ --tag ${REGISTRY}/myapp:${BUILDKITE_BUILD_NUMBER} \ --tag ${REGISTRY}/myapp:latest \ --push \ --progress=plain \ . ``` > 📘 > OIDC authentication requires configuring an OIDC policy in your registry settings. See the [Package Registries OIDC documentation](/docs/package-registries/security/oidc) for setup instructions. ###### Multi-platform builds Build and push images for multiple architectures to Package Registries: ```yaml steps: - label: "\:docker\: Multi-platform build and push" agents: queue: elastic env: REGISTRY: "packages.buildkite.com/my-org/my-registry" command: | # Authenticate buildkite-agent oidc request-token \ --audience "https://${REGISTRY}" \ --lifetime 300 | docker login ${REGISTRY} \ --username buildkite \ --password-stdin # Build for multiple platforms docker buildx build \ --platform linux/amd64,linux/arm64 \ --tag ${REGISTRY}/myapp:${BUILDKITE_BUILD_NUMBER} \ --push \ --progress=plain \ . 
``` ###### Using Package Registries as cache backend You can store BuildKit cache layers in Package Registries alongside your images: ```yaml steps: - label: "\:docker\: Build with registry cache" agents: queue: elastic env: REGISTRY: "packages.buildkite.com/my-org/my-registry" command: | # Authenticate buildkite-agent oidc request-token \ --audience "https://${REGISTRY}" \ --lifetime 300 | docker login ${REGISTRY} \ --username buildkite \ --password-stdin # Build with cache docker buildx build \ --cache-from type=registry,ref=${REGISTRY}/myapp:cache \ --cache-to type=registry,ref=${REGISTRY}/myapp:cache,mode=max \ --tag ${REGISTRY}/myapp:${BUILDKITE_BUILD_NUMBER} \ --push \ --progress=plain \ . ``` ##### Building and pushing to Amazon ECR The Elastic CI Stack for AWS includes the [ECR Buildkite plugin](https://buildkite.com/resources/plugins/buildkite-plugins/ecr-buildkite-plugin/) for seamless Amazon ECR authentication. The plugin automatically authenticates with ECR before your build runs, allowing you to push images directly. ###### Basic ECR push This example shows a basic ECR push. Replace the placeholder values with your values. ```yaml steps: - label: "\:docker\: Build and push to ECR" agents: queue: elastic env: REGISTRY: "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp" command: | docker buildx build \ --tag ${REGISTRY}:${BUILDKITE_BUILD_NUMBER} \ --tag ${REGISTRY}:latest \ --push \ --progress=plain \ . ``` ###### ECR push with build arguments You can pass build arguments to customize your build based on Buildkite [metadata](/docs/agent/cli/reference/meta-data) or [environment variables](/docs/pipelines/configure/environment-variables#buildkite-environment-variables). 
```yaml steps: - label: "\:docker\: Build with args and push to ECR" agents: queue: elastic command: | docker buildx build \ --build-arg NODE_ENV=production \ --build-arg VERSION=${BUILDKITE_BUILD_NUMBER} \ --build-arg BUILD_URL=$BUILDKITE_BUILD_URL \ --tag 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:${BUILDKITE_BUILD_NUMBER} \ --push \ --progress=plain \ . ``` ###### Cross-account ECR push For pushing to ECR repositories in different AWS accounts, use the [ECR plugin](https://buildkite.com/resources/plugins/buildkite-plugins/ecr-buildkite-plugin/)'s role assumption feature. ```yaml steps: - label: "\:docker\: Build and push to cross-account ECR" agents: queue: elastic plugins: - ecr#v2.11.0: login: true account-ids: "999888777666" region: us-west-2 assume_role: role_arn: "arn\:aws\:iam::999888777666:role/BuildkiteECRAccess" command: | docker buildx build \ --tag 999888777666.dkr.ecr.us-west-2.amazonaws.com/myapp:${BUILDKITE_BUILD_NUMBER} \ --push \ --progress=plain \ . ``` ##### Customizing builds BuildKit provides extensive customization options through the `docker buildx build` command. ###### Targeting specific build stages Multi-stage Dockerfiles can build specific stages using the `--target` flag. For example: ```bash docker buildx build \ --target production \ --tag myapp:production \ --progress=plain \ . ``` ###### Exporting build artifacts BuildKit can export build outputs beyond container images, such as compiled binaries or [build artifacts](/docs/pipelines/configure/artifacts). For example: ```yaml steps: - label: "\:docker\: Export build artifacts" agents: queue: elastic command: | docker buildx build \ --target builder \ --output type=local,dest=./dist \ --progress=plain \ . artifact_paths: - "dist/**/*" ``` This example demonstrates exporting the contents of the `builder` stage to the local `./dist` directory, which can then be uploaded as artifacts. 
###### Using remote cache backends Remote cache backends provide persistent cache storage across builds to speed up container builds across agents running in the Elastic CI Stack for AWS. ###### Registry cache backend Build cache layers can also be stored in a container registry alongside your images. ```yaml steps: - label: "\:docker\: Build with registry cache" agents: queue: elastic plugins: - ecr#v2.11.0: login: true command: | docker buildx build \ --cache-from type=registry,ref=123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp-cache:latest \ --cache-to type=registry,ref=123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp-cache:latest,mode=max \ --tag 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest \ --push \ --progress=plain \ . ``` ###### AWS S3 cache backend AWS S3 buckets can be used to store build cache layers between builds. > 📘 Experimental feature > The S3 cache backend is an [experimental Docker BuildKit feature](https://docs.docker.com/build/cache/backends/s3/). It requires creating a custom buildx builder with a non-default driver (such as `docker-container`), as the default Docker driver does not support AWS S3 cache. ```yaml steps: - label: "\:docker\: Build with S3 cache" agents: queue: elastic command: | # Create builder with docker-container driver if it doesn't exist # This can also be added as a custom `pre-command` hook if ! docker buildx ls | grep -q "^my-custom-builder"; then docker buildx create --name my-custom-builder --driver=docker-container --use --bootstrap else docker buildx use my-custom-builder fi # Build with S3 cache docker buildx build \ --cache-from type=s3,region=us-east-1,bucket=my-buildkit-cache-bucket,name=myapp \ --cache-to type=s3,region=us-east-1,bucket=my-buildkit-cache-bucket,name=myapp,mode=max \ --progress=plain \ . 
# Clean up builder after build # This can also be added as a custom `post-command` hook docker buildx rm my-custom-builder ``` Ensure your Elastic CI Stack for AWS IAM role has appropriate AWS S3 permissions for the cache bucket. AWS credentials are automatically available to the builder through the instance's IAM role. ##### Security considerations BuildKit builds run with the privileges of the Docker daemon on the EC2 instance. Consider these security practices when using BuildKit on the Elastic CI Stack for AWS. ###### Docker user namespace remapping Enable [Docker user namespace remapping](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/configuration-parameters) through the `EnableDockerUserNamespaceRemap` parameter in AWS CloudFormation. This maps the containers to the non-root `buildkite-agent` user, reducing the attack surface if a container is compromised. When user namespace remapping is enabled, Docker containers run as user `100000-165535` (mapped from container UID `0-65535`) on the host, preventing container processes from accessing host resources as root. ###### Secret management Never include secrets directly in Dockerfiles or build arguments, as they may be persisted in image layers or build history. Instead, use BuildKit's `--secret` flag with secrets retrieved from [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) or the Buildkite environment. The Elastic CI Stack for AWS provides isolated Docker configurations per job through the `DOCKER_CONFIG` environment variable, ensuring Docker credentials are not leaked between jobs. ###### Build isolation Each Buildkite job on the Elastic CI Stack for AWS creates its own isolated Docker configuration directory (`$DOCKER_CONFIG`). This isolation prevents credentials and configurations from one job accessing another job's resources, even when multiple jobs run on the same instance. 
After each job completes, the isolated Docker configuration is automatically cleaned up. ###### Image scanning Integrate container image scanning into your pipeline to detect vulnerabilities before deployment. ```yaml steps: - label: "\:docker\: Build image" agents: queue: elastic plugins: - ecr#v2.11.0: login: true command: | docker buildx build \ --tag 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:${BUILDKITE_BUILD_NUMBER} \ --push \ --progress=plain \ . - label: "\:shield\: Scan image" agents: queue: elastic plugins: - ecr#v2.11.0: login: true command: | # Use AWS ECR image scanning aws ecr start-image-scan \ --repository-name myapp \ --image-id imageTag=${BUILDKITE_BUILD_NUMBER} # Or use third-party scanners like Trivy docker run --rm \ aquasec/trivy:latest image \ 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:${BUILDKITE_BUILD_NUMBER} ``` ##### Performance optimization BuildKit provides several features to improve build performance on the Elastic CI Stack for AWS. ###### Enable instance storage Configure the `EnableInstanceStorage` parameter in AWS CloudFormation to use high-performance NVMe storage for Docker data. This provides significantly faster I/O for image pulls, layer extraction, and build cache operations compared to EBS volumes. Instance storage is ephemeral and cleared when instances terminate, making it ideal for temporary build artifacts and cache data. ###### Optimize Dockerfile layer caching Structure Dockerfiles to maximize layer cache reuse: ```dockerfile # syntax=docker/dockerfile:1 FROM node:18-alpine # Install dependencies first (changes infrequently) COPY package.json package-lock.json ./ RUN npm ci --production # Copy application code (changes frequently) COPY . . # Build application RUN npm run build CMD ["node", "dist/index.js"] ``` This ordering ensures dependency installation layers are cached and reused across builds, with only the application code layers rebuilding when source files change. 
###### Use build cache effectively

Configure cache backends appropriate for your build frequency:

- _Local cache_: fastest access, suitable for long-running instances or within a single job.
- _S3 cache_: persistent across instance terminations, good for high-frequency builds.
- _Registry cache_: persistent and no additional infrastructure required, leverages existing container registry.

For autoscaling environments where instances terminate frequently, use S3 or registry cache backends to maintain cache between builds.

Registry cache performance depends on your registry location and network configuration. When using ECR in the same region as your instances, performance is comparable to S3.

###### Parallel multi-stage builds

BuildKit automatically parallelizes independent build stages. Structure Dockerfiles with multiple independent stages to maximize parallelism, for example:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:18 AS frontend-builder
WORKDIR /app/frontend
COPY frontend/package*.json ./
RUN npm ci
COPY frontend/ ./
RUN npm run build

FROM golang:1.23 AS backend-builder
WORKDIR /app/backend
COPY backend/go.mod backend/go.sum ./
RUN go mod download
COPY backend/ ./
RUN go build -o server

FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=frontend-builder /app/frontend/dist /app/static
COPY --from=backend-builder /app/backend/server /app/server
CMD ["/app/server"]
```

The `frontend-builder` and `backend-builder` stages run in parallel, reducing the total build time.

##### Troubleshooting

This section describes common issues with BuildKit on the Elastic CI Stack for AWS and how to resolve them.

###### BuildKit not available

If `docker buildx` commands fail with a "buildx: command not found" message, your instance is running an older AMI version without Buildx pre-installed. Update to the latest Elastic CI Stack for AWS AMI version to get Buildx support.
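If you need to confirm Buildx support from within a running pipeline, a short debugging step along these lines can help. This is an illustrative sketch; the `elastic` queue name follows the earlier examples and may differ in your setup:

```yaml
steps:
  - label: ":hammer: Check Buildx availability"
    agents:
      queue: elastic
    command: |
      # Print the Buildx and Docker daemon versions for diagnosis
      docker buildx version
      docker info --format '{{.ServerVersion}}'
```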
###### Out of disk space

BuildKit builds can consume significant disk space for layers and cache. The Elastic CI Stack for AWS automatically monitors disk usage and prunes Docker resources when space is low. When disk space becomes critically low, the stack fails the current job by default.

Additional AWS CloudFormation parameters are available to control how the stack instance responds when disk space runs out:

- `BuildkitePurgeBuildsOnDiskFull` - set to `true` to automatically purge build directories when disk space is critically low (default: `false`).
- `BuildkiteTerminateInstanceOnDiskFull` - set to `true` to terminate the instance when disk space is critically low, allowing autoscaling to provision a fresh instance (default: `false`).

To prevent disk space issues, consider enabling instance storage or increasing the root volume size through the `RootVolumeSize` parameter in AWS CloudFormation.

###### Build cache not working

If builds don't reuse cache layers as expected, start by verifying your local or remote cache configuration.

For local cache, ensure the cache directory persists between builds:

```bash
ls -la /tmp/buildkit-cache
```

For remote cache (AWS S3 or registry), verify authentication and network access:

```bash
# Test S3 access
aws s3 ls s3://my-buildkit-cache-bucket/

# Test registry access
docker login 123456789012.dkr.ecr.us-east-1.amazonaws.com
```

###### Multi-platform build failures

Multi-platform builds are supported out of the box on the Elastic CI Stack for AWS through pre-configured QEMU emulation. If multi-platform builds fail, common causes include:

- _Memory constraints_: cross-architecture emulation requires additional memory. Ensure your instance type has sufficient memory for emulated builds.
- _Build script compatibility_: some build operations may not work correctly under emulation. Test your build scripts with the target architecture.
- _Performance timeouts_: emulated builds are significantly slower than native builds. Consider increasing timeouts or using native architecture agents for production workloads.

To verify the build works for a specific platform without emulation overhead, use separate agent queues with native architecture instances for each target platform.

###### Secret mount failures

If secrets are not accessible during builds, verify the secret file exists and has correct permissions:

```bash
ls -la /tmp/npmtoken
```

Ensure the Dockerfile uses correct BuildKit secret syntax:

```dockerfile
# syntax=docker/dockerfile:1
RUN --mount=type=secret,id=npmtoken \
    cat /run/secrets/npmtoken
```

The `# syntax=docker/dockerfile:1` directive at the beginning of the Dockerfile is required for BuildKit features like secrets.

---

### Remote BuildKit on EC2

URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/remote-buildkit-ec2

#### Remote BuildKit builders on Elastic CI Stack for AWS

[BuildKit](https://docs.docker.com/build/buildkit/) supports running container builds on a remote daemon. Running builds on a separate instance provides faster CPU, persistent cache storage, and isolation from your pipeline agents. The Buildkite agent coordinates the build while BuildKit executes it on the remote node.

This guide shows you how to provision an Amazon EC2 instance as a dedicated BuildKit builder and connect Elastic CI Stack for AWS agents to it.

> 📘 Local BuildKit builds
> If you want to run BuildKit builds directly on your Elastic CI Stack for AWS agents instead of a remote instance, see [BuildKit container builds](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/buildkit-container-builds).

##### How it works

1. A dedicated EC2 instance runs the BuildKit daemon (`buildkitd`) and exposes a gRPC listener on TCP port `1234`.
1. A BuildKit client (`buildctl`) is installed on Elastic CI Stack for AWS agents, with environment variables configured to target the remote builder.
1. Pipelines call `buildctl build` to build and push container images.

The remote EC2 instance retains the BuildKit cache, so subsequent builds reuse cached layers. Multiple pipelines can share the same builder if you size the instance appropriately.

> 📘 TLS configuration
> This guide configures BuildKit without TLS for simplicity. The BuildKit instance runs in a private VPC with security group rules that restrict access to only your Elastic CI Stack for AWS agents. For additional security guidance, see the [BuildKit TLS documentation](https://github.com/moby/buildkit#expose-buildkit-as-a-tcp-service).

##### Prerequisites

- Elastic CI Stack for AWS deployed with agents in the same VPC, and a security group with access to the BuildKit instance.
- Amazon EC2 instance to run BuildKit (for example, `c5a.large` with a gp3 EBS volume for cache).
- IAM permissions for the instance profile to access ECR (if pushing images).

##### Provision the BuildKit instance

Use Terraform, AWS CloudFormation, or another provisioning tool to launch an EC2 instance with the following characteristics:

- Amazon Linux 2023 (or another supported Linux distribution)
- Attached EBS volume sized for your layer cache (100 GB or more)
- Same VPC as your Elastic CI Stack for AWS instances
- Security group that allows inbound TCP connections on the BuildKit port (default `tcp/1234`) from your Elastic CI Stack for AWS security group

> 🚧 VPC requirement
> The BuildKit instance and Elastic CI Stack for AWS must be in the same VPC. Security group rules that reference other security groups only work within a single VPC.
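If you provision with AWS CloudFormation, the security group rule described above can be expressed as a fragment like the following sketch. `BuildKitSecurityGroup` and `AgentSecurityGroup` are assumed resource names for illustration, not names from the Elastic CI Stack templates:

```yaml
# Illustrative CloudFormation fragment: allow agents to reach the BuildKit port
BuildKitIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref BuildKitSecurityGroup
    IpProtocol: tcp
    FromPort: 1234
    ToPort: 1234
    SourceSecurityGroupId: !Ref AgentSecurityGroup
```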
Install BuildKit and Docker CLI on the instance:

```bash
sudo yum update -y
sudo yum install -y docker
export BUILDKIT_VERSION="v0.13.2"
curl -LO "https://github.com/moby/buildkit/releases/download/${BUILDKIT_VERSION}/buildkit-${BUILDKIT_VERSION}.linux-amd64.tar.gz"
sudo tar -C /usr/local -xzvf "buildkit-${BUILDKIT_VERSION}.linux-amd64.tar.gz"
sudo mkdir -p /var/lib/buildkit
```

Create a systemd unit to manage the BuildKit daemon:

```bash
sudo tee /etc/systemd/system/buildkitd.service <<'EOF'
[Unit]
Description=BuildKit daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/buildkitd --addr tcp://0.0.0.0:1234 --root /var/lib/buildkit
Restart=always

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now buildkitd
```

##### Configure the Elastic CI Stack for AWS agents

On your agents, configure the connection to the remote builder with an agent environment hook. Create a file at `s3://<bucket>/env` with the following content:

```bash
#!/bin/bash
set -euo pipefail

# Configure BuildKit connection
export BUILDKIT_HOST="tcp://<buildkit-instance-ip>:1234"
echo "BuildKit connection configured"
```

Replace `<buildkit-instance-ip>` with the private IP address of your BuildKit EC2 instance. The Elastic CI Stack for AWS automatically sources scripts from the `env` path in the secrets bucket during agent startup.

> 📘 Pipeline-specific configuration
> The `env` hook at `s3://<bucket>/env` applies to all pipelines. To configure BuildKit for specific pipelines only, upload the hook to `s3://<bucket>/<pipeline-slug>/env` instead.

##### Pipeline example

The following example runs a build using the remote BuildKit instance and pushes to Amazon ECR, using the [ECR plugin](https://buildkite.com/resources/plugins/buildkite-plugins/ecr-buildkite-plugin/). For Docker, use the [Docker Login Buildkite plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-login-buildkite-plugin/) or [Docker Image Push Buildkite plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-image-push-buildkite-plugin/):

```yaml
steps:
  - label: ":docker: Build with BuildKit"
    plugins:
      - ecr#v2.11.0:
          login: true
          account-ids:
            - "<account-id>"
          region: "<region>"
    command: |
      set -euo pipefail
      buildctl build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --output type=image,name=<account-id>.dkr.ecr.<region>.amazonaws.com/<image-name>:latest,push=true
```

You will need to replace the registry URL and image name with your target repository.
The `buildctl` command uses the `BUILDKIT_HOST` environment variable set by the environment hook to connect to the remote daemon.

For multi-platform builds, add the `--opt platform=linux/amd64,linux/arm64` flag to the `buildctl build` command.

##### Security considerations

- Network isolation (required): configure the BuildKit security group to only allow inbound `tcp/1234` from your Elastic CI Stack for AWS security group.
- VPC placement (required): run the BuildKit instance in a private subnet with no public IP address. Use VPC endpoints or NAT gateways for outbound internet access if needed.
- Monitoring: use Amazon CloudWatch Logs and metrics to monitor CPU, memory, and disk usage. Set alarms to detect resource exhaustion or unusual activity.
- Access control: limit which pipelines can use the remote builder by restricting the `BUILDKIT_HOST` environment variable to specific pipeline configurations.

##### Troubleshooting

This section covers common issues when setting up remote BuildKit builders.

###### Connection errors

**Issue:** `connection error: desc = "error reading server preface: read tcp ... connection reset by peer"` error.

**Solution:** This error indicates a network connectivity issue or TLS configuration mismatch. To troubleshoot:

- Verify the BuildKit instance is running: `sudo systemctl status buildkitd`
- Confirm the security group allows inbound TCP 1234 from the agent security group
- Test connectivity from an agent: `buildctl debug workers`
- Check BuildKit logs: `sudo journalctl -u buildkitd -n 50`

###### Environment hook errors

**Issue:** `mkdir: cannot create directory '/etc/buildkit': Permission denied` error.

**Solution:** The `env` hook runs as the `buildkite-agent` user and cannot write to `/etc`. Use agent-writable directories like `${HOME}/.buildkit` or configure certificates in the secrets bucket.

###### Build errors

**Issue:** `exporter "registry" could not be found` error.
**Solution:** The `registry` exporter is not available in this BuildKit version. Use `type=image,push=true` instead of `type=registry` in the `--output` flag:

```bash
buildctl build \
  --output type=image,name=<registry>/<image>:tag,push=true
```

###### Cache not reused

**Issue:** Builds don't reuse the cache because the BuildKit root directory is not on the persistent EBS volume.

**Solution:** Ensure the BuildKit root directory (`/var/lib/buildkit`) is on the attached EBS volume and that the daemon service references this directory with the `--root` flag.

###### Version mismatch

**Issue:** Builds fail with protocol or feature errors.

**Solution:** The `buildctl` binary on agents doesn't match the BuildKit daemon version. Confirm both use the same version:

```bash
# On agent
buildctl --version

# On BuildKit instance
buildkitd --version
```

---

### Docker Compose

URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/docker-compose-container-builds

#### Docker Compose builds

The [Docker Compose plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) helps you build and run multi-container Docker applications. You can build and push container images using the Docker Compose plugin on agents that are auto-scaled by the [Buildkite Elastic CI Stack for AWS](/docs/agent/aws).

##### Special considerations regarding Elastic CI Stack for AWS

When running the [Docker Compose plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) within the Buildkite Elastic CI Stack for AWS, consider the following requirements and best practices for successful container builds.

###### Docker daemon access

The Elastic CI Stack for AWS provides EC2 instances with Docker pre-installed and running. Each agent has its own Docker daemon, providing complete isolation between builds without the complexity of Docker-in-Docker or socket mounting.
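Because each agent talks to its own local daemon, plain `docker` CLI commands in a step work without any daemon configuration. An illustrative sketch:

```yaml
steps:
  - label: ":docker: Direct daemon access"
    agents:
      queue: default
    command: |
      # The agent's own Docker daemon answers directly - no DinD setup needed
      docker version
      docker ps
```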
###### Build context and file access

In Elastic CI Stack for AWS, the build context is the checked-out repository on the EC2 agent's filesystem. By default, the [Docker Compose plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) uses the current working directory as the build context.

If your `docker-compose.yml` references files outside the repository directory, ensure they are:

- Included in your repository
- Available through [Buildkite artifact uploads](/docs/agent/cli/reference/artifact#uploading-artifacts) from previous steps
- Accessible via network mounts or external storage

For build caching or sharing artifacts across builds, use:

- Container registry for build cache layers
- Buildkite artifacts for build outputs
- AWS S3 for large artifacts or dependencies

###### Registry authentication

Set up proper authentication for pushing to container registries:

- Use the `docker-login` plugin for standard Docker registries
- Use the `ecr` plugin for AWS ECR (recommended for AWS environments)
- Use the `gcp-workload-identity-federation` plugin for Google Artifact Registry

When pushing services, ensure the `image:` field is set in `docker-compose.yml` to specify the full registry path. For AWS ECR, the Elastic CI Stack for AWS agents can use IAM roles for authentication, eliminating the need to manage credentials manually.

###### Resource allocation

Building container images can be resource-intensive, especially for large applications or when building multiple services. Configure your Elastic CI Stack for AWS agent instance types and other required resources accordingly. Without appropriate resources, builds may fail with Out of Memory (OOM) errors or timeouts.

##### Configuration approaches with the Docker Compose plugin

The Docker Compose plugin supports different workflow patterns for building and pushing container images, each suited to specific use cases in Elastic CI Stack for AWS environments.
###### Push to Buildkite Package Registries

You can push a built image directly to Buildkite Package Registries by using the following example configuration:

```yaml
steps:
  - label: ":docker: Build and push to Buildkite Package Registries"
    agents:
      queue: default
    plugins:
      - docker-login#v3.0.0:
          server: packages.buildkite.com/{org.slug}/{registry.slug}
          username: "${REGISTRY_USERNAME}"
          password-env: "REGISTRY_PASSWORD"
      - docker-compose#v5.12.1:
          build: app
          push:
            - app:packages.buildkite.com/{org.slug}/{registry.slug}/image-name:${BUILDKITE_BUILD_NUMBER}
          cache-from:
            - app:packages.buildkite.com/{org.slug}/{registry.slug}/image-name:cache
          buildkit: true
          buildkit-inline-cache: true
```

###### Basic Docker Compose build

Build the services defined in your `docker-compose.yml` file:

```yaml
steps:
  - label: "Build with Docker Compose"
    agents:
      queue: default
    plugins:
      - docker-compose#v5.12.1:
          build: app
          config: docker-compose.yml
```

This is what a sample `docker-compose.yml` file would look like:

```yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: your-registry.example.com/your-team/app:bk-${BUILDKITE_BUILD_NUMBER}
```

###### Building and pushing with the Docker Compose plugin

Build and push images in a single step:

```yaml
steps:
  - label: ":docker: Build and push"
    agents:
      queue: default
    plugins:
      - docker-compose#v5.12.1:
          build: app
          push: app
```

If you're using a private repository, add authentication:

```yaml
steps:
  - label: ":docker: Build and push"
    agents:
      queue: default
    plugins:
      - docker-login#v3.0.0:
          server: your-registry.example.com
          username: "${REGISTRY_USERNAME}"
          password-env: "REGISTRY_PASSWORD"
      - docker-compose#v5.12.1:
          build: app
          push: app
```

###### Build and push to AWS ECR

Build and push images to AWS ECR using IAM role authentication:

```yaml
steps:
  - label: ":docker: Build and push to ECR"
    agents:
      queue: default
    plugins:
      - ecr#v2.11.0:
          login: true
          account-ids: "123456789012"
          region: us-west-2
      - docker-compose#v5.12.1:
          build: app
          push:
            - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:${BUILDKITE_BUILD_NUMBER}
            - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:latest
          cache-from:
            - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:cache
          buildkit: true
          buildkit-inline-cache: true
```

Corresponding `docker-compose.yml`:

```yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:${BUILDKITE_BUILD_NUMBER}
```

###### Multi-service build with ECR

You can build multiple services and push them to ECR with proper tagging:

```yaml
steps:
  - label: ":docker: Build microservices"
    agents:
      queue: default
    plugins:
      - ecr#v2.11.0:
          login: true
          account-ids: "123456789012"
          region: us-west-2
      - docker-compose#v5.12.1:
          build:
            - frontend
            - backend
            - api
          push:
            - frontend:123456789012.dkr.ecr.us-west-2.amazonaws.com/frontend:${BUILDKITE_BUILD_NUMBER}
            - backend:123456789012.dkr.ecr.us-west-2.amazonaws.com/backend:${BUILDKITE_BUILD_NUMBER}
            - api:123456789012.dkr.ecr.us-west-2.amazonaws.com/api:${BUILDKITE_BUILD_NUMBER}
          cache-from:
            - frontend:123456789012.dkr.ecr.us-west-2.amazonaws.com/frontend:cache
            - backend:123456789012.dkr.ecr.us-west-2.amazonaws.com/backend:cache
            - api:123456789012.dkr.ecr.us-west-2.amazonaws.com/api:cache
          buildkit: true
          buildkit-inline-cache: true
```

##### Customizing the build

Customize your Docker Compose builds by using the Docker Compose plugin's configuration options to control build behavior, manage credentials, and optimize performance.

###### Using build arguments

Pass build arguments to customize image builds at build time.
You can add parameters to Dockerfiles without directly embedding values in the file by using build arguments:

```yaml
steps:
  - label: ":docker: Build with arguments"
    agents:
      queue: default
    plugins:
      - docker-compose#v5.12.1:
          build: app
          args:
            - NODE_ENV=production
            - BUILD_NUMBER=${BUILDKITE_BUILD_NUMBER}
            - API_URL=${API_URL}
```

###### Building specific services

When your `docker-compose.yml` defines multiple services, you can build only the services you need rather than building everything:

```yaml
steps:
  - label: ":docker: Build frontend only"
    agents:
      queue: default
    plugins:
      - docker-compose#v5.12.1:
          build: frontend
          push: frontend
```

###### Using BuildKit features with cache optimization

[BuildKit](https://docs.docker.com/build/buildkit/) provides advanced build features, including build cache optimization. BuildKit's inline cache stores cache metadata in the image itself, enabling cache reuse across different build agents. Here is an example configuration:

```yaml
steps:
  - label: ":docker: Build with BuildKit cache"
    agents:
      queue: default
    plugins:
      - ecr#v2.11.0:
          login: true
          account-ids: "123456789012"
          region: us-west-2
      - docker-compose#v5.12.1:
          build: app
          cache-from:
            - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/app:cache
          buildkit: true
          buildkit-inline-cache: true
          push:
            - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/app:${BUILDKITE_BUILD_NUMBER}
            - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/app:cache
```

###### Using multiple compose files

Combine multiple compose files to create layered configurations. This pattern works well for separating base configuration from environment-specific overrides:

```yaml
steps:
  - label: ":docker: Build with compose file overlay"
    agents:
      queue: default
    plugins:
      - docker-compose#v5.12.1:
          config:
            - docker-compose.yml
            - docker-compose.production.yml
          build: app
          push: app
```

###### Custom image tagging on push

You can push the same image with multiple tags to support different deployment strategies.
This is useful for maintaining both immutable version tags and mutable environment tags:

```yaml
steps:
  - label: ":docker: Push with multiple tags"
    agents:
      queue: default
    plugins:
      - ecr#v2.11.0:
          login: true
          account-ids: "123456789012"
          region: us-west-2
      - docker-compose#v5.12.1:
          build: app
          push:
            - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/app:${BUILDKITE_BUILD_NUMBER}
            - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/app:${BUILDKITE_COMMIT}
            - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/app:latest
            - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/app:${BUILDKITE_BRANCH}
          cache-from:
            - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/app:cache
          buildkit: true
          buildkit-inline-cache: true
```

###### Using SSH agent for private repositories

Enable SSH agent forwarding to access private Git repositories or packages during the build. Use this when Dockerfiles need to clone private dependencies. Example configuration:

```yaml
steps:
  - label: ":docker: Build with SSH access"
    agents:
      queue: default
    plugins:
      - docker-compose#v5.12.1:
          build: app
          ssh: true
```

Your Dockerfile needs to use BuildKit's SSH mount feature:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:18

# Install dependencies from private repository
RUN --mount=type=ssh git clone git@github.com:yourorg/private-lib.git
```

##### Troubleshooting

This section can help you identify and solve issues that might arise when using Docker Compose container builds with Buildkite Pipelines on Elastic CI Stack for AWS.

###### Network connectivity

Network policies, security groups, or DNS configuration issues can restrict EC2 agent networking. As a result, builds may fail with errors like "could not resolve host," "connection timeout," or "unable to pull image" when trying to pull base images from Docker Hub or push to your private registry.
To resolve these issues:

- Verify that your Elastic CI Stack security groups allow outbound HTTPS traffic (port `443`) for registry access
- Check VPC routing and internet gateway configuration
- Verify DNS resolution in your VPC
- Ensure a NAT gateway is configured if agents are in private subnets
- Test registry connectivity from an agent instance using `docker pull` or `docker login`

###### Resource constraints

Docker builds may fail with errors like "signal: killed," "build container exited with code 137," or builds that hang indefinitely and time out. These usually signal insufficient memory or CPU resources allocated to your EC2 agent instances, causing the Linux kernel to kill processes (Out of Memory, or OOM).

To resolve these issues:

- Check CloudWatch metrics for agent instance CPU and memory utilization
- Upgrade to larger instance types (for example, from `c5.large` to `c5.xlarge` or `c5.2xlarge`)
- Monitor build logs for memory-related errors
- Optimize Dockerfiles to reduce resource requirements
- Use multi-stage builds to reduce final image size
- Consider building smaller, more focused images

###### Build cache not working

Docker builds rebuild all layers even when source files haven't changed. This happens when the build cache is not preserved between builds or when cache keys don't match.

To enable build caching with BuildKit:

```yaml
plugins:
  - ecr#v2.11.0:
      login: true
      account-ids: "123456789012"
      region: us-west-2
  - docker-compose#v5.12.1:
      build: app
      cache-from:
        - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/app:cache
      buildkit: true
      buildkit-inline-cache: true
      push:
        - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/app:${BUILDKITE_BUILD_NUMBER}
        - app:123456789012.dkr.ecr.us-west-2.amazonaws.com/app:cache
```

Ensure that the cache image exists in your registry before running the first build, or accept that the initial build will be slower. Subsequent builds will use the cached layers.
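Before depending on `cache-from`, you can confirm the cache tag actually exists in the registry. This is a hedged sketch using the AWS CLI; the repository name, tag, and region are illustrative:

```yaml
steps:
  - label: ":mag: Check cache image"
    agents:
      queue: default
    command: |
      # Warn (rather than fail) when the cache tag is missing
      aws ecr describe-images \
        --repository-name app \
        --image-ids imageTag=cache \
        --region us-west-2 \
        || echo "Cache image missing - the first build will run uncached"
```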
###### Environment variables not available during build

Environment variables from your Buildkite pipeline aren't accessible inside your Dockerfile during the build process. Docker builds are isolated and don't automatically inherit environment variables.

To pass environment variables to the build, use build arguments:

```yaml
plugins:
  - docker-compose#v5.12.1:
      build: app
      args:
        - API_URL=${API_URL}
        - BUILD_NUMBER=${BUILDKITE_BUILD_NUMBER}
        - COMMIT_SHA=${BUILDKITE_COMMIT}
```

Then reference the passed environment variables in your Dockerfile:

```dockerfile
ARG API_URL
ARG BUILD_NUMBER
ARG COMMIT_SHA
RUN echo "Building version ${BUILD_NUMBER} from commit ${COMMIT_SHA}"
```

Note that the `args` option in the Docker Compose plugin passes variables at build time, while the `environment` option passes variables at runtime (for running containers, not building images).

###### Image push failures

Pushing images to registries fails with authentication errors or timeout errors. For authentication failures, ensure credentials are properly configured.
Use the [Docker Login Buildkite plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-login-buildkite-plugin/) before the [Docker Compose Buildkite plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/):

```yaml
plugins:
  - docker-login#v3.0.0:
      server: your-registry.example.com
      username: "${REGISTRY_USERNAME}"
      password-env: "REGISTRY_PASSWORD"
  - docker-compose#v5.12.1:
      build: app
      push: app
```

For AWS ECR, use the ECR plugin, which handles authentication automatically:

```yaml
plugins:
  - ecr#v2.11.0: # For AWS ECR
      login: true
      account-ids: "123456789012"
      region: us-west-2
  - docker-compose#v5.12.1:
      build: app
      push: app
```

Ensure the Elastic CI Stack agent IAM role has the necessary ECR permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ecr:GetAuthorizationToken"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ],
      "Resource": "arn:aws:ecr:region:123456789012:repository/name"
    }
  ]
}
```

For timeout or network failures, enable push retries:

```yaml
plugins:
  - docker-compose#v5.12.1:
      build: app
      push: app
      push-retries: 3
```

###### Agent startup and scaling issues

Builds may fail due to agent startup problems or scaling limitations:

- Agent startup failures - check AWS CloudWatch logs for agent initialization errors.
- Instance availability issues - verify sufficient instance capacity in your AWS region and availability zones.
- IAM permissions issues - ensure the Elastic CI Stack has permissions to launch and manage EC2 instances.
- VPC configuration issues - verify that VPC, subnets, and security groups are correctly configured.

##### Debugging builds

When builds fail or behave in unexpected ways, enable verbose output and disable caching to diagnose the issue.
###### Enable verbose output

Use the `verbose` option in the Docker Compose plugin to see detailed output from Docker Compose operations:

```yaml
steps:
  - label: ":docker: Debug build"
    agents:
      queue: default
    plugins:
      - docker-compose#v5.12.1:
          build: app
          verbose: true
```

The detailed output shows all Docker Compose commands being executed and their full output, helping identify where failures occur.

###### Disable build cache

Disable caching to ensure builds run from scratch, which can reveal caching-related issues:

```yaml
steps:
  - label: ":docker: Build without cache"
    agents:
      queue: default
    plugins:
      - docker-compose#v5.12.1:
          build: app
          no-cache: true
```

###### Test docker-compose locally

Test your `docker-compose.yml` configuration locally before running it in the pipeline:

```bash
# Validate compose file syntax
docker compose config

# Build without the Docker Compose plugin
docker compose build

# Check what images were created
docker images
```

Local execution helps identify issues with the compose configuration itself, separate from pipeline or Elastic CI Stack concerns.

---

### Kaniko

URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/kaniko-container-builds

#### Building Docker images with Kaniko

[Kaniko](https://github.com/GoogleContainerTools/kaniko) builds container images from a Dockerfile without requiring a Docker daemon, making it ideal for CI/CD environments that lack or don't need privileged access. This guide shows you how to use Kaniko with [Buildkite Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) to build and push images directly to [Buildkite Package Registries](/docs/package-registries).

Unlike traditional Docker builds, Kaniko runs as a container and executes each command in your Dockerfile in user space.
This approach eliminates the need for [Docker-in-Docker](https://www.docker.com/resources/docker-in-docker-containerized-ci-workflows-dockercon-2023/) or privileged mode while maintaining full compatibility with standard Dockerfiles. You can authenticate using short-lived [OpenID Connect (OIDC)](https://openid.net/developers/how-connect-works/) tokens (see the [example](#running-kaniko-in-docker-example-pipeline) below), leverage registry-based caching to speed up builds, and push to any [Open Container Initiative (OCI)](https://opencontainers.org/)-compliant container registry.

> 📘 On Kaniko support
> Google has deprecated support for the Kaniko project and no longer publishes new images to `gcr.io/kaniko-project/`. However, [Chainguard has forked the project](https://github.com/chainguard-dev/kaniko) and continues to provide support and create new releases.

##### One-time package registry setup

Create a [Buildkite Package Registry](/docs/package-registries) for container images through the Buildkite web interface:

1. Navigate to your Buildkite organization and select **Package Registries** from the global navigation.
1. Click **New registry**.
1. Provide a name (for example, `my-container-registry`) and an optional description for your registry.
1. Select **OCI Image (Docker)** as the ecosystem type.
1. Assign appropriate team access permissions (select teams that need access to the registry).
1. Click **Create Registry**.
1. Configure an OIDC policy to allow your agents to push images. To do this, select **Settings** > **OIDC Policy** and add the following policy that allows agents to authenticate using OIDC tokens. You'll need to replace `<organization-slug>` and `<pipeline-slug>` in the policy with your Buildkite organization and pipeline slugs.

```yaml
- iss: https://agent.buildkite.com
  scopes:
    - read_packages
    - write_packages
  claims:
    organization_slug: <organization-slug>
    pipeline_slug: <pipeline-slug>
    build_branch: main
```

Note that the `build_branch` claim restricts image pushes to the specified branch.
See [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc) for more configuration options. For more information regarding registries, see [Manage registries](/docs/package-registries/registries/manage).

> 📘 Registry compatibility
> While the example uses [Buildkite Package Registries](/docs/package-registries), Kaniko can work with any OCI-compliant container registry. To use a different registry (for example, Docker Hub, Amazon ECR, Google Container Registry, Azure Container Registry, and so on), adjust the authentication method and the destination URL accordingly.

##### Push using Kaniko

Commit your changes. The step in your pipeline configuration will:

- Build the Docker image using [Kaniko](https://github.com/GoogleContainerTools/kaniko) (no Docker daemon required).
- Push the image directly to [Buildkite Package Registries](/docs/package-registries) using a short-lived OIDC token retrieved by the Buildkite agent.

> 📘 SSH repository requirements
> If your Git repository uses SSH, make sure your [S3 secrets bucket for Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/security#s3-secrets-bucket) contains a `private_ssh_key` at the correct prefix (or switch to HTTPS + `git-credentials`).

##### Running Kaniko in Docker

Kaniko runs inside a Docker container on the Elastic CI Stack for AWS agent, so no Docker-in-Docker, privileged mode, or Docker daemon is required for the build.

###### Kaniko image availability

Google has deprecated support for the Kaniko project and no longer publishes new images to `gcr.io/kaniko-project/`. However, [Chainguard has forked the project](https://github.com/chainguard-dev/kaniko) and continues to provide support and create new releases. There are several options you can choose from to run Kaniko in Docker.
###### Option 1: Google's final published images

You can use Google's final published Kaniko images from June 2025 (publicly available):

- `gcr.io/kaniko-project/executor:v1.24.0`
- `gcr.io/kaniko-project/executor:v1.24.0-debug`

###### Option 2: Chainguard-maintained images

Chainguard builds and publishes images for Kaniko, but access to the images requires a subscription to Chainguard's services:

- `cgr.dev/chainguard/kaniko:latest`
- `cgr.dev/chainguard/kaniko:latest-debug`

> 📘 Image directory reference
> See [Chainguard's image directory](https://images.chainguard.dev/directory/image/kaniko/versions) for the versions and access details.

###### Option 3: Build your own images with the Chainguard fork

If you need to use a specific Kaniko version or a custom configuration, or want to host Kaniko images in your own container registry, you can build your own images by running the following commands:

```bash
# Build the latest Kaniko image from the Chainguard fork
git clone https://github.com/chainguard-dev/kaniko.git
cd kaniko
docker build --target kaniko-executor -t your-registry/kaniko:latest --file deploy/Dockerfile .
docker push your-registry/kaniko:latest

# Build the debug image
docker build --target kaniko-debug -t your-registry/kaniko:debug --file deploy/Dockerfile .
docker push your-registry/kaniko:debug
```

These commands clone the Chainguard Kaniko fork, build both the standard executor and debug images, and push them to your container registry. Note that you will need to update the image references in your pipeline to use your own registry.

###### Example pipeline

Here's a complete example of using Kaniko to build and push a container image to Buildkite Package Registries.

###### Project hierarchy

The example pipeline uses the following project structure to organize the Kaniko build configuration. The `.buildkite` directory contains the pipeline definition and build scripts, while application files remain in the project root.
```text
project-root/
├── .buildkite/
│   ├── pipeline.yml
│   └── steps/
│       └── kaniko.sh
├── Dockerfile
├── package.json
└── app.js
```

###### Pipeline configuration

This step defines a pipeline that builds and pushes a Docker image using Kaniko. It sets the package registry name as an environment variable and runs the Kaniko build script.

```yaml
steps:
  - label: ":whale: Build and push with Kaniko"
    env:
      PACKAGE_REGISTRY_NAME: "my-container-registry"
    commands:
      - bash .buildkite/steps/kaniko.sh
```

###### Kaniko build script

This script builds and pushes a Docker image using Kaniko. It generates an image tag from the commit hash and build number, requests an OIDC token for authentication, creates a Docker config file with the token, runs Kaniko to build and push the image, and then pulls and runs the built image to verify that it works.

```bash
#!/bin/bash
set -euo pipefail

TAG="$(echo "${BUILDKITE_COMMIT:-local}" | cut -c1-12)-${BUILDKITE_BUILD_NUMBER:-0}"
IMG="packages.buildkite.com/${BUILDKITE_ORGANIZATION_SLUG}/${PACKAGE_REGISTRY_NAME}/hello-kaniko:${TAG}"

# Use debug image if KANIKO_DEBUG is set (anchored so values like "none" don't match "on")
if [[ "${KANIKO_DEBUG:-false}" =~ ^(true|on|1)$ ]]; then
  KANIKO="${KANIKO_IMAGE:-gcr.io/kaniko-project/executor:v1.24.0-debug}"
else
  KANIKO="${KANIKO_IMAGE:-gcr.io/kaniko-project/executor:v1.24.0}"
fi

ORG="${BUILDKITE_ORGANIZATION_SLUG}"
REG="${PACKAGE_REGISTRY_NAME}"
REG_URL="packages.buildkite.com/${ORG}/${REG}"

echo "~~~ Configure OIDC token"

# 1) Request a short-lived OIDC token (aud must be the registry URL)
OIDC_TOKEN="$(buildkite-agent oidc request-token \
  --audience "https://packages.buildkite.com/${ORG}/${REG}" \
  --lifetime 300)"

# 2) Write Kaniko's Docker config with the OIDC token.
#    Username must be "buildkite" for OIDC auth.
cat > config.json <<EOF
{
  "auths": {
    "${REG_URL}": {
      "auth": "$(printf '%s:%s' buildkite "${OIDC_TOKEN}" | base64 | tr -d '\n')"
    }
  }
}
EOF

echo "~~~ Build and push image with Kaniko"

# 3) Run Kaniko with the build context and Docker config mounted
docker run --rm \
  -v "${PWD}:/workspace" \
  -v "${PWD}/config.json:/kaniko/.docker/config.json:ro" \
  "${KANIKO}" \
  --context dir:///workspace \
  --dockerfile /workspace/Dockerfile \
  --destination "${IMG}"

echo "~~~ Verify the pushed image"

# 4) Pull and run the image, reusing the same Docker config for registry auth
docker --config "${PWD}" pull "${IMG}"
docker --config "${PWD}" run --rm "${IMG}"
```

> 📘 Docker login is not required
> You don't need `docker login` for this step as it requests a short-lived OIDC token and passes it to Kaniko using a Docker config file.
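For reference, the `auth` value Kaniko reads from the Docker config file is the standard base64-encoded `username:password` pair; for OIDC pushes the username is `buildkite` and the password is the short-lived token. A minimal sketch of the shape of that entry (the token value and registry path below are placeholders):

```shell
TOKEN="example-oidc-token"  # placeholder, not a real token
AUTH="$(printf '%s:%s' buildkite "${TOKEN}" | base64 | tr -d '\n')"

# This is the shape of the config.json entry that the build script writes
printf '{"auths":{"packages.buildkite.com/acme-inc/my-container-registry":{"auth":"%s"}}}\n' "${AUTH}"
```

Decoding the `auth` value back with `base64 -d` should yield `buildkite:<token>`, which is a quick way to debug malformed config files.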
###### Using the published images

After the pipeline completes successfully, your Docker image will be available in your Buildkite Package Registry.

###### Pull and run the image

Once your image is built and pushed to the Package Registry, you can pull and run it using standard Docker commands:

```bash
# Pull the image from your Package Registry
docker pull packages.buildkite.com/acme-inc/my-container-registry/hello-kaniko:abc123-1

# Run the image
docker run --rm packages.buildkite.com/acme-inc/my-container-registry/hello-kaniko:abc123-1
```

###### Push to other registries

If you need to push the image to registries other than Buildkite Package Registries (for example, Docker Hub, AWS ECR, and so on), use the following commands:

```bash
# Tag for your target registry
docker tag packages.buildkite.com/acme-inc/my-container-registry/hello-kaniko:abc123-1 your-registry.example.com/hello-kaniko:abc123-1

# Push to your registry (authenticate with it first if required)
docker push your-registry.example.com/hello-kaniko:abc123-1
```

###### Verifying signed Kaniko images

To ensure the authenticity and integrity of Kaniko images, you can [verify their cryptographic signatures using Cosign](https://docs.sigstore.dev/cosign/verifying/verify/) before using those images in your builds.

###### Verifying Google's deprecated images

If you're using Google's final published images (`gcr.io/kaniko-project/executor:v1.24.0`), you can verify their signatures by running the following script. This script creates Google's public key file and uses Cosign to verify that the Kaniko image signature is authentic and hasn't been tampered with.

```bash
# Verify the Google Kaniko image signature with cosign (public key)
# Paste the Kaniko project's published Cosign public key between the markers
cat > cosign.pub <<'EOF'
-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----
EOF

docker run --rm -v "$PWD:/work" -w /work cgr.dev/chainguard/cosign \
  verify --key cosign.pub gcr.io/kaniko-project/executor:v1.24.0
```

###### Signing your own images

If you build your own Kaniko images, you can sign and verify them with a Cosign key pair stored in Buildkite secrets:

1. Sign your custom Kaniko image using the following commands. These commands retrieve the private signing key from Buildkite secrets and use Cosign to sign the image and push the signature to your registry.

    ```bash
    # Pull the private key from Buildkite secrets (example secret name)
    buildkite-agent secret get "kaniko-signing-private-key" > cosign.key

    # Sign your custom image and push signature to registry
    docker run --rm -v "$PWD:/work" -w /work cgr.dev/chainguard/cosign \
      sign --key cosign.key your-registry/kaniko:latest
    ```

1. Verify your custom Kaniko image before using it, with the following commands.
    These commands retrieve the public signing key from Buildkite secrets and use Cosign to verify that your custom Kaniko image's signature is valid and the image hasn't been modified.

    ```bash
    # Pull the public key from Buildkite secrets
    buildkite-agent secret get "kaniko-signing-public-key" > cosign.pub

    # Verify your custom image
    docker run --rm -v "$PWD:/work" -w /work cgr.dev/chainguard/cosign \
      verify --key cosign.pub your-registry/kaniko:latest
    ```

###### Keyless signing with OIDC

As an alternative, more modern approach to signing, you can use keyless signing with OIDC instead of managing key pairs. This method uses [sigstore.dev](https://sigstore.dev/) as a third-party service for handling the signing process.

> 🚧 Important
> Keyless signing requires authenticating with an OAuth provider (like Google, GitHub, or Microsoft) through [sigstore.dev](https://sigstore.dev/). This means your OAuth identity will be used to create a temporary signing certificate stored in sigstore's public transparency log. Consider your organization's security policies before using this approach.

To implement keyless signing, run the following commands. These commands use Cosign to sign and verify images without managing key pairs. The signing process authenticates with sigstore.dev using OIDC, and verification requires specifying the certificate identity and issuer that were used during signing.

```bash
# Keyless signing (requires OIDC authentication with sigstore.dev)
docker run --rm -v "$PWD:/work" -w /work cgr.dev/chainguard/cosign \
  sign your-registry/kaniko:latest

# Keyless verification (requires certificate identity and issuer)
docker run --rm -v "$PWD:/work" -w /work cgr.dev/chainguard/cosign \
  verify \
  --certificate-identity="your-email@example.com" \
  --certificate-oidc-issuer="https://accounts.google.com" \
  your-registry/kaniko:latest
```

This approach ensures your custom-built Kaniko images are authentic and haven't been tampered with.
For more details on Cosign signing and verification, see the [Chainguard Cosign documentation](https://edu.chainguard.dev/open-source/sigstore/cosign/how-to-sign-a-container-with-cosign/).

###### Debugging with Kaniko debug image

When troubleshooting build issues, you can use the [Kaniko debug image](https://github.com/chainguard-dev/kaniko#debug-image), which includes additional debugging tools. The debug image contains utilities like `busybox` and `sh` for interactive debugging.

> 🚧 Prerequisites for interactive debugging
> To run interactive debugging commands on your [Elastic CI Stack for AWS EC2](/docs/agent/self-hosted/aws/elastic-ci-stack) instances, you must have configured the `KeyName` CloudFormation stack parameter during stack deployment. This allows you to SSH into the instances as the `ec2-user` to run local Docker commands.

For interactive debugging, you can run the debug image directly on your EC2 instance:

```bash
docker run -it --entrypoint=/busybox/sh gcr.io/kaniko-project/executor:v1.24.0-debug
```

> 📘 Interactive debugging limitations
> Interactive debugging only works when running Docker commands directly on the EC2 instance with the `-it` flags, not when executed through the pipeline environment. Pipeline builds run non-interactively and cannot provide shell access.

To enable debug mode in your pipeline, set the `KANIKO_DEBUG` environment variable:

```yaml
steps:
  - label: ":whale: Build and Push with Kaniko Debug"
    env:
      KANIKO_DEBUG: "true" # Use debug image (accepts: true, on, 1)
      PACKAGE_REGISTRY_NAME: "my-container-registry"
    commands:
      - bash .buildkite/steps/kaniko.sh
```

When `KANIKO_DEBUG` is set to `true`, `on`, or `1`, the pipeline uses the Kaniko debug image instead of the standard image. The debug image includes additional utilities like `busybox`, `sh`, and other debugging tools that can help with troubleshooting build issues, even though interactive shell access is not available in pipeline environments.
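One subtlety when parsing truthy flags like `KANIKO_DEBUG`: an unanchored pattern such as `(true|on|1)` matches substrings, so a value like `none` would accidentally enable debug mode because it contains `on`. A small POSIX-shell sketch of an anchored check (function name is illustrative):

```shell
# Returns success only for the exact values "true", "on", or "1"
is_debug() {
  case "$1" in
    true|on|1) return 0 ;;
    *) return 1 ;;
  esac
}

is_debug "on" && echo "debug enabled"
is_debug "none" || echo "debug disabled"
```

Anchoring the comparison (or using a `case` statement as above) keeps unexpected values from silently switching images.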
The debug image provides several debugging options:

- Verbose logging - add the `--verbosity=debug` flag to the Kaniko command for detailed build logs.
- No-push mode - add the `--no-push` flag to build without pushing to a registry.
- Interactive shell access - run the debug image with the `--entrypoint=/busybox/sh` and `-it` flags (only works when running Docker commands directly on the EC2 instance, not through pipeline builds).

##### Troubleshooting common Kaniko issues

This section covers common issues, along with their resolutions and prevention methods, when using Kaniko with Buildkite Pipelines.

###### "Invalid 'aud' claim" error

- Cause: the OIDC policy is not configured correctly.
- Solution: check your Package Registry's OIDC configuration in Buildkite (ensure it's configured for the correct ecosystem).

###### 401/403 on push

- Cause: OIDC audience mismatch.
- Solution: check that the audience exactly matches `https://packages.buildkite.com/${ORG}/${REG}` and your registry's OIDC settings allow that audience.

###### Image push fails

- Cause: authentication or registry configuration issues.
- Solution: check your Package Registry configuration and OIDC policy.

---

### Buildah

URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/buildah-container-builds

#### Buildah container builds

Unlike Docker, Buildah operates without the need for a persistent daemon. With Buildah, you can build containers from Dockerfiles or Containerfiles (the [Open Container Initiative (OCI)](https://opencontainers.org/) standard format) or through its native command-line interface. This guide shows you how to use Buildah with the [Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws) to build and push container images.

##### Key difference from Docker

Buildah does not use the Docker daemon, which means images built with Buildah are managed separately from Docker images.
When using the [Docker Buildkite plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-buildkite-plugin/) to run Buildah, the images built by Buildah won't be visible to Docker commands running outside the Buildah container. As a result:

- `docker images` won't show Buildah-built images
- Images must be pushed to a registry to be shared between Buildah and Docker environments
- Buildah stores its images in its own storage backend

##### Using Buildah with Elastic CI Stack for AWS

To use Buildah with the Elastic CI Stack for AWS, you need to run Buildah inside a container using the [Docker Buildkite plugin](https://github.com/buildkite-plugins/docker-buildkite-plugin).

###### Building and pushing to Buildkite Package Registries

The following example shows how to build a container image and push it to [Buildkite Package Registries](https://buildkite.com/docs/packages) using [OIDC authentication](/docs/pipelines/security/oidc):

```yaml
env:
  PACKAGE_REGISTRY_NAME: "my-docker-registry"
  BUILDAH_ISOLATION: "chroot"

steps:
  - label: ":whale: Build and Push with Buildah"
    plugins:
      - docker#v5.13.0:
          image: "quay.io/buildah/stable:latest"
          privileged: true
          userns: "host"
          mount-buildkite-agent: true
          command: |
            buildah bud \
              --format docker \
              --file Dockerfile \
              --tag packages.buildkite.com/${BUILDKITE_ORGANIZATION_SLUG}/${PACKAGE_REGISTRY_NAME}/myapp:${BUILDKITE_BUILD_NUMBER} \
              .

            # Verify the image was built
            buildah images

            # Authenticate using OIDC and push to registry
            buildkite-agent oidc request-token \
              --audience "https://packages.buildkite.com/${BUILDKITE_ORGANIZATION_SLUG}/${PACKAGE_REGISTRY_NAME}" \
              --lifetime 300 | \
            buildah login \
              --authfile ./bk-oidc-auth.json \
              "packages.buildkite.com/${BUILDKITE_ORGANIZATION_SLUG}/${PACKAGE_REGISTRY_NAME}" \
              --username buildkite \
              --password-stdin

            buildah push \
              --authfile ./bk-oidc-auth.json \
              packages.buildkite.com/${BUILDKITE_ORGANIZATION_SLUG}/${PACKAGE_REGISTRY_NAME}/myapp:${BUILDKITE_BUILD_NUMBER}
```

###### Configuration breakdown

This section explains the components of the configuration necessary for using Buildah with Buildkite Pipelines.

###### Environment variables

- `PACKAGE_REGISTRY_NAME`: the name of your Buildkite Package Registry.
- `BUILDAH_ISOLATION: "chroot"`: sets the isolation mode for Buildah. The `chroot` mode provides good isolation without requiring additional privileges.

###### Docker plugin configuration

- `image: "quay.io/buildah/stable:latest"`: uses the official Buildah container image.
- `privileged: true`: grants extended privileges to the container. This is required for Buildah to create and manage container images.
- `userns: "host"`: uses the host's user namespace and is necessary for Buildah to function correctly in this configuration.
- `mount-buildkite-agent: true`: mounts the Buildkite agent binary into the container, enabling the use of `buildkite-agent oidc request-token`.

###### Buildah commands

- `buildah bud`: builds an image from a Dockerfile.
    + `--format docker`: produces a Docker-compatible image format.
    + `--file Dockerfile`: specifies the path to the Dockerfile.
    + `--tag`: tags the resulting image.
- `buildah images`: lists the built images (useful for verification).
- `buildah login`: authenticates with a container registry.
    + `--authfile`: specifies where to store authentication credentials.
    + `--username` and `--password-stdin`: provide credentials for authentication.
- `buildah push`: pushes the image to a registry.
    + `--authfile`: uses the authentication file created during login.

##### Understanding the components

This section covers the key components and configuration options for running Buildah with the Elastic CI Stack for AWS.

- Container images: the official Buildah image that runs in privileged mode is `quay.io/buildah/stable:latest`.
- Security contexts: the configuration shown uses privileged mode, where the container runs as root with `privileged: true`, bypassing most security controls.
- Storage driver: Buildah uses the following container storage backends:
    + **overlay**: fast and efficient, used by default. Modern Buildah images support overlay without requiring `/dev/fuse` or additional configuration.
    + **vfs**: a fallback option that works in all environments but is slower, especially with bigger images. It can be specified with `--storage-driver vfs` if overlay encounters issues.
- Storage paths: when running in the container as root (privileged), Buildah uses the system location `/var/lib/containers`.
- Build isolation: the recommended isolation mode for Buildah container environments is `BUILDAH_ISOLATION=chroot`. It provides good isolation without requiring additional privileges.

##### Customizing the build

This section contains instructions for customizing your Buildah builds.

###### Using build arguments

You can pass build arguments to your Dockerfile:

```yaml
command: |
  buildah bud \
    --format docker \
    --build-arg VERSION=${BUILDKITE_BUILD_NUMBER} \
    --build-arg COMMIT=${BUILDKITE_COMMIT} \
    --file Dockerfile \
    --tag packages.buildkite.com/${BUILDKITE_ORGANIZATION_SLUG}/${PACKAGE_REGISTRY_NAME}/myapp:${BUILDKITE_BUILD_NUMBER} \
    .
```

###### Targeting specific build stages

For multi-stage Dockerfiles, you can target a specific stage:

```yaml
command: |
  buildah bud \
    --format docker \
    --target production \
    --file Dockerfile \
    --tag packages.buildkite.com/${BUILDKITE_ORGANIZATION_SLUG}/${PACKAGE_REGISTRY_NAME}/myapp:${BUILDKITE_BUILD_NUMBER} \
    .
```

###### Using an alternative storage driver

If you encounter issues with the default overlay driver, you can use `vfs` as a fallback:

```yaml
command: |
  buildah bud \
    --storage-driver vfs \
    --format docker \
    --file Dockerfile \
    --tag packages.buildkite.com/${BUILDKITE_ORGANIZATION_SLUG}/${PACKAGE_REGISTRY_NAME}/myapp:${BUILDKITE_BUILD_NUMBER} \
    .
```

##### Troubleshooting

This section describes common Buildah issues and ways to solve them.

###### Permission denied errors

- Ensure `privileged: true` is configured in the Docker plugin.
- Verify the `userns: "host"` setting is present.
- Confirm the `BUILDAH_ISOLATION` environment variable is set to `"chroot"`.

###### Storage driver errors

- The default overlay driver should work in privileged mode.
- If overlay fails, try `--storage-driver vfs` as a fallback (slower but more compatible).
- Check that the storage volume has sufficient space.

###### Registry authentication failures

- Ensure the `mount-buildkite-agent: true` setting is configured so `buildkite-agent oidc request-token` is available.
- Verify that the OIDC token audience matches your Package Registry URL exactly.
- Check that the authentication file is being passed correctly to the `buildah push` command.

###### Image not found after build

Remember that Buildah images are separate from Docker images. If you need to use the image in subsequent steps:

- Push the image to a registry and pull it in later steps.
- Use the same Buildah container for all operations on that image.
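The push-then-pull pattern from the first bullet can be sketched as a pipeline fragment (the registry path and step labels are illustrative, and the pull step assumes the agent is already authenticated with the registry):

```yaml
steps:
  - label: ":whale: Build and Push with Buildah"
    # Buildah build and push, configured as shown earlier in this guide

  - wait

  - label: "Use the image"
    command: |
      # The image is visible to Docker here because it comes from the registry,
      # not from Buildah's container-local storage
      docker pull packages.buildkite.com/my-org/my-docker-registry/myapp:${BUILDKITE_BUILD_NUMBER}
      docker run --rm packages.buildkite.com/my-org/my-docker-registry/myapp:${BUILDKITE_BUILD_NUMBER}
```

The `wait` step ensures the push completes before any later step tries to pull the image.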
---

### Depot

URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/depot-elastic-ci-container-builds

#### Container builds with Depot

You can use [Depot](https://depot.dev/) remote builders to build container images in your Buildkite pipelines on agents that are auto-scaled by the [Buildkite Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws). Depot runs Docker builds on dedicated build infrastructure, offloading build workloads from your EC2 agents.

> 🚧 Warning!
> The Depot installation method uses `curl | sh`, which executes scripts directly. Review the installation script before use in production environments. Consider downloading and verifying the script separately, or installing the Depot CLI in your agent bootstrap script for better security control.

##### Special considerations regarding Elastic CI Stack for AWS

When using Depot with the Buildkite Elastic CI Stack for AWS, consider the following requirements and best practices for successful container builds.

###### Depot project configuration

Depot requires a project ID to route builds to the correct infrastructure. You can configure your Depot project in two ways:

1. The `DEPOT_PROJECT_ID` environment variable.
1. A `depot.json` configuration file in your repository.

###### Environment variable approach (recommended for AWS)

Set `DEPOT_PROJECT_ID` in your Buildkite pipeline environment variables or in your Elastic CI Stack agent environment hooks. This approach is recommended for AWS environments as it's easier to manage via AWS Secrets Manager and doesn't require repository changes:

```yaml
steps:
  - label: ":docker: Build with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

You can also set `DEPOT_PROJECT_ID` globally in your Elastic CI Stack configuration using agent environment hooks:

```bash
# In your agent bootstrap script or environment hook
export DEPOT_PROJECT_ID="your-project-id"
```

###### Configuration file (depot.json) approach

Use `depot init` to create a `depot.json` file in your repository. You'll need to authenticate with Depot first to select from your available projects:

```bash
# Authenticate with Depot
depot login

# Initialize the project configuration (displays an interactive list of projects)
depot init
```

The `depot init` command creates a `depot.json` file in the current directory with the following format:

```json
{
  "id": "your-project-id"
}
```

This file is automatically detected by the [Depot CLI](https://github.com/depot/cli) when present in your repository root, and should be committed to your repository. For AWS environments, the environment variable approach is recommended as it provides the most flexibility and doesn't require repository changes.

###### Depot CLI installation

Depot integrates with Docker via a CLI plugin. The [Depot CLI](https://github.com/depot/cli) must be installed on your EC2 agents to enable remote builds. You can install it in your [agent bootstrap script](/docs/agent/cli/reference/bootstrap#running-the-bootstrap-usage) or as part of your build steps.

Install the Depot CLI in your agent bootstrap script:

```bash
# In your Elastic CI Stack agent bootstrap script
curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
```

Alternatively, install it at runtime in your Buildkite pipeline steps:

```yaml
steps:
  - label: "Install Depot and build"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build -t my-image .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

###### Authentication

Depot requires authentication to access your projects. Depot supports [OIDC trust relationships with Buildkite](/docs/pipelines/security/oidc), which is the recommended authentication method as it provides ephemeral tokens without managing static credentials.

###### OIDC trust relationships (recommended)

Configure an [OIDC trust relationship between Buildkite](/docs/pipelines/security/oidc) and Depot to use ephemeral tokens automatically, as explained below. This eliminates the need to manage static tokens and improves security.

Set up the OIDC trust relationship in your Depot project settings. The Depot CLI automatically detects Buildkite's OIDC credentials from the Elastic CI Stack agents and uses them for authentication when an OIDC trust relationship is configured. No additional configuration is needed in your pipeline beyond setting the `DEPOT_PROJECT_ID` variable.

As mentioned in the [Depot Buildkite integration documentation](https://depot.dev/docs/container-builds/integrations/buildkite), the CLI supports OIDC authentication in Buildkite Pipelines by default when you have a trust relationship configured:

```yaml
steps:
  - label: ":docker: Build with Depot (OIDC)"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot build -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      # OIDC authentication is handled automatically by the Depot CLI
      # No DEPOT_TOKEN needed when using OIDC trust relationships
```

###### Static token authentication (alternative)

For environments where OIDC is not available, you can use static project tokens.
Store your Depot token in AWS Secrets Manager and reference it in your pipeline.

Create a secret in AWS Secrets Manager:

```bash
aws secretsmanager create-secret \
  --name buildkite/depot-token \
  --secret-string "your-depot-token"
```

Ensure your Elastic CI Stack agents have IAM permissions to read the secret. Add the following policy to your agent IAM role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:region:account-id:secret:buildkite/depot-token-*"
    }
  ]
}
```

Configure your Buildkite pipeline to retrieve the secret and use it:

```yaml
steps:
  - label: ":docker: Build with Depot"
    command: |
      export DEPOT_TOKEN=$(aws secretsmanager get-secret-value --secret-id buildkite/depot-token --query SecretString --output text)
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
```

> 🚧 Warning!
> Static tokens persist until rotated. OIDC trust relationships provide ephemeral tokens that automatically expire, reducing the risk of credential exposure. Use OIDC whenever possible.

###### Build context and file access

Depot builds require access to your build context, which is typically the checked-out repository on the EC2 agent's filesystem. Ensure your build context is accessible and includes all necessary files for the build.

For large build contexts, Depot efficiently handles context uploads and can optimize transfers. However, consider using `.dockerignore` files to exclude unnecessary files from the build context; Depot respects them when uploading the context.

###### Resource allocation

Since builds run on Depot's infrastructure, your EC2 agents don't need to allocate resources for Docker daemons or build processes. This allows you to use smaller, more cost-effective EC2 instances that primarily handle:

- Repository checkout
- Build orchestration
- Artifact handling
- Post-build steps

Configure your Elastic CI Stack agent instance types accordingly:

- Smaller instance types - agents only need resources for agent operations, not builds
- Network bandwidth - ensure sufficient bandwidth for context uploads and image pulls
- Storage - minimal ephemeral storage needed since builds run remotely

##### Configuration approaches with Depot

Depot supports different workflow patterns for building container images in your Buildkite pipelines, each suited to specific use cases when using the Elastic CI Stack for AWS.

Note that the examples below include `DEPOT_TOKEN` in the environment variables. If you're using OIDC trust relationships (recommended), you can omit `DEPOT_TOKEN` as authentication is handled automatically. Only include `DEPOT_TOKEN` when using static token authentication.

###### Basic Docker build with Depot

You can build images in your Buildkite pipelines using Depot's remote builders. According to the [Depot Buildkite integration documentation](https://depot.dev/docs/container-builds/integrations/buildkite), you can use `depot build` directly:

```yaml
steps:
  - label: ":docker: Build with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot build -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

Alternatively, you can use `depot configure-docker` to configure the Docker CLI to use Depot. In this case, use standard `docker build` commands:

```yaml
steps:
  - label: ":docker: Build with Depot (Docker CLI)"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

###### Building and pushing with Depot

You can build and push images in your Buildkite pipelines using Depot's remote builders. According to the [Depot Buildkite integration documentation](https://depot.dev/docs/container-builds/integrations/buildkite), you can use `depot build` with the `--push` flag:

```yaml
steps:
  - label: ":docker: Build and push with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot build -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} --push .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

If you're pushing to a private registry, you need to authenticate before pushing:

```yaml
steps:
  - label: ":docker: Build and push with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      echo "${REGISTRY_PASSWORD}" | docker login your-registry.example.com -u "${REGISTRY_USERNAME}" --password-stdin
      docker build -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} .
      docker push your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER}
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
      REGISTRY_USERNAME: "${REGISTRY_USERNAME}"
      REGISTRY_PASSWORD: "${REGISTRY_PASSWORD}"
```

###### Building and pushing to AWS ECR with Depot

For AWS ECR, authenticate using the AWS CLI as explained in the [Depot Buildkite integration documentation](https://depot.dev/docs/container-builds/integrations/buildkite):

```yaml
steps:
  - label: ":docker: Build and push to ECR with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      # The AWS CLI is pre-installed on Elastic CI Stack for AWS agents
      aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com
      depot build -t 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:${BUILDKITE_BUILD_NUMBER} --push .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

Alternatively, you can use the [ECR plugin](https://github.com/buildkite-plugins/ecr-buildkite-plugin) for authentication, which works seamlessly with Depot builds:

```yaml
steps:
  - label: ":docker: Build and push to ECR with Depot (ECR plugin)"
    plugins:
      - ecr#v2.11.0:
          login: true
          account-ids: "123456789012"
          region: us-west-2
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build -t 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:${BUILDKITE_BUILD_NUMBER} .
      docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/my-app:${BUILDKITE_BUILD_NUMBER}
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

The [ECR plugin](https://buildkite.com/resources/plugins/buildkite-plugins/ecr-buildkite-plugin/) handles authentication automatically using the Elastic CI Stack agent's IAM role, so no manual credentials are needed.
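The hard-coded ECR references above all follow the standard `<account-id>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>` layout. If you'd rather not repeat it across commands, you can assemble the URI once (all values below are placeholders):

```shell
ACCOUNT_ID="123456789012"
REGION="us-west-2"
REPO="my-app"
TAG="${BUILDKITE_BUILD_NUMBER:-1}"

# Registry host and full image reference, built from the parts above
ECR_REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
IMAGE="${ECR_REGISTRY}/${REPO}:${TAG}"
echo "${IMAGE}"
```

You can then reuse `${ECR_REGISTRY}` in the `docker login` line and `${IMAGE}` in the build and push lines, keeping the account ID and region in one place.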
###### Using Depot with Docker Compose

Depot works seamlessly with Docker Compose builds in your Buildkite pipelines. Configure Depot before running Compose builds:

```yaml
steps:
  - label: ":docker: Build with Depot and Docker Compose"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker compose build
      docker compose push
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

##### Customizing builds with Depot

You can customize your Depot builds in Buildkite pipelines using Depot-specific features and configuration options.

###### Multi-platform builds

You can build for multiple architectures in your Buildkite pipeline using Depot's multi-platform support by passing the `--platform` flag to `depot build`:

```yaml
steps:
  - label: ":docker: Multi-platform build"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot build --platform linux/amd64,linux/arm64 -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} -t your-registry.example.com/app:latest --push .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

Learn more about this option in the [Depot Buildkite integration documentation](https://depot.dev/docs/container-builds/integrations/buildkite).

###### Using Depot cache

Depot provides native caching that works automatically when you use `depot configure-docker`; no additional configuration is required. Depot manages cache layers on its infrastructure, which persist across builds within the same project.

##### Troubleshooting

This section helps you identify and solve issues that might arise when using Depot with Buildkite Pipelines on the Elastic CI Stack for AWS.

###### Depot authentication failures

Builds fail with authentication errors when Depot cannot access your project.

###### Missing or invalid authentication credentials or project ID

For OIDC trust relationships (recommended), ensure the trust relationship is configured in your Depot project settings and that `DEPOT_PROJECT_ID` is set in your pipeline:

```yaml
steps:
  - label: ":docker: Build with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build -t my-image .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      # OIDC authentication handled automatically, no DEPOT_TOKEN needed
```

For static token authentication, ensure your Depot token and project ID are correctly configured. Verify the token is accessible from your EC2 agents:

```yaml
steps:
  - label: ":docker: Build with Depot"
    command: |
      export DEPOT_TOKEN=$(aws secretsmanager get-secret-value --secret-id buildkite/depot-token --query SecretString --output text)
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build -t my-image .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
```

Verify authentication by checking your Depot dashboard. For OIDC, ensure the trust relationship is active. For static tokens, verify the token has access to the specified project and that your EC2 agents have IAM permissions to access AWS Secrets Manager.

###### Depot CLI not found

Builds fail with "depot: command not found" errors.

###### Depot CLI is not installed on the EC2 agent

Install the Depot CLI before using it:

```yaml
steps:
  - label: "Install Depot CLI"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

Alternatively, include the Depot CLI installation in your Elastic CI Stack agent bootstrap script to install it once for all builds.

###### Build context upload failures

Builds fail when uploading the build context to Depot.

###### Network issues or build context too large

- Check network connectivity from your EC2 agents to Depot
- Verify security group rules allow outbound HTTPS traffic to `depot.dev`
- Verify VPC routing and internet gateway configuration
- Use `.dockerignore` files to reduce build context size
- Check Depot service status

###### Docker not configured for Depot

Builds run locally on the EC2 agent instead of on Depot infrastructure.

###### Depot Docker plugin not configured

Run `depot configure-docker` before building:

```yaml
steps:
  - label: "Configure and build"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      docker build -t my-image .
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
```

You can confirm builds are using Depot by looking for `[depot]`-prefixed log lines in the build output.

###### Registry push failures

Pushing images to registries fails after Depot builds.

###### Authentication or network issues when pushing from Depot infrastructure

Ensure registry credentials are properly configured. For private registries, authenticate before pushing:

```yaml
steps:
  - label: ":docker: Build and push with Depot"
    command: |
      curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh
      depot configure-docker
      echo "${REGISTRY_PASSWORD}" | docker login your-registry.example.com -u "${REGISTRY_USERNAME}" --password-stdin
      docker build -t your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER} .
      docker push your-registry.example.com/app:${BUILDKITE_BUILD_NUMBER}
    env:
      DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}"
      DEPOT_TOKEN: "${DEPOT_TOKEN}"
      REGISTRY_USERNAME: "${REGISTRY_USERNAME}"
      REGISTRY_PASSWORD: "${REGISTRY_PASSWORD}"
```

In this example, the build runs on Depot's infrastructure (via `depot configure-docker`), but the `docker push` command runs on the agent, so authentication is configured on the agent.
When using `depot build --push` instead, Depot reads registry credentials from the agent's Docker configuration and performs the push from Depot's infrastructure. ###### AWS Secrets Manager access issues Builds fail when trying to retrieve `DEPOT_TOKEN` from AWS Secrets Manager. ###### EC2 agent IAM role lacks permissions or secret doesn't exist. 1. Verify the secret exists: ```bash aws secretsmanager describe-secret --secret-id buildkite/depot-token ``` 1. Ensure your Elastic CI Stack agent IAM role has the necessary permissions: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret" ], "Resource": "arn\:aws\:secretsmanager:region:account-id\:secret\:buildkite/depot-token-*" } ] } ``` 1. Test secret access from an agent: ```bash aws secretsmanager get-secret-value --secret-id buildkite/depot-token --query SecretString --output text ``` ##### Debugging builds When builds fail or behave unexpectedly with Depot in your Buildkite pipelines, use these debugging approaches to diagnose issues. ###### Enable verbose output Use Docker's build output to see detailed build information. Depot builds will show `[depot]` prefixed log lines indicating Depot is handling the build: ```yaml steps: - label: "\:docker\: Debug build with Depot" command: | curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh depot configure-docker docker build --progress=plain -t my-image . env: DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}" DEPOT_TOKEN: "${DEPOT_TOKEN}" ``` The `--progress=plain` flag shows detailed build output, and you can verify Depot is being used by looking for `[depot]` prefixed lines in the build logs. 
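The Secrets Manager troubleshooting above assumes the Depot token is stored under the `buildkite/depot-token` secret. If you haven't created that secret yet, the following is a sketch of doing so with the AWS CLI (the secret name and region are whatever your stack uses):

```bash
# Store the Depot token once; pipeline steps retrieve it at build time
aws secretsmanager create-secret \
  --name buildkite/depot-token \
  --secret-string "${DEPOT_TOKEN}" \
  --region us-west-2
```

Rotating the token later is a matter of running `aws secretsmanager put-secret-value` with the same `--secret-id`.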
###### Verify Depot configuration Test Depot configuration before running builds: ```yaml steps: - label: "Verify Depot setup" command: | curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh depot configure-docker depot projects list env: DEPOT_PROJECT_ID: "${DEPOT_PROJECT_ID}" DEPOT_TOKEN: "${DEPOT_TOKEN}" ``` This verifies authentication and project access before attempting builds. ###### Test builds locally Test your Dockerfile and build configuration locally before running on Elastic CI Stack: ```bash #### Install Depot CLI locally curl -L https://depot.dev/install-cli.sh | DEPOT_INSTALL_DIR=/usr/local/bin sh #### Configure Depot depot configure-docker #### Test build docker build -t my-image . #### Verify build uses Depot (look for [depot] in output) ``` This helps identify issues with build configuration before running on Elastic CI Stack agents. --- ### Namespace remote builders URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/namespace-remote-builders #### Namespace remote builder container builds [Namespace](https://namespace.so) provides [remote Docker builders](https://namespace.so/docs/solutions/docker-builders) that execute container image builds on dedicated infrastructure outside of your Elastic CI Stack instances. Namespace remote builders offload the CPU and memory-intensive container build workloads to Namespace's infrastructure, freeing your Elastic CI Stack instances to continue running pipeline steps. ##### How it works When using Namespace remote Docker builders with the [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/setup): 1. The stack instance authenticates with Namespace using [Buildkite OIDC](/docs/pipelines/security/oidc) or [AWS Cognito](https://aws.amazon.com/cognito/) (learn more in [Authentication](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/namespace-remote-builders#authentication)). 1. 
The CLI for Namespace (`nsc`) configures [Docker Buildx](https://docs.docker.com/reference/cli/docker/buildx/) on the instance to target the remote builders. 1. Namespace runs the build workload remotely while the Buildkite agent continues orchestrating the pipeline. 1. Built images are pushed to Namespace's registry (`nscr.io`) or any other registry you configure. ##### Prerequisites - Namespace account with a workspace (you can [sign up for it](https://cloud.namespace.so/signin) if you don't have one) - Recent release of the Elastic CI Stack for AWS with outbound access to `namespaceapis.com` - Properly configured authentication ##### Installing the Namespace CLI > 📘 > The Namespace CLI is only available for Linux. Windows instances are not currently supported. Use a [bootstrap script](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/managing-elastic-ci-stack#customizing-instances-with-a-bootstrap-script) to install the Namespace CLI on your Elastic CI Stack instances. Create the script with the following content and upload it to an S3 bucket, then set the `BootstrapScriptUrl` Elastic CI Stack parameter to the S3 URI: ```bash #!/bin/bash set -eo pipefail DOWNLOAD_URL="https://get.namespace.so/packages/nsc/latest?arch=amd64&os=linux" TEMP_TAR=$(mktemp) if curl --fail --location --silent --show-error \ --connect-timeout 30 --max-time 120 \ --output "${TEMP_TAR}" "${DOWNLOAD_URL}"; then tar -xzf "${TEMP_TAR}" -C /usr/local/bin nsc docker-credential-nsc chmod 755 /usr/local/bin/nsc /usr/local/bin/docker-credential-nsc chown buildkite-agent:buildkite-agent /usr/local/bin/nsc /usr/local/bin/docker-credential-nsc rm -f "${TEMP_TAR}" fi ``` ##### Authentication Namespace supports multiple authentication [methods](https://namespace.so/docs/federation). This guide covers the [Buildkite OIDC](/docs/pipelines/security/oidc) authentication and [AWS Cognito](https://aws.amazon.com/cognito/) authentication. 
###### Buildkite OIDC authentication (recommended) [Buildkite OIDC](/docs/pipelines/security/oidc) is recommended for most environments. To start using it with Namespace, contact [support@namespace.so](mailto:support@namespace.so) to register `https://agent.buildkite.com` as a trusted issuer for your Namespace tenant. The OIDC token is then exchanged for a Namespace token that authenticates you to your Namespace workspace: ```bash #### Authenticate using Buildkite OIDC OIDC_TOKEN=$$(buildkite-agent oidc request-token --audience federation.namespaceapis.com) /usr/local/bin/nsc auth exchange-oidc-token \ --token "$$OIDC_TOKEN" \ --tenant_id ``` ###### AWS Cognito authentication Alternatively, you can use [AWS Cognito federation](https://namespace.so/docs/federation/aws) with your [instance IAM profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html). The instance profile will need the following permissions to authenticate to Namespace: - `cognito-identity:GetOpenIdTokenForDeveloperIdentity` - `cognito-identity:GetId` ```bash #### Create pool aws cognito-identity create-identity-pool \ --identity-pool-name namespace-buildkite-federation \ --no-allow-unauthenticated-identities \ --developer-provider-name namespace.so \ --region #### Trust the pool (note the pool ID from output) nsc auth trust-aws-cognito-identity-pool \ --aws_region \ --identity_pool \ --tenant_id ``` Once configured, the instance profile credentials are exchanged for a Cognito token that authenticates you to your Namespace workspace: ```bash #### Authenticate using AWS Cognito /usr/local/bin/nsc auth exchange-aws-cognito-token \ --aws_region \ --identity_pool \ --tenant_id ``` ##### Pushing to external registries Namespace handles authentication to its own registry when you run the `nsc docker login` command.
The Elastic CI Stack for AWS includes an `environment` [hook](/docs/agent/hooks#whats-a-hook) that can sign in to [Docker Hub](https://docs.docker.com/docker-hub/) or [Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html) when you configure Docker and ECR credentials in the stack secrets bucket. See [Managing the Elastic CI Stack](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/managing-elastic-ci-stack#docker-registry-support) for more information. ##### Complete pipeline examples The following examples show complete pipeline configurations for building and pushing container images with Namespace remote builders. ###### Pushing to a Namespace registry This example authenticates with Buildkite OIDC and pushes to the Namespace registry: ```yaml steps: - label: ":docker: Build with Namespace" command: | OIDC_TOKEN=$$(buildkite-agent oidc request-token --audience federation.namespaceapis.com) /usr/local/bin/nsc auth exchange-oidc-token \ --token "$$OIDC_TOKEN" \ --tenant_id /usr/local/bin/nsc docker buildx setup --background --use /usr/local/bin/nsc docker login docker buildx build \ --builder nsc-remote \ --platform linux/amd64,linux/arm64 \ -t nscr.io//:latest \ --push \ . ``` ###### Pushing to Amazon ECR This example authenticates with Buildkite OIDC and pushes to Amazon ECR: ```yaml steps: - label: ":docker: Build with Namespace" command: | OIDC_TOKEN=$$(buildkite-agent oidc request-token --audience federation.namespaceapis.com) /usr/local/bin/nsc auth exchange-oidc-token \ --token "$$OIDC_TOKEN" \ --tenant_id /usr/local/bin/nsc docker buildx setup --background --use docker buildx build \ --builder nsc-remote \ --platform linux/amd64,linux/arm64 \ -t .dkr.ecr..amazonaws.com/:latest \ --push \ . 
``` ###### Pushing to Docker Hub This example authenticates with AWS Cognito and pushes to Docker Hub: ```yaml steps: - label: ":docker: Build with Namespace" command: | /usr/local/bin/nsc auth exchange-aws-cognito-token \ --aws_region \ --identity_pool \ --tenant_id /usr/local/bin/nsc docker buildx setup --background --use docker buildx build \ --builder nsc-remote \ --platform linux/amd64,linux/arm64 \ -t /:latest \ --push \ . ``` ##### Troubleshooting - If authentication fails, contact [support@namespace.so](mailto:support@namespace.so) to register the OIDC issuer or verify AWS Cognito permissions for the stack's IAM role. - If the builder is not found, rerun `nsc docker buildx setup --background --use` before building. - If registry authentication fails, run `nsc docker login` before building. - If shell execution errors occur, ensure the stack is using the default `#!/bin/bash -e -c` shell in step commands. --- ### Troubleshooting URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/troubleshooting #### Troubleshooting the Elastic CI Stack for AWS Infrastructure as code isn't always easy to troubleshoot, but here are some ways to debug exactly what's going on inside the [Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws), and some solutions for specific situations. ##### Using CloudWatch Logs Elastic CI Stack for AWS sends logs to various CloudWatch log streams: * Buildkite agent logs get sent to the `buildkite/buildkite-agent/{instance_id}` log stream. If there are problems within the agent itself, the agent logs should help diagnose. 
* Output from an Elastic CI Stack for AWS instance's startup script ([Linux](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/-/packer/linux/stack/conf/bin/bk-install-elastic-stack.sh) or [Windows](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/-/packer/windows/stack/conf/bin/bk-install-elastic-stack.ps1)) get sent to the `/buildkite/elastic-stack/{instance_id}` log stream. If an instance is failing to launch cleanly, it's often a problem with the startup script, making this log stream especially useful for debugging problems with the Elastic CI Stack for AWS. Additionally, on Linux instances only: * Docker Daemon logs get sent to the `/buildkite/docker-daemon/{instance_id}` log stream. If docker is having a bad day on your machine, look here. * Output from the cloud init process, up until the startup script is initialised, is sent to `/buildkite/cloud-init/output/{instance_id}`. Logs from this stream can be useful for inspecting what environment variables were sent to the startup script. On Windows instances only: * Logs from the UserData execution process (similar to the `/buildkite/cloud-init/output` group above) are sent to the `/buildkite/EC2Launch/UserdataExecution/{instance_id}` log stream. There are a couple of other log groups that the Elastic CI Stack for AWS sends logs to, but their use cases are pretty specific. For a full accounting of what logs are sent to CloudWatch, see the config for [Linux](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/-/packer/linux/stack/conf/cloudwatch-agent/amazon-cloudwatch-agent.json) and [Windows](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/-/packer/windows/stack/conf/cloudwatch-agent/amazon-cloudwatch-agent.json). 
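If you have AWS CLI v2 installed locally, `aws logs tail` is a quick way to follow these log groups from a terminal. A sketch, assuming the group names listed above and an illustrative instance ID:

```bash
# Follow the startup script log group for the whole stack
aws logs tail /buildkite/elastic-stack --follow --since 1h

# Narrow the agent logs to a single instance's stream
aws logs tail buildkite/buildkite-agent \
  --log-stream-names i-1234567890abcdef0 --since 1h
```

`aws logs tail` requires AWS CLI v2; with v1, use `aws logs get-log-events` with explicit `--log-group-name` and `--log-stream-name` arguments instead.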
##### Collecting logs using a script An alternative method to collect the logs is to use the [`log-collector`](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/main/utils/log-collector) script in the [`utils`](https://github.com/buildkite/elastic-ci-stack-for-aws/tree/main/utils) folder of the [Elastic CI Stack for AWS repository](https://github.com/buildkite/elastic-ci-stack-for-aws). The script collects CloudWatch Logs for the instance, Lambda functions, and Auto Scaling activity, then packages them in a zip archive that you can email to Buildkite support at [support@buildkite.com](mailto:support@buildkite.com) for help. ##### Debugging bootstrap script failures When you've configured a custom `BootstrapScriptUrl` parameter but instances aren't working correctly, use the following suggestions to help identify and resolve any issues. ###### Verify the basics * Test whether `BootstrapScriptUrl` is accessible: `curl -f "$BOOTSTRAP_URL" -o bootstrap_script.sh`. * Syntax-check the script: `bash -n bootstrap_script.sh`. * Check the Auto Scaling group activity for launch failures: ```bash aws autoscaling describe-scaling-activities \ --auto-scaling-group-name your-buildkite-asg \ --max-items 10 ``` ###### Examine CloudWatch Logs * `/buildkite/elastic-stack/{instance_id}` - check for the "Running bootstrap script from" message. * `/buildkite/cloud-init/output/{instance_id}` - check the environment setup. * `/buildkite/buildkite-agent/{instance_id}` - verify the agent start.
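To check for the bootstrap marker message across all instances at once, `aws logs filter-log-events` can search a whole log group. A sketch, assuming the log group name above and a one-hour window (`--start-time` takes milliseconds since the epoch):

```bash
# Search the last hour of startup script logs for the bootstrap marker
aws logs filter-log-events \
  --log-group-name /buildkite/elastic-stack \
  --filter-pattern '"Running bootstrap script from"' \
  --start-time "$(( ($(date +%s) - 3600) * 1000 ))"
```

An empty result over a window in which instances launched suggests the startup script never reached the bootstrap step.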
###### Collect detailed information For active instances: ```bash aws ssm send-command \ --instance-ids i-1234567890abcdef0 \ --document-name "AWS-RunShellScript" \ --parameters 'commands=["cat /var/log/elastic-stack-bootstrap-status", "tail -50 /var/log/elastic-stack.log"]' ``` For terminated instances: ```bash aws ec2 get-console-output --instance-id i-1234567890abcdef0 ``` Use the [`log-collector`](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/main/utils/log-collector) script: ```bash ./utils/log-collector.sh -s your-stack-name -r your-region ``` ##### Accessing Elastic CI Stack for AWS instances directly Sometimes, looking at the logs isn't enough to figure out what's going on in your instances. In these cases, it can be useful to access the shell on the instance directly: * If your Elastic CI Stack for AWS has been configured to allow SSH access (using the `AuthorizedUsersUrl` parameter), run `ssh ` in your terminal. * If SSH access isn't available, you can still use AWS SSM to remotely access the instance by finding the instance ID, and then running `aws ssm start-session --target `. ##### Auto Scaling group fails to boot instances Resource shortage can cause this issue. See the Auto Scaling group's Activity log for diagnostics. To fix this issue, change or add more instance types to the `InstanceTypes` template parameter. If 100% of your existing instances are Spot Instances, switch some of them to On-Demand Instances by setting `OnDemandPercentage` parameter to a value above zero. ##### Instances are abruptly terminated This can happen when using Spot Instances. AWS EC2 sends a notification to a spot instance 2 minutes prior to termination. The agent intercepts that notification and attempts to gracefully shut down. If the instance does not shut down gracefully in that time, it is terminated. To identify if your agent instance was terminated, you can inspect the `/buildkite/lifecycled` CloudWatch log group for the instance. 
The example below shows the log line indicating that the instance was sent the spot termination notice. ``` | 2023-07-31 19:19:23.432 | level=info msg="Received termination notice" instanceId=i-abcd notice=spot | i-abcd | 444793955923:/buildkite/lifecycled | ``` If all your existing instances are Spot Instances, switch some of them to On-Demand Instances by setting the `OnDemandPercentage` parameter to a value above zero. For better resilience, you can use step retries to automatically retry a job that has failed due to spot instance reclamation. See [Automatic retry attributes](/docs/pipelines/configure/retry#retry-attributes-automatic-retry-attributes) for more information. ##### Stacks over-provision agents If you have multiple stacks, check that they listen to unique queues—determined by the `BuildkiteQueue` parameter. Each Elastic CI Stack for AWS you launch through CloudFormation should have a unique value for this parameter. Otherwise, each scales out independently to service all the jobs on the queue, but the jobs will be distributed amongst them. This will mean that your stacks are over-provisioned. This could also happen if you have agents that are not part of an Elastic CI Stack for AWS [started with a tag](/docs/agent/cli/reference/start#tags) of the form `queue=`. Any agents started like this will compete with a stack for jobs, but the stack will scale out as if this competition did not exist. ##### Instances fail to boot Buildkite agent See the Auto Scaling group's Activity logs and CloudWatch Logs for the booting instances to determine the issue. Observe where in the `UserData` script the boot is failing. Identify what resource is failing when the instances are attempting to use it, and fix that issue. ##### Instances fail jobs Successfully booted instances can fail jobs for numerous reasons. A frequent source of issues is their disk filling up before the hourly cron job fixes it or terminates them. 
An instance with a full disk can cause jobs to fail. If such an instance is not replaced automatically (for example, because the stack's `MinSize` parameter is greater than zero), you can manually terminate the instance using the EC2 Dashboard. ##### Permission errors when running Docker images with volume mounts By default, the Docker daemon is configured to run containers with [user namespace remapping](https://docs.docker.com/engine/security/userns-remap/). This maps the `root:root` user and group inside the container to `buildkite-agent:docker` on the host. You can disable this using the stack parameter `EnableDockerUserNamespaceRemap`. --- ### Setup URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-mac/setup #### EC2 Mac setup for the Elastic CI Stack for AWS You can run your builds on AWS EC2 Mac using Buildkite's [CloudFormation template](https://github.com/buildkite/elastic-mac-for-aws). This template creates an Auto Scaling group, launch template, and host resource group for maintaining a pool of EC2 Mac instances that run the Buildkite agent. Using Buildkite agents, you can run pipelines and build Xcode-based software projects for macOS, iOS, iPadOS, tvOS, and watchOS. > 🚧 > As you must prepare and supply your own AMI (Amazon Machine Image) for this template, macOS support has **not** been incorporated into the Elastic CI Stack for AWS. Using an Auto Scaling group for your instances ensures booting your macOS Buildkite agents is repeatable, and enables automatic instance replacement when hardware failures occur. ##### Before you start You should have familiarity with: * AWS VPCs * AWS EC2 AMIs * macOS GUI You must also choose an AWS Region with EC2 Mac instances available. See [Amazon EC2 Mac instances](https://aws.amazon.com/ec2/instance-types/mac/) and [Amazon EC2 Dedicated Hosts pricing](https://aws.amazon.com/ec2/dedicated-hosts/pricing/) for details on which regions have EC2 Mac Dedicated Hosts.
> 🚧 Minimum allocation > Dedicated macOS **hosts** on AWS have a minimum billing period of 24 hours (as indicated in the **On-Demand Pricing** section of the [Amazon EC2 Dedicated Hosts pricing](https://aws.amazon.com/ec2/dedicated-hosts/pricing/) page). However, you can scale instances running on the host at will. See also the [Amazon EC2 Mac instances user guide](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html) for more details on AWS EC2 Mac instances. ##### Step 1: Choose a VPC layout Before deploying this template you must [choose a VPC subnet design](/docs/agent/self-hosted/aws/architecture/vpc) and decide which VPC security groups your instances will belong to. Depending on your threat model, you may find running instances in your default VPC's public subnets with a public IP address suitable. Otherwise, you may wish to explore options like separate public/private subnets with a NAT Gateway, and a bastion instance or a VPN to access the private instances over SSH and VNC. See the [AWS VPC Design documentation](/docs/agent/self-hosted/aws/architecture/vpc) for more details, and the [AWS VPC quick start](https://aws.amazon.com/quickstart/architecture/vpc/) for a ready-made CloudFormation template. EC2 Mac Dedicated Hosts are not available in every Availability Zone in the supported regions. You need to provision a VPC subnet in all of your region's Availability Zones to maximize the size of your instance pool. You also need to configure or define the VPC security groups your instance network interfaces will belong to. At a minimum, inbound SSH access is required to set up your initial template EC2 AMI. ##### Step 2: Build an AMI Before deploying this template, you must create a template AMI that will be horizontally scaled across multiple instances. 1. Reserve an [EC2 Mac](https://aws.amazon.com/ec2/instance-types/mac/) Dedicated Host. 1. Boot a macOS instance using your desired AMI on the Dedicated Host.
Ensure the root disk is large enough for the version of Xcode you plan to download and install. 1. Configure the instance VPC subnet, security groups, and key name so that you can access the instance. 1. Using an SSH or AWS SSM session: - Set a password for the `ec2-user` using `sudo passwd ec2-user` - Enable screen sharing using `sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -activate -configure -access -on -restart -agent -privs -all` - Grow the APFS container to use all the available space in your EBS root disk if needed; see the [AWS user guide](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mac-instance-increase-volume.html) for details. 1. Using a VNC session (run SSH port forwarding `ssh -L 5900:localhost:5900 ec2-user@` if direct access is not available): 1. Sign in as the `ec2-user`. 1. Set **Automatically log in as** to `ec2-user` in **System Settings** > **Users & Groups**. 1. Set an empty password in **System Settings** > **Login Password**. 1. Set **Start Screen Saver when inactive** to `Never` in **System Settings** > **Lock Screen**. 1. Install your required version of Xcode, and ensure you launch Xcode at least once so you are presented with the EULA prompt. 1. If you plan to customize the UserData script or build automation tools, note that Homebrew paths differ by architecture: - Apple Silicon (ARM): `/opt/homebrew/bin` - Intel (x86): `/usr/local/bin` 1. Using the AWS EC2 Console, create an AMI from your instance. You do not need to install the `buildkite-agent` in your template AMI; the `buildkite-agent` will be installed at boot time by the launch template's `UserData` script. > 📘 UserData script considerations > The default UserData script installs the Buildkite agent using Homebrew. Since Homebrew is installed under the `ec2-user` account (not root), the UserData script must run Homebrew commands using `su - ec2-user -c`.
If you're customizing the UserData script, ensure you maintain this pattern to avoid "command not found: brew" errors. ##### Step 3: Associate your AMI with a self-managed license in AWS License Manager To launch an instance using a host resource group, the instance AMI must be associated with a **Self-managed license** in **AWS License Manager**. Using the AWS Console, open the **AWS License Manager** and navigate to **Self-managed licenses**. Create a new **Self-managed license**, enter a descriptive name and select a **License type** of `Cores`. Once your self-managed license has been saved, open the detail view for your license. Open the **Associated AMIs** tab and choose **Associate AMI**. From the list of **Available AMIs**, select your macOS template AMI and then click **Associate**. ##### Step 4: Deploy the CloudFormation template Using the VPC and AMI from the previous steps, prepare values for the following required parameters: * `ImageId` from your AMI setup * `RootVolumeSize` no smaller than the template AMI's root disk * `Subnets` from your VPC setup * `SecurityGroupIds` from your VPC setup * `IamInstanceProfile` if accessing AWS services from your builds, provide an Instance Profile ARN with an appropriate IAM role attached * `BuildkiteAgentToken` an Agent Token for your [Buildkite organization](https://buildkite.com/organizations/-/agents) * `BuildkiteAgentQueue` the Buildkite Queue your pipeline steps use There are optional parameters to configure which EC2 Mac instance types to use: * `HostFamily` defaults to `mac1` * `InstanceType` defaults to `mac1.metal` There are also optional parameters to configure the size of the Auto Scaling group: * `MinSize` defaults to 0 * `MaxSize` defaults to 3 The default AWS limit for `mac1.metal` is three Dedicated Hosts per account, per region. If you require more than three instances, request an increased limit in the *AWS Service Quotas Dashboard*.
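You can also inspect and request the relevant quota from the AWS CLI. A sketch using the Service Quotas API; the quota code must be read from the output of the first command before running the second:

```bash
# Find the Dedicated Host quota entries for Mac instance families
aws service-quotas list-service-quotas \
  --service-code ec2 \
  --query "Quotas[?contains(QuotaName, 'Mac')].[QuotaName,QuotaCode,Value]" \
  --output table

# Request an increase using the QuotaCode from the output above
aws service-quotas request-service-quota-increase \
  --service-code ec2 \
  --quota-code QUOTA_CODE_FROM_ABOVE \
  --desired-value 6
```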
###### Deploy using the AWS Console * Use the launch button below to create a CloudFormation stack from the latest version of the Buildkite template: [Launch stack](https://console.aws.amazon.com/cloudformation/home#/stacks/new?stackName=buildkite-mac&templateURL=https://s3.amazonaws.com/buildkite-serverless-apps-us-east-1/elastic-mac/template/latest.yml) * Ensure the selected region in the top menu bar matches the region of your VPC and AMI resources. * Give your stack a unique name, and fill in the required parameters. ###### Deploy using the AWS CLI To deploy using the AWS CLI, save your parameters in a `.parameters.json` file and run the following commands: ``` $ cat .parameters.json > [ { "ParameterKey": "ImageId", "ParameterValue": "ami-0c3a7d0c15048b6b5" }, { "ParameterKey": "RootVolumeSize", "ParameterValue": "250" }, { "ParameterKey": "Subnets", "ParameterValue": "subnet-f3e72abb,subnet-f23fe294" }, { "ParameterKey": "SecurityGroupIds", "ParameterValue": "sg-a09db9d7" }, { "ParameterKey": "BuildkiteAgentQueue", "ParameterValue": "mac" }, { "ParameterKey": "BuildkiteAgentToken", "ParameterValue": "[redacted]" } ] $ make > sed "s/%v/v0.0.1-9-g1790b0d/" build/template.yml $ aws cloudformation deploy --stack-name buildkite-mac --region YOUR_REGION --template-file build/template.yml --parameter-overrides file://$PWD/.parameters.json ``` See the [AWS CloudFormation Deploy CLI documentation](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudformation/deploy.html) for help using the AWS CLI. ##### Step 5: Starting your Buildkite agents Once you have successfully deployed the template, use the deployed stack's **Resources** tab to find the `AutoScaleGroup` and open the **Physical ID** link. **Edit** the selected Auto Scaling group, and set the **Desired capacity** to the number of instances you require. The Auto Scaling group will automatically provision Dedicated Hosts using the host resource group and boot instances on them.
The launch template's `UserData` script will resize the root disk, then install, configure, and start the Buildkite agent. EC2 Mac instances are slower to boot and terminate than Linux instances. If you want to match your **Desired capacity** to your workload, consider configuring [scheduled scaling for your Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html). --- ### Troubleshooting URL: https://buildkite.com/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-mac/troubleshooting #### Troubleshooting The following are solutions to problems some users face when using the [Elastic CI Stack for AWS Mac](https://github.com/buildkite/elastic-mac-for-aws). ##### My Auto Scaling group doesn't launch any instances * If your Auto Scaling group does not launch any instances, open the EC2 Console dashboard and choose **Auto Scaling Groups** from the sidebar. Find your Auto Scaling group and open the **Activity** tab. The **Activity history** table will list the scaling actions that have occurred and any errors that resulted. * There may be a shortage of `mac1.metal` instances in the region, or in the Availability Zones of your VPC subnets. This error is likely to be a temporary one; wait for your Auto Scaling group to attempt to scale out again and see if the error persists. * Your launch template's AMI may not have been associated with a self-managed license in AWS License Manager. Ensure you [associate your AMI](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-mac/setup#step-3-associate-your-ami-with-a-self-managed-license-in-aws-license-manager) and any new AMIs with a self-managed license. Ensure the license configuration has a **License type** of `Cores`. ##### My instances don't start the buildkite-agent Ensure your AMI has been [configured to auto-login as the `ec2-user`](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-mac/setup#step-2-build-an-ami) in the GUI. ##### How do I enable use of Xcode and the iOS simulator?
To allow your pipelines to use Xcode and the iOS simulator, the Buildkite agent's launchd job configuration requires an `Aqua` session type.

##### What user does the agent run as?

The Buildkite agent runs as `ec2-user`.

##### UserData script fails with Homebrew commands

Common errors include:

```
zsh:1: command not found: brew
Error: $HOME must be set to run brew.
```

These occur because the UserData script runs as `root`, but Homebrew is installed under the `ec2-user` account. Ensure your UserData script uses `su -` to run Homebrew commands as the correct user:

```bash
#!/bin/bash
user=ec2-user

su - "${user}" -c 'brew install buildkite/buildkite/buildkite-agent'

config="$(su - ${user} -c 'brew --prefix')"/etc/buildkite-agent/buildkite-agent.cfg
sed -i '' "s/xxx/${BuildkiteAgentToken}/g" "${config}"
echo "tags=\"queue=${BuildkiteAgentQueue},buildkite-mac-stack=%v\"" >> "${config}"
echo "tags-from-ec2=true" >> "${config}"

su - "${user}" -c 'brew services start buildkite/buildkite/buildkite-agent'
```

##### Homebrew service fails to start with launch control error 125

You may see errors like:

```
Error: Failure while executing; `/bin/launchctl enable gui/501/homebrew.mxcl.buildkite-agent` exited with 125.
Warning: running through sudo, using user/* instead of gui/* domain!
```

This is related to GUI service startup; the agent should still start successfully. If problems persist, ensure your AMI was configured with auto-login enabled.

##### Path issues when building custom AMIs with Packer

When using Packer to build custom macOS AMIs, you may encounter issues where commands like `brew` cannot be found. This is usually the result of these executables not being configured in your `PATH` environment variable.
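One way to handle this in a provisioner script is to derive the Homebrew prefix from the host architecture. The following is a rough sketch (not part of the Elastic CI Stack; `BREW_PREFIX` is an illustrative variable name, not something Packer defines):

```shell
#!/bin/bash
# Pick the Homebrew prefix based on the build host's architecture,
# then prepend it to PATH for the rest of the provisioner script.
if [ "$(uname -m)" = "arm64" ]; then
  BREW_PREFIX=/opt/homebrew   # Apple Silicon (ARM)
else
  BREW_PREFIX=/usr/local      # Intel (x86)
fi

PATH="${BREW_PREFIX}/bin:${BREW_PREFIX}/sbin:/usr/bin:/bin:/usr/sbin:/sbin"
export PATH
echo "$PATH"
```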
Add the Homebrew executable paths to your Packer provisioner scripts:

```bash
PATH=/opt/homebrew/bin:/opt/homebrew/sbin:/usr/bin:/bin:/usr/sbin:/sbin
```

Note that the exact path depends on your architecture:

* Apple Silicon (ARM): `/opt/homebrew/bin`
* Intel (x86): `/usr/local/bin`

---

### On AWS EC2 Mac instances

URL: https://buildkite.com/docs/agent/self-hosted/aws/self-serve-install/ec2-mac

#### Installing the Agent on your own AWS EC2 Mac instances

Setting up a macOS AMI that starts a Buildkite agent on launch is a multi-step process. You can start with one of the macOS AMIs from AWS, or with an AMI you've already installed Xcode or other software on. To use Xcode and the iOS Simulator, you must configure auto-login of a GUI session, and launch the Buildkite agent in an `aqua` session as a launchd agent:

1. Reserve an [EC2 Mac](https://aws.amazon.com/ec2/instance-types/mac/) Dedicated Host.
1. Boot a macOS instance using your desired AMI on the Dedicated Host.
1. Configure instance VPC subnets, security groups, and key pairs so that you can access the instance.
1. Using an SSH or AWS SSM session:
    - Set a password for the `ec2-user` using `sudo passwd ec2-user`.
    - Enable screen sharing using `sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -activate -configure -access -on -restart -agent -privs -all`.
    - Grow the APFS container to use all the available space in your EBS root disk if needed; see the [AWS user guide](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mac-instance-increase-volume.html).
1. Using a VNC session (run SSH port forwarding `ssh -L 5900:localhost:5900 ec2-user@` if direct access is not available):
    1. Sign in as the `ec2-user`.
    1. Enable **Automatic login** for the `ec2-user` in **System Preferences** > **Users & Accounts** > **Login Options**.
    1. Disable **Require password** in **System Preferences** > **Security & Privacy** > **General**.
    1. Set system sleep in **System Preferences** > **Energy Saver** > **Turn display off after** to **Never**.
    1. Disable the screen saver in **System Preferences** > **Desktop & Screen Saver** > **Show screen saver after**.
1. Follow the [macOS installation guide](/docs/agent/self-hosted/install/macos#installation) instructions to install the Buildkite agent using Homebrew and configure starting on login.
1. Verify that the Buildkite agent has connected to buildkite.com with your desired agent tags.
1. Create an AMI from your instance.

Your saved AMI can now be used to boot as many macOS instances as you require. To make this process repeatable, save your instance configuration in a [launch template](https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchTemplates.html). To automate instance replacement, use an [Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html) to boot instances and a [host resource group](https://docs.aws.amazon.com/license-manager/latest/userguide/host-resource-groups.html) to reserve Dedicated Hosts.

While using an Auto Scaling group and host resource group to automatically maintain capacity in the face of hardware failures is recommended, load-based dynamic auto-scaling of macOS instances is not. The instances are currently slow to boot and slow to terminate, so load-based auto-scaling is likely to over-provision agents, which carries a high minimum charge per Dedicated Host.

There is an excellent blog post on [running iOS agents in the cloud](https://www.starkandwayne.com/blog/buildkite-2/) that goes into more detail on preparing macOS AMIs using [Packer](https://www.packer.io/).

##### Known issues

* You might need to give the agent [full disk access](https://github.com/buildkite/agent/issues/1400).
---

### Overview

URL: https://buildkite.com/docs/agent/self-hosted/gcp

#### Buildkite agents on Google Cloud Platform

The Buildkite agent can be run on Google Cloud Platform (GCP) using Buildkite's Elastic CI Stack for GCP Terraform module, or by installing the agent on your self-managed instances. This page covers common installation and setup recommendations for different scenarios of using the Buildkite agent on GCP.

##### Using the Elastic CI Stack for GCP Terraform module

The [Elastic CI Stack for GCP](/docs/agent/self-hosted/gcp/elastic-ci-stack) is a Terraform module for an autoscaling Buildkite agent cluster. The agent instances include Docker, Cloud Storage, and Cloud Logging integration. You can build a [custom image](/docs/agent/self-hosted/gcp/elastic-ci-stack/terraform#custom-images) if you need additional tools for your pipelines.

You can use an Elastic CI Stack for GCP deployment to test Linux projects, parallelize large test suites, run Docker containers or docker-compose integration tests, or perform any tasks related to GCP ops.

You can deploy an instance of the Elastic CI Stack for GCP by following the [Terraform setup guide](/docs/agent/self-hosted/gcp/elastic-ci-stack/terraform).

##### Using the Buildkite Agent Stack for Kubernetes on GCP

The Buildkite agent's jobs can be run within a Kubernetes cluster on GCP. To start, you will need your own Kubernetes cluster running on GCP. Learn more in the [Google Kubernetes Engine (GKE) documentation](https://cloud.google.com/kubernetes-engine).

Once your Kubernetes cluster is running on GCP, you can then set up the [Buildkite Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s) to run in this cluster. Learn more about how to set up the Agent Stack for Kubernetes in the [Agent Stack for Kubernetes installation documentation](/docs/agent/self-hosted/agent-stack-k8s/installation).
##### Installing the agent on your own GCP instances

To run the Buildkite agent on your own [Google Compute Engine](https://cloud.google.com/compute) instance, use whichever installer matches your instance operating system. For example, to install on an Ubuntu-based instance:

1. Launch an instance using the latest Ubuntu LTS image [through the console](https://console.cloud.google.com/compute/instancesAdd).
1. Connect using SSH (via the console SSH button or `gcloud compute ssh`).
1. Follow the [setup instructions for Ubuntu](/docs/agent/self-hosted/install/ubuntu).

For other Linux distributions, see:

- [Debian](/docs/agent/self-hosted/install/debian)
- [Red Hat/CentOS](/docs/agent/self-hosted/install/redhat)

###### Configuring agents for production use

When running agents on individual Compute Engine instances, consider:

- **Service account permissions**: create a dedicated service account with minimal required permissions.
- **Metadata server**: use the [metadata server](https://cloud.google.com/compute/docs/metadata/overview) for configuration.
- **Startup scripts**: configure the agent using [startup scripts](https://cloud.google.com/compute/docs/instances/startup-scripts).
- **Systemd integration**: use systemd to manage the agent service (installed by default with package installers).
- **Logging**: configure log shipping to [Cloud Logging](https://cloud.google.com/logging).

##### Uploading artifacts to Google Cloud Storage

You can upload the [artifacts](/docs/pipelines/artifacts) created by your builds to your own [Google Cloud Storage](https://cloud.google.com/storage) bucket.
Configure the agent to target your bucket by exporting the following environment variable using an [environment agent hook](/docs/agent/hooks) (note that this cannot be set via the Buildkite web interface, API, or during pipeline upload):

```shell
export BUILDKITE_ARTIFACT_UPLOAD_DESTINATION="gs://my-bucket/$BUILDKITE_PIPELINE_ID/$BUILDKITE_BUILD_ID/$BUILDKITE_JOB_ID"
```

###### Granting access to Cloud Storage

Make sure the agent has permission to create new objects. If the agent is running on Google Compute Engine or Google Kubernetes Engine, you can grant Storage Write permission to the instance or cluster, or restrict access more specifically using [a service account](https://cloud.google.com/compute/docs/access/service-accounts).

You can also set the application credentials using the environment variable `BUILDKITE_GS_APPLICATION_CREDENTIALS`. From Buildkite agent version 3.15.2 and above, you can also use raw JSON with the `BUILDKITE_GS_APPLICATION_CREDENTIALS_JSON` variable. See the [Managing Pipeline Secrets](/docs/pipelines/security/secrets/managing) documentation to learn about setting up environment variables securely.

###### Configuring access control

If you are using any of the non-public [predefined Access Control Lists (ACLs)](https://cloud.google.com/storage/docs/access-control/lists#predefined-acl) to control permissions on your bucket, you won't have automatic access to your artifacts through the links in the web interface of Buildkite Pipelines. Artifacts will inherit the permissions of the bucket into which they're uploaded.
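As a complete example, an `environment` hook might look like the following sketch. Here `my-bucket` is a placeholder bucket name, and the default values exist only so the snippet runs outside of a real job (in a real hook the agent sets these variables):

```shell
#!/bin/bash
# Sketch of an agent `environment` hook that routes artifacts to a
# per-job path in a hypothetical Cloud Storage bucket.
set -eu

# Normally set by the agent; defaults are for standalone illustration only.
: "${BUILDKITE_PIPELINE_ID:=pipeline}"
: "${BUILDKITE_BUILD_ID:=build}"
: "${BUILDKITE_JOB_ID:=job}"

export BUILDKITE_ARTIFACT_UPLOAD_DESTINATION="gs://my-bucket/${BUILDKITE_PIPELINE_ID}/${BUILDKITE_BUILD_ID}/${BUILDKITE_JOB_ID}"
echo "${BUILDKITE_ARTIFACT_UPLOAD_DESTINATION}"
```

Keeping the pipeline, build, and job IDs in the path keeps artifact objects from different jobs from overwriting each other.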
You can set a specific ACL on an artifact:

```shell
export BUILDKITE_GS_ACL="publicRead"
```

###### Authenticated access to artifacts

If you need to be authenticated to view the objects in your bucket, you can use Google Cloud Storage's [cookie-based authentication](https://cloud.google.com/storage/docs/request-endpoints#cookieauth):

```shell
export BUILDKITE_GCS_ACCESS_HOST="storage.cloud.google.com"
```

To use your own authenticating proxy for access control, set your proxy's domain as the access host:

```shell
export BUILDKITE_GCS_ACCESS_HOST="myproxyhost.com"
```

###### Customizing artifact paths

If your proxy does not follow default GCS artifact path conventions (for example, the bucket name is not included in the URL), you can override the artifact path.

To override the default path, export the environment variable `BUILDKITE_GCS_PATH_PREFIX`:

```shell
export BUILDKITE_GCS_PATH_PREFIX="custom-folder-structure/"
```

The above variable export will cause the artifact path to use your custom prefix instead of the `GCS_BUCKET_NAME`:

```shell
# default path
${BUILDKITE_GCS_ACCESS_HOST}/${GCS_BUCKET_NAME}/${ARTIFACT_PATH}

# using the BUILDKITE_GCS_PATH_PREFIX environment variable
${BUILDKITE_GCS_ACCESS_HOST}/custom-folder-structure/${ARTIFACT_PATH}
```

##### Suggested reading

To continue exploring the possibilities of using the Buildkite agent on Google Cloud Platform, see the following documentation pages:

- [Elastic CI Stack for GCP overview](/docs/agent/self-hosted/gcp/elastic-ci-stack)
- [Terraform setup guide](/docs/agent/self-hosted/gcp/elastic-ci-stack/terraform)
- [Configuration parameters](/docs/agent/self-hosted/gcp/elastic-ci-stack/configuration-parameters)
- [Buildkite Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s)
- [Agent configuration](/docs/agent/self-hosted/configure)
- [Agent hooks](/docs/agent/hooks)

---

### VPC design

URL: https://buildkite.com/docs/agent/self-hosted/gcp/architecture/vpc

#### VPC design for the Elastic CI Stack for GCP

Agent orchestration deployments on GCP require a Virtual Private Cloud (VPC) network. Your VPC needs to provide routable access to the `buildkite.com` service so that `buildkite-agent` processes can connect and retrieve the jobs assigned to them.

##### Network architecture

The Elastic CI Stack for GCP creates a custom VPC network with:

- **Custom VPC**: `10.0.0.0/16` CIDR block
- **Subnet 0**: `10.0.1.0/24` primary subnet
- **Subnet 1**: `10.0.2.0/24` secondary subnet for high availability
- **Cloud NAT**: outbound internet access without external IPs
- **Cloud Router**: dynamic routing

Both subnets have Private Google Access enabled, allowing instances to access Google APIs without external IP addresses.

##### Firewall rules

The stack creates several firewall rules:

- **Internal communication** - allows all traffic between instances (`10.0.0.0/16`).
- **SSH access** (optional) - controlled by `enable_ssh_access` and `ssh_source_ranges`.
- **Health checks** - allows Google health check probes (`35.191.0.0/16`, `130.211.0.0/22`).
- **Identity-Aware Proxy** (optional) - when `enable_iap_access = true`, it enables secure SSH via IAP (`35.235.240.0/20`).

##### Network security options

It is recommended to use private instances with IAP access:

```hcl
enable_ssh_access = false
enable_iap_access = true
```

Alternatively, you can restrict SSH to specific IPs:

```hcl
enable_ssh_access = true
ssh_source_ranges = ["111.222.0.1/24"] # Your office IP range, for example
```

##### Private Google access

Be aware that both subnets have Private Google Access enabled, allowing instances without external IPs to access:

- Cloud Storage
- Secret Manager
- Cloud Logging
- Cloud Monitoring
- Artifact Registry

Traffic stays within Google's network, which provides better network performance than routing to a resource external to the VPC, with no egress charges.
---

### Overview

URL: https://buildkite.com/docs/agent/self-hosted/gcp/elastic-ci-stack

#### Elastic CI Stack for GCP overview

The Buildkite Elastic CI Stack for GCP gives you a private, autoscaling [Buildkite agent](/docs/agent) cluster running on Google Cloud Platform. You can use it to run your builds on your own infrastructure, with complete control over security, networking, and costs.

##### Architecture

The stack is organized into four Terraform modules:

- **[Networking](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp/tree/main/modules/networking)** - VPC, subnets, Cloud NAT, and firewall rules
- **[IAM](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp/tree/main/modules/iam)** - service accounts and permissions for agents and metrics
- **[Compute](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp/tree/main/modules/compute)** - instance groups, autoscaling, and agent configuration
- **[Buildkite agent metrics](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp/tree/main/modules/buildkite-agent-metrics)** - Cloud Function for publishing queue metrics

##### Features

The Buildkite Elastic CI Stack for GCP supports:

- All GCP regions
- Linux operating system (Debian 13)
- Configurable machine types (including ARM instances)
- Configurable autoscaling based on build queue activity
- Docker and Docker Compose v2
- Multi-architecture build support (ARM/x86 cross-platform)
- Cloud Logging for system and Buildkite agent events
- Cloud Monitoring metrics from the Buildkite API
- Support for stable, beta, or edge Buildkite agent releases
- Multiple stacks in the same GCP project
- Rolling updates to stack instances to reduce interruption
- Secret Manager integration for secure token storage
- Preemptible VM support for cost optimization
- Automated Docker garbage collection and disk space management

##### Get started with the Elastic CI Stack for GCP

You can get started with the Buildkite Elastic CI Stack for GCP using Terraform. Follow the [Terraform setup guide](/docs/agent/self-hosted/gcp/elastic-ci-stack/terraform).

##### Architecture comparison

The Elastic CI Stack for GCP is inspired by the [Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws) and provides similar functionality using GCP services:

| Feature | AWS Implementation | GCP Implementation |
|---------|--------------------|--------------------|
| Compute | EC2 Auto Scaling Groups | Managed Instance Groups |
| Networking | VPC, NAT Gateway | VPC, Cloud NAT |
| Secrets | Secrets Manager / Parameter Store | Secret Manager |
| Logging | CloudWatch Logs | Cloud Logging |
| Metrics | CloudWatch Metrics | Cloud Monitoring |
| Autoscaling Metrics | Lambda function | Cloud Function |
| Image Building | Packer | Packer |
| Infrastructure | CloudFormation or Terraform | Terraform |

##### What's on each machine?

This is the list of contents on each machine running the Buildkite Elastic CI Stack for GCP:

- [Debian 13 (Trixie)](https://www.debian.org/releases/trixie/)
- [The Buildkite agent](/docs/agent)
- [Git](https://git-scm.com/)
- [Docker](https://www.docker.com)
- [Docker Compose v2](https://docs.docker.com/compose/)
- [Docker Buildx](https://docs.docker.com/buildx/working-with-buildx/)
- [gcloud CLI](https://cloud.google.com/sdk/gcloud) - useful for performing any ops-related tasks
- [jq](https://stedolan.github.io/jq/) - useful for manipulating JSON responses from CLI tools

For more details on what versions are installed, see the [Packer templates](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp/tree/main/packer). The Buildkite agent runs as user `buildkite-agent`.

##### Supported builds

This stack is designed to run your builds in a shared-nothing pattern similar to the [12 factor application principles](http://12factor.net):

- Each project should encapsulate its dependencies through Docker and Docker Compose.
- Build pipeline steps should assume no state on the machine (and instead rely on the [build meta-data](/docs/pipelines/build-meta-data), [build artifacts](/docs/pipelines/artifacts), or Cloud Storage).
- Secrets, including [SSH keys for source control](/docs/agent/self-hosted/gcp/elastic-ci-stack/terraform#advanced-configuration-ssh-keys-for-source-control), are configured using Secret Manager.

By following these conventions, you get a scalable, repeatable, and source-controlled CI environment that any team within your organization can use.

##### Suggested reading

To gain a better understanding of how Elastic CI Stack for GCP works and how to use it most effectively and securely, check out the following resources:

- [GitHub repo for Elastic CI Stack for GCP](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp)
- [Terraform setup guide](/docs/agent/self-hosted/gcp/elastic-ci-stack/terraform)
- [Configuration parameters for Elastic CI Stack for GCP](/docs/agent/self-hosted/gcp/elastic-ci-stack/configuration-parameters)
- [Architecture overview](/docs/agent/self-hosted/gcp/architecture/vpc)

---

### Terraform

URL: https://buildkite.com/docs/agent/self-hosted/gcp/elastic-ci-stack/terraform

#### Terraform setup for the Elastic CI Stack for GCP

This guide helps you get started with the [Elastic CI Stack for GCP](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp) using [Terraform](https://www.terraform.io/). The Elastic CI Stack for GCP allows you to launch a private, autoscaling [Buildkite agent cluster](/docs/pipelines/security/clusters) in your own GCP project.

##### Before you start

Before deploying the Elastic CI Stack for GCP, review the prerequisites, required skills, and billable services to ensure you have the necessary tools, knowledge, and budget planning in place.
###### Prerequisites

- [Terraform](https://www.terraform.io/downloads.html) version >= 1.0
- [Buildkite account](https://buildkite.com/signup)
- [GCP account](https://cloud.google.com/) with a project
- [gcloud CLI](https://cloud.google.com/sdk/docs/install) configured

###### Required and recommended skills

The Elastic CI Stack for GCP does not require familiarity with the underlying GCP services to deploy it. However, to run builds, some familiarity with the following GCP services is recommended:

- [Google Compute Engine](https://cloud.google.com/products/compute) (to select a `machine_type` appropriate for your workload)
- [Google Cloud Storage](https://cloud.google.com/storage) (for storing build artifacts)
- [Secret Manager](https://cloud.google.com/security/products/secret-manager) (for storing the Buildkite agent token securely)

The Elastic CI Stack for GCP provides defaults and pre-configurations suited for most use cases without the need for additional customization. Still, you'll benefit from familiarity with VPCs, Cloud NAT, and firewall rules for custom instance networking.

For post-deployment diagnostics, deeper familiarity with Compute Engine is recommended so you can access the instances launched to execute Buildkite jobs over SSH or [Identity-Aware Proxy](https://cloud.google.com/iap/docs).

###### Billable services

The Elastic CI Stack for GCP template deploys several billable GCP services. These do not require upfront payment and operate on a pay-as-you-go principle, with the bill proportional to usage.
| Service name | Purpose | Required |
| ---------------- | ----------------------------------------------- | -------- |
| Compute Engine | Deployment of VM instances | ☑️ |
| Persistent Disk | Root disk storage of VM instances | ☑️ |
| Cloud Functions | Publishing queue metrics for autoscaling | ☑️ |
| Secret Manager | Storing the Buildkite agent token (recommended) | ☑️ |
| Cloud Logging | Logs for instances and Cloud Function | ☑️ |
| Cloud Monitoring | Metrics for autoscaling | ☑️ |
| Cloud NAT | Outbound internet access for instances | ☑️ |
| Cloud Storage | Build artifacts storage (if enabled) | ❌ |

Buildkite services are billed according to your [plan](https://buildkite.com/pricing).

###### What's on each machine?

When using the default base image, each machine includes:

- [Debian 13 (trixie)](https://www.debian.org/releases/trixie/)
- [The Buildkite agent](/docs/agent)
- [Git](https://git-scm.com/)
- [Docker](https://www.docker.com) (when using a custom Packer image)
- [Docker Compose v2](https://docs.docker.com/compose/) (when using a custom Packer image)
- [Docker Buildx](https://docs.docker.com/buildx/working-with-buildx/) (when using a custom Packer image)
- [gcloud CLI](https://cloud.google.com/sdk/gcloud) - useful for performing any ops-related tasks
- [jq](https://stedolan.github.io/jq/) - useful for manipulating JSON responses from CLI tools

You can build a [custom image](/docs/agent/self-hosted/gcp/elastic-ci-stack/terraform#custom-images) if you need additional tools for your pipelines. For more details on what versions are installed, see the corresponding [Packer templates](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp/tree/main/packer). The Buildkite agent runs as user `buildkite-agent`.
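If you want a pipeline step that confirms an instance matches this list, a throwaway check along the following lines works; this is a sketch, and the `check_tools` helper is hypothetical rather than part of the stack:

```shell
#!/bin/bash
# Print one line per expected tool, reporting whether it is on PATH.
check_tools() {
  local tool
  for tool in git docker docker-compose jq gcloud; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: $(command -v "$tool")"
    else
      echo "$tool: missing"
    fi
  done
}

check_tools
```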
###### Supported builds

This stack is designed to run your builds in a shared-nothing pattern similar to the [12 factor application principles](http://12factor.net):

- Each project should encapsulate its dependencies through Docker and Docker Compose.
- Build pipeline steps should assume no state on the machine (and instead rely on the [build meta-data](/docs/pipelines/build-meta-data), [build artifacts](/docs/pipelines/artifacts), or Cloud Storage).
- Secrets, including [SSH keys for source control](/docs/agent/self-hosted/gcp/elastic-ci-stack/terraform#advanced-configuration-ssh-keys-for-source-control), are configured using Secret Manager.

By following these conventions, you get a scalable, repeatable, and source-controlled CI environment that any team within your organization can use.

##### Custom images

Custom images help teams ensure that their agents have all required tools and configurations before instance launch. This prevents instances from reverting to the base image state when agents restart, which would lose any manual changes made during run time.
###### Requirements

To use the provided Packer templates, you will need the following installed on your system:

- Docker
- Make
- gcloud CLI

The following GCP IAM permissions are required for building custom images using the provided Packer templates:

```json
{
  "title": "Packer Image Builder",
  "description": "Permissions required to build VM images with Packer",
  "includedPermissions": [
    "compute.disks.create",
    "compute.disks.delete",
    "compute.disks.get",
    "compute.disks.use",
    "compute.images.create",
    "compute.images.delete",
    "compute.images.get",
    "compute.images.useReadOnly",
    "compute.instances.create",
    "compute.instances.delete",
    "compute.instances.get",
    "compute.instances.setMetadata",
    "compute.instances.setServiceAccount",
    "compute.machineTypes.get",
    "compute.networks.get",
    "compute.subnetworks.use",
    "compute.subnetworks.useExternalIp",
    "compute.zones.get",
    "iam.serviceAccounts.actAs"
  ]
}
```

It is also recommended that you have a base knowledge of:

- [Packer](https://developer.hashicorp.com/packer/docs/intro)
- [HashiCorp Configuration Language (HCL)](https://github.com/hashicorp/hcl)
- Bash scripting

###### Creating an image

To create a custom image with Docker support (recommended for production):

```bash
cd packer
./build --project-id your-gcp-project-id
```

This builds a Debian 13-based image with:

- Pre-installed Buildkite agent
- Docker Engine with Compose v2 and Buildx
- Multi-architecture build support
- Automated Docker garbage collection
- Disk space monitoring and self-protection
- Centralized logging with Ops Agent

##### Deploying the stack

This section walks through the deployment process step by step, from obtaining your agent token to initializing and applying your Terraform configuration.

###### Step 1: Get your Buildkite agent token

Obtain the value for the [agent token](/docs/agent/self-hosted/tokens) you'd previously configured for your Buildkite cluster.
> 📘
> If you don't have your agent token's value, you'll need to [create a new one](/docs/agent/self-hosted/tokens#create-a-token), which you can do from the [**Agents** > **Clusters** > your specific cluster page](https://buildkite.com/organizations/-/agents). Once created, don't forget to copy the agent token's value and save it somewhere secure, as you won't be able to see its value from Buildkite again.

###### Step 2: Store the token in Secret Manager (recommended)

For production deployments, store the token in Secret Manager:

```bash
echo -n "your-agent-token" | gcloud secrets create buildkite-agent-token \
  --data-file=- \
  --project=your-project-id

# Verify the secret was created
gcloud secrets describe buildkite-agent-token --project=your-project-id
```

###### Step 3: Create your Terraform configuration

Create a new directory for your Terraform configuration:

```bash
mkdir buildkite-gcp-stack
cd buildkite-gcp-stack
```

Create a `main.tf` file:

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 4.0"
    }
  }
}

module "buildkite_stack" {
  source  = "buildkite/elastic-ci-stack-for-gcp/buildkite"
  version = ">= 0.1.0"

  # Required
  project_id                   = var.project_id
  buildkite_organization_slug  = var.buildkite_organization_slug
  buildkite_agent_token_secret = "projects/${var.project_id}/secrets/buildkite-agent-token/versions/latest"

  # Stack configuration
  stack_name      = "buildkite"
  buildkite_queue = "default"
  region          = var.region

  # Scaling configuration
  min_size = 0
  max_size = 10

  # Instance configuration
  machine_type = "e2-standard-4"
}
```

Create a `variables.tf` file:

```hcl
variable "project_id" {
  description = "GCP project ID"
  type        = string
}

variable "region" {
  description = "GCP region"
  type        = string
  default     = "us-central1"
}

variable "buildkite_organization_slug" {
  description = "Buildkite organization slug"
  type        = string
}
```

Create a `terraform.tfvars` file:

```hcl
project_id                  = "your-gcp-project-id"
region                      = "us-central1"
buildkite_organization_slug = "your-org-slug"
```

Create an `outputs.tf` file (optional):

```hcl
output "network_name" {
  description = "Name of the VPC network"
  value       = module.buildkite_stack.network_name
}

output "instance_group_name" {
  description = "Name of the managed instance group"
  value       = module.buildkite_stack.instance_group_manager_name
}

output "agent_service_account_email" {
  description = "Email of the agent service account"
  value       = module.buildkite_stack.agent_service_account_email
}
```

###### Step 4: Initialize and deploy

- Authenticate with GCP:

```bash
gcloud auth application-default login
```

- Initialize Terraform:

```bash
terraform init
```

- Review the planned changes:

```bash
terraform plan
```

- Deploy the stack:

```bash
terraform apply
```

- Type `yes` when prompted to confirm the deployment.

The module will create:

- VPC network with Cloud NAT
- IAM service accounts with appropriate permissions
- Managed instance group with Buildkite agents
- Cloud Function for autoscaling metrics
- Health checks and autoscaling based on queue depth

##### Advanced configuration

This section covers some of the configurations you might want to use for a deeper customization of your stack.

###### Using a custom VM image

If you built a custom Packer image with Docker support:

```hcl
module "buildkite_stack" {
  source  = "buildkite/elastic-ci-stack-for-gcp/buildkite"
  version = ">= 0.1.0"

  # ... other configuration ...

  # Use custom image family
  image = "buildkite-ci-stack"
}
```

###### Configuring agent tags

Target specific agents in your pipeline steps using tags:

```hcl
module "buildkite_stack" {
  source  = "buildkite/elastic-ci-stack-for-gcp/buildkite"
  version = ">= 0.1.0"

  # ... other configuration ...

  buildkite_agent_tags = "docker=true,os=linux,environment=production"
}
```

Then in your `pipeline.yml`, set the following:

```yaml
steps:
  - command: echo "hello from production"
    agents:
      queue: "default"
      environment: "production"
```

For more information, see the [Queues overview](/docs/agent/queues) page.
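The `buildkite_agent_tags` value is a comma-separated list of `key=value` pairs. As an illustration only (this is not agent code), a quick shell check of that format might look like:

```shell
#!/bin/bash
# Split a tag string into one key=value pair per line and flag any
# pair that is missing a key or a value.
tags="docker=true,os=linux,environment=production"

echo "$tags" | tr ',' '\n' | while IFS='=' read -r key value; do
  if [ -z "$key" ] || [ -z "$value" ]; then
    echo "invalid tag pair: ${key}=${value}" >&2
    exit 1
  fi
  echo "tag: ${key} -> ${value}"
done
```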
###### Multiple queues

To create multiple agent pools with different configurations, deploy multiple stacks with different queue names:

```hcl
# Production stack
module "buildkite_stack_production" {
  source  = "buildkite/elastic-ci-stack-for-gcp/buildkite"
  version = ">= 0.1.0"

  stack_name      = "buildkite-production"
  buildkite_queue = "production"
  machine_type    = "e2-standard-4"
  max_size        = 20

  # ... other configuration ...
}

# Build stack for larger builds
module "buildkite_stack_builds" {
  source  = "buildkite/elastic-ci-stack-for-gcp/buildkite"
  version = ">= 0.1.0"

  stack_name      = "buildkite-builds"
  buildkite_queue = "builds"
  machine_type    = "n1-standard-8"
  max_size        = 10

  # ... other configuration ...
}
```

###### Enabling Cloud Storage access

If your builds need to upload or download artifacts to Cloud Storage:

```hcl
module "buildkite_stack" {
  source  = "buildkite/elastic-ci-stack-for-gcp/buildkite"
  version = ">= 0.1.0"

  # ... other configuration ...

  enable_storage_access = true
}
```

###### Using IAP for secure SSH access

Enable Identity-Aware Proxy for secure SSH access without external IPs:

```hcl
module "buildkite_stack" {
  source  = "buildkite/elastic-ci-stack-for-gcp/buildkite"
  version = ">= 0.1.0"

  # ... other configuration ...

  enable_iap_access = true
}
```

Then connect to instances:

```bash
gcloud compute ssh INSTANCE_NAME \
  --zone ZONE \
  --tunnel-through-iap \
  --project PROJECT_ID
```

###### Restricting SSH access

Restrict SSH access to specific IP ranges:

```hcl
module "buildkite_stack" {
  source  = "buildkite/elastic-ci-stack-for-gcp/buildkite"
  version = ">= 0.1.0"

  # ... other configuration ...

  enable_ssh_access = true
  ssh_source_ranges = ["203.0.113.0/24"] # Your office IP range
}
```

###### SSH keys for source control

The Elastic CI Stack for GCP automatically loads SSH keys from [GCP Secret Manager](https://cloud.google.com/security/products/secret-manager) and adds them to an ephemeral `ssh-agent` for your builds.
This allows your builds to clone private repositories without storing keys on disk. The agent's [environment hook](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp/blob/main/packer/linux/conf/buildkite-agent/hooks/environment) checks for secrets in the following order: 1. `{pipeline-slug}/private_ssh_key` - pipeline-specific SSH key 1. `{pipeline-slug}/id_rsa_github` - pipeline-specific GitHub deploy key 1. `private_ssh_key` - global SSH key shared across all pipelines 1. `id_rsa_github` - global GitHub deploy key shared across all pipelines Where `{pipeline-slug}` is the slug of the pipeline running the build. Pipeline-specific keys are checked first, then the global keys. The first key found is loaded into the agent. > 📘 The `enable_secret_access` Terraform variable must be set to `true` (the default) for agents to access secrets from Secret Manager. ###### Uploading an SSH key To generate a private SSH key and store it in Secret Manager: ```bash #### Generate a deploy key for your project ssh-keygen -t rsa -b 4096 -f id_rsa_buildkite #### Store the private key in Secret Manager as a global key gcloud secrets create private_ssh_key --data-file=id_rsa_buildkite ``` Add the corresponding public key (`id_rsa_buildkite.pub`) to your source code host as a deploy key. ###### Adding labels To apply additional labels to all stack resources for organization and billing, set the `labels` variable: ```hcl module "buildkite_stack" { source = "buildkite/elastic-ci-stack-for-gcp/buildkite" version = ">= 0.1.0" # ... other configuration ...
labels = { team = "platform" environment = "production" cost-center = "engineering" } } ``` ##### Updating the stack To update your stack configuration: - Modify your Terraform configuration files - Review the changes: ```bash terraform plan ``` - Apply the changes: ```bash terraform apply ``` Terraform will automatically perform rolling updates to minimize disruption: - New instances will be created with the updated configuration - Old instances will be drained and terminated - The process of updating the stack respects `max_surge` and `max_unavailable` settings ##### Destroying the stack To tear down the entire stack, use: ```bash terraform destroy ``` ##### Additional information To gain a better understanding of how Elastic CI Stack for GCP works and how to use it most effectively and securely, check out the following resources: - [GitHub repo for Elastic CI Stack for GCP](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp) - [Configuration parameters for Elastic CI Stack for GCP](/docs/agent/self-hosted/gcp/elastic-ci-stack/configuration-parameters) --- ### Configuration URL: https://buildkite.com/docs/agent/self-hosted/gcp/elastic-ci-stack/configuration-parameters #### Configuration parameters The Elastic CI Stack for GCP can be configured using Terraform variables. This page provides a reference of all available configuration options. The following tables list all of the available configuration parameters as Terraform variables in the [root module](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp). Note that you must provide values for the required parameters (`project_id`, `buildkite_organization_slug`, and `buildkite_agent_token` or `buildkite_agent_token_secret`) to use the stack. All other parameters are optional and have sensible defaults. 
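As a quick orientation before the full reference tables, a bare-minimum `terraform.tfvars` supplying only the required parameters could look like the following sketch (the values are placeholder assumptions; every other variable falls back to its default):

```shell
# Write a minimal terraform.tfvars containing only the required variables.
# The values below are placeholders -- substitute your own.
cat > terraform.tfvars <<'EOF'
project_id                  = "my-gcp-project"
buildkite_organization_slug = "my-org"
buildkite_agent_token       = "INSERT-YOUR-AGENT-TOKEN-HERE"
EOF
cat terraform.tfvars
```

For production, consider replacing `buildkite_agent_token` with `buildkite_agent_token_secret` so the token lives in Secret Manager rather than in a plain file.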
##### Required configuration | Variable | Type | Description | |----------|------|-------------| | `project_id` | `string` | GCP project ID where the Elastic CI Stack will be deployed. Must be 6-30 characters long, start with a letter, contain only lowercase letters, numbers, single hyphens, and cannot contain the word 'google'. | | `buildkite_organization_slug` | `string` | Buildkite organization slug (from your Buildkite URL: `https://buildkite.com/`). Used for namespacing of metrics. Must contain only lowercase letters, numbers, and hyphens. | | `buildkite_agent_token` | `string` (sensitive) | Agent token from the relevant cluster of your Buildkite organization. If you haven't stored this token's value securely, or you don't have its value, you'll need to [create a new one](/docs/agent/self-hosted/tokens#create-a-token), which you can do from the [**Agents** > **Clusters** > your specific cluster page](https://buildkite.com/organizations/-/agents). Leave this variable empty if you are using `buildkite_agent_token_secret`. | ##### Stack configuration | Variable | Type | Default | Description | |----------|------|---------|-------------| | `stack_name` | `string` | `"buildkite"` | Name prefix for all resources in this stack. Used to identify and organize resources. Must be a valid GCP resource name: lowercase letters, numbers, and hyphens only. | | `region` | `string` | `"us-central1"` | GCP region where resources will be deployed (for example, `us-central1`, `europe-west1`). | | `zones` | `list(string)` | `null` | List of availability zones within the region for high availability. If not specified, uses all zones in the region.
| ##### Buildkite configuration | Variable | Type | Default | Description | |----------|------|---------|-------------| | `buildkite_agent_token_secret` | `string` | `""` | Alternative to `buildkite_agent_token`: GCP Secret Manager secret name containing the Buildkite agent token (for example, `projects/PROJECT_ID/secrets/buildkite-agent-token/versions/latest`). Recommended for production. | | `buildkite_queue` | `string` | `"default"` | A Buildkite queue name that agents will listen to. Agents in this stack will only pick up jobs targeting this queue. | | `buildkite_agent_tags` | `string` | `""` | Additional tags for Buildkite agents (comma-separated key=value pairs, for example, 'docker=true,os=linux'). Use these to target specific agents in pipeline steps. | | `buildkite_agent_release` | `string` | `"stable"` | Buildkite agent release channel. Allowed values: `stable` (recommended), `beta`, `edge`. | | `buildkite_api_endpoint` | `string` | `"https://agent.buildkite.com/v3"` | Buildkite API endpoint URL. Only change this if using a custom endpoint. | ##### Instance configuration | Variable | Type | Default | Description | |----------|------|---------|-------------| | `machine_type` | `string` | `"e2-standard-4"` | GCP machine type for agent instances (for example, "e2-standard-4", "n1-standard-2", "c2-standard-4"). Must be a valid GCP machine type. See: [GCP Machine Types](https://cloud.google.com/compute/docs/machine-types). | | `image` | `string` | `"debian-cloud/debian-12"` | Source image for boot disk. Use a custom Packer-built image or a public Debian image. | | `root_disk_size_gb` | `number` | `50` | Size of the root disk in GB. Increase for larger Docker images or build artifacts. Range: 10-65536 GB. | | `root_disk_type` | `string` | `"pd-balanced"` | Type of root disk. Allowed values: `pd-standard` (cheaper, slower), `pd-balanced` (recommended), `pd-ssd` (fastest). 
| ##### Scaling configuration | Variable | Type | Default | Description | |----------|------|---------|-------------| | `min_size` | `number` | `0` | Minimum number of agent instances. Set to 0 to scale to zero when idle (cost-effective) or higher than 0 for always-available capacity. Must be ≥ 0. | | `max_size` | `number` | `10` | Maximum number of agent instances. Controls cost ceiling and maximum parallelization. Must be ≥ 1. | | `enable_autoscaling` | `bool` | `true` | Enable autoscaling based on Buildkite job queue metrics. Requires `buildkite-agent-metrics` Cloud Function to be deployed. | | `cooldown_period` | `number` | `60` | Cooldown period in seconds between autoscaling actions to prevent flapping. Must be ≥ 30. | | `autoscaling_jobs_per_instance` | `number` | `1` | Target number of Buildkite jobs per instance for autoscaling. Lower values mean more parallelization and higher cost. Must be ≥ 1. | ##### Networking configuration | Variable | Type | Default | Description | |----------|------|---------|-------------| | `network_name` | `string` | `"elastic-ci-stack"` | Name of the VPC network to create. The stack will create a new VPC with this name. Must be a valid GCP resource name: lowercase letters, numbers, and hyphens only. | | `enable_ssh_access` | `bool` | `true` | Enable SSH access to instances via firewall rule. Set to false for additional security. | | `ssh_source_ranges` | `list(string)` | `["0.0.0.0/0"]` | CIDR blocks allowed to SSH to instances. Restrict to your IP for security (for example, ['203.0.113.0/24']). Only used if `enable_ssh_access` is true. All values must be valid CIDR blocks. | | `instance_tag` | `string` | `"elastic-ci-agent"` | Network tag applied to instances for firewall targeting. Generally no need to change. Must be a valid GCP network tag. | | `enable_iap_access` | `bool` | `false` | Enable Identity-Aware Proxy (IAP) for secure SSH without external IPs or VPN. 
| | `enable_secondary_ranges` | `bool` | `false` | Enable secondary IP ranges for future GKE support. | ##### IAM configuration | Variable | Type | Default | Description | |----------|------|---------|-------------| | `agent_service_account_id` | `string` | `"elastic-ci-agent"` | ID for the Buildkite agent service account. Usually doesn't need changing. Must be 6-30 characters, lowercase letters, digits, and hyphens only. | | `metrics_service_account_id` | `string` | `"elastic-ci-metrics"` | ID for the metrics function service account. Usually doesn't need changing. Must be 6-30 characters, lowercase letters, digits, and hyphens only. | | `agent_custom_role_id` | `string` | `"elasticCiAgentInstanceMgmt"` | ID for the custom IAM role for agent instance management. Usually doesn't need changing. Must be 3-64 characters, letters, numbers, underscores, and periods only. | | `metrics_custom_role_id` | `string` | `"elasticCiMetricsAutoscaler"` | ID for the custom IAM role for metrics autoscaling. Usually doesn't need changing. Must be 3-64 characters, letters, numbers, underscores, and periods only. | | `enable_secret_access` | `bool` | `true` | Grant agents access to Secret Manager. Enable if your builds need to access secrets. | | `enable_storage_access` | `bool` | `false` | Grant agents access to Cloud Storage. Enable if your builds need to upload/download artifacts. | ##### Health check configuration | Variable | Type | Default | Description | |----------|------|---------|-------------| | `enable_autohealing` | `bool` | `true` | Enable automatic replacement of unhealthy instances. | | `health_check_port` | `number` | `22` | Port for health checks (22 for SSH, or custom port if running health endpoint). Range: 1-65535. | | `health_check_interval_sec` | `number` | `30` | How often (in seconds) to perform health checks. Must be ≥ 1. 
| | `health_check_timeout_sec` | `number` | `10` | How long (in seconds) to wait for health check response before marking instance start as failed. Must be ≥ 1. | | `health_check_healthy_threshold` | `number` | `2` | Number of consecutive successful health checks before marking instance healthy. Must be ≥ 1. | | `health_check_unhealthy_threshold` | `number` | `3` | Number of consecutive failed health checks before marking instance unhealthy. Must be ≥ 1. | | `health_check_initial_delay_sec` | `number` | `300` | Time (in seconds) to wait after instance start before beginning health checks. Must be ≥ 0. | ##### Update policy configuration | Variable | Type | Default | Description | |----------|------|---------|-------------| | `max_surge` | `number` | `3` | Maximum number of instances that can be created above target size during rolling updates. Must be ≥ 0. | | `max_unavailable` | `number` | `0` | Maximum number of instances that can be unavailable during rolling updates. Must be ≥ 0. | ##### Security configuration | Variable | Type | Default | Description | |----------|------|---------|-------------| | `enable_secure_boot` | `bool` | `false` | Enable Secure Boot for shielded VM instances (additional security, slight performance overhead). | | `enable_vtpm` | `bool` | `true` | Enable virtual Trusted Platform Module for shielded VM instances (recommended). | | `enable_integrity_monitoring` | `bool` | `true` | Enable integrity monitoring for shielded VM instances (recommended). | ##### Additional configuration | Variable | Type | Default | Description | |----------|------|---------|-------------| | `labels` | `map(string)` | `{}` | Additional labels to apply to all resources for organization and billing. 
| ##### Example configuration Here's an example `terraform.tfvars` file with commonly used parameters: ```hcl #### Required project_id = "my-gcp-project" buildkite_organization_slug = "my-org" buildkite_agent_token_secret = "buildkite-agent-token" # Secret Manager secret name #### Stack identification stack_name = "buildkite-production" region = "us-central1" #### Buildkite configuration buildkite_queue = "default" buildkite_agent_tags = "docker=true,os=linux,environment=production" #### Instance configuration machine_type = "e2-standard-4" root_disk_size_gb = 100 root_disk_type = "pd-balanced" #### Scaling min_size = 1 max_size = 20 #### Security enable_ssh_access = true ssh_source_ranges = ["203.0.113.0/24"] # Your office IP range enable_iap_access = true #### Permissions enable_secret_access = true enable_storage_access = true #### Labels for cost tracking labels = { team = "platform" environment = "production" cost-center = "engineering" } ``` ##### Using Secret Manager for the agent token For production deployments, it's recommended to store the Buildkite agent token in Secret Manager: - Step 1. Create a secret in Secret Manager: ```bash echo -n "your-agent-token" | gcloud secrets create buildkite-agent-token \ --data-file=- \ --project=your-project-id ``` - Step 2.
Configure the stack to use the secret: ```hcl #### In terraform.tfvars buildkite_agent_token_secret = "projects/your-project-id/secrets/buildkite-agent-token/versions/latest" ``` ##### Module-specific parameters For more detailed configuration options at the module level, see: - [Networking module variables](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp/tree/main/modules/networking#variables) - [IAM module variables](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp/tree/main/modules/iam#variables) - [Compute module variables](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp/tree/main/modules/compute#variables) - [Buildkite agent metrics module variables](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp/tree/main/modules/buildkite-agent-metrics#variables) --- ### Troubleshooting URL: https://buildkite.com/docs/agent/self-hosted/gcp/elastic-ci-stack/troubleshooting #### Troubleshooting the Elastic CI Stack for GCP Infrastructure as code isn't always easy to troubleshoot, but here are some ways to debug what's going on inside the [Elastic CI Stack for GCP](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp), and some solutions for troubleshooting specific situations and issues. ##### Using Cloud Logging Elastic CI Stack for GCP sends logs to Cloud Logging via the Ops Agent. 
The following log sources are available: ###### Application logs - Buildkite agent logs - log name: `buildkite_agent` * Contains agent lifecycle events, job execution, and errors * Severity levels: `DEBUG`, `INFO`, `WARNING`, `ERROR`, `CRITICAL` * View in Logs Explorer: `log_name="projects/PROJECT_ID/logs/buildkite_agent"` - Docker Daemon logs (if Docker is installed) - log name: `docker` * Contains Docker daemon events and errors * View in Logs Explorer: `log_name="projects/PROJECT_ID/logs/docker"` - Preemption Monitor logs - log name: `preemption_monitor` * Contains preemptible instance termination handling logs * View in Logs Explorer: `log_name="projects/PROJECT_ID/logs/preemption_monitor"` ###### System logs - System messages - log name: `syslog` * General system messages and events * View in Logs Explorer: `log_name="projects/PROJECT_ID/logs/syslog"` - Authentication logs - log name: `auth` * SSH and authentication events * View in Logs Explorer: `log_name="projects/PROJECT_ID/logs/auth"` ###### Cloud Initialization logs - Cloud-init logs - log name: `cloud_init` * VM bootstrap process logs * View in Logs Explorer: `log_name="projects/PROJECT_ID/logs/cloud_init"` - Cloud-init output - log name: `cloud_init_output` * Output from startup scripts * View in Logs Explorer: `log_name="projects/PROJECT_ID/logs/cloud_init_output"` ###### Viewing logs in Cloud Console 1. Navigate to **Monitoring** > **Logs Explorer** in the Cloud Console 1. 
Use filters to view specific logs View all logs from a specific instance: ```text resource.type="gce_instance" resource.labels.instance_id="INSTANCE_ID" ``` View Buildkite agent errors: ```text resource.type="gce_instance" log_name="projects/PROJECT_ID/logs/buildkite_agent" severity >= ERROR ``` View startup script output: ```text resource.type="gce_instance" log_name="projects/PROJECT_ID/logs/cloud_init_output" ``` ###### Viewing logs with gcloud CLI View recent Buildkite agent logs: ```bash gcloud logging read "resource.type=gce_instance AND log_name=projects/PROJECT_ID/logs/buildkite_agent" \ --limit 50 \ --format json \ --project PROJECT_ID ``` View logs from a specific instance: ```bash gcloud logging read "resource.labels.instance_id=INSTANCE_ID" \ --limit 100 \ --freshness 1h \ --project PROJECT_ID ``` View ERROR-level logs only: ```bash gcloud logging read "resource.type=gce_instance AND severity>=ERROR" \ --limit 50 \ --format json \ --project PROJECT_ID ``` ##### Accessing Elastic CI Stack for GCP instances directly Sometimes, looking at the logs isn't enough to figure out what's going on in your instances. In these cases, it can be useful to access the shell on the instance directly. ###### SSH access (if enabled) If your Elastic CI Stack for GCP has been configured to allow SSH access (`enable_ssh_access = true`): ```bash #### SSH directly (requires external IP or Cloud NAT) gcloud compute ssh INSTANCE_NAME --zone ZONE --project PROJECT_ID ``` ###### Identity-aware proxy (IAP) If IAP is enabled (`enable_iap_access = true`), you can SSH without external IPs: ```bash #### SSH via IAP tunnel gcloud compute ssh INSTANCE_NAME \ --zone ZONE \ --tunnel-through-iap \ --project PROJECT_ID ``` Or use the **SSH** button in the Cloud Console: 1. Navigate to **Compute Engine** > **VM instances** 1. 
Click the **SSH** button next to the instance ###### Serial console For instances that won't boot or are inaccessible: ```bash #### View serial console output gcloud compute instances get-serial-port-output INSTANCE_NAME \ --zone ZONE \ --project PROJECT_ID ``` ##### Managed instance group fails to boot instances Resource shortage or configuration errors can cause this issue. Check the managed instance group's Activity log for diagnostics. Check instance group status: ```bash gcloud compute instance-groups managed describe INSTANCE_GROUP_NAME \ --region REGION \ --project PROJECT_ID ``` Check for quota issues: ```bash gcloud compute project-info describe --project PROJECT_ID ``` ##### Instances are abruptly terminated This can happen when using preemptible instances. GCP sends a notification to a preemptible instance 30 seconds prior to termination. The preemption-monitor service intercepts that notification and attempts to gracefully shut down. ###### To identify if your instance was preempted Check Cloud Logging for the preemption monitor: ```bash gcloud logging read "resource.type=gce_instance AND log_name=projects/PROJECT_ID/logs/preemption_monitor" \ --limit 20 \ --format json \ --project PROJECT_ID ``` Look for log lines indicating termination notice: ```text Received preemption notice for instance INSTANCE_ID ``` ##### Stacks over-provision agents If you have multiple stacks, check that they listen to unique queues determined by the `buildkite_queue` variable. Each Elastic CI Stack for GCP you deploy should have a unique value for this parameter. Otherwise, each stack scales out independently to service all the jobs on the queue, but the jobs will be distributed amongst them, leaving your stacks over-provisioned. This could also happen if you have agents that are not part of an Elastic CI Stack for GCP [started with a tag](/docs/agent/cli/reference/start#tags) of the form `queue=`.
Any agents started like this will compete with a stack for jobs, but the stack will scale out as if this competition did not exist. ##### Instances fail to boot the Buildkite agent Check the managed instance group's activity logs and Cloud Logging for the booting instances to determine the issue. Observe where in the startup script the boot is failing. Identify what resource is failing when the instances are attempting to use it, and fix that issue. Check startup script logs: ```bash gcloud logging read "resource.labels.instance_id=INSTANCE_ID AND log_name=projects/PROJECT_ID/logs/cloud_init_output" \ --limit 100 \ --format json \ --project PROJECT_ID ``` ##### Instances fail jobs Successfully booted instances can fail jobs for numerous reasons. A frequent source of issues is their disk filling up before the hourly cleanup job fixes it or terminates them. Check disk space on an instance: ```bash #### SSH into the instance gcloud compute ssh INSTANCE_NAME --zone ZONE --project PROJECT_ID #### Check disk usage df -h #### Check inode usage df -i #### Check Docker disk usage sudo docker system df ``` Check Docker cleanup logs: ```bash #### View regular cleanup logs sudo journalctl -u docker-gc.service -n 50 #### View emergency cleanup logs sudo journalctl -u docker-low-disk-gc.service -n 50 ``` ###### Perform a manual cleanup If an instance has a full disk, you can manually trigger cleanup: ```bash #### Run regular garbage collection sudo systemctl start docker-gc.service #### Run emergency garbage collection sudo systemctl start docker-low-disk-gc.service #### Check disk space status sudo /usr/local/bin/bk-check-disk-space.sh echo $? # 0 = healthy, 1 = low disk space ``` ##### Autoscaling not working If the managed instance group isn't scaling based on queue depth, you can try the following troubleshooting steps. 
Check if autoscaling is enabled: ```bash gcloud compute instance-groups managed describe INSTANCE_GROUP_NAME \ --region REGION \ --project PROJECT_ID ``` Verify that the buildkite-agent-metrics function is deployed: ```bash gcloud functions list --project PROJECT_ID | grep buildkite-agent-metrics ``` Check if the metrics are being published: ```bash gcloud monitoring time-series list \ --filter 'metric.type="custom.googleapis.com/buildkite/scheduled_jobs"' \ --project PROJECT_ID ``` ##### Permission errors If instances can't access resources, start by checking service account permissions: ```bash gcloud projects get-iam-policy PROJECT_ID \ --flatten="bindings[].members" \ --filter="bindings.members:serviceAccount:elastic-ci-agent@*" ``` ###### Common permission issues - "Can't access Secret Manager" - enable `enable_secret_access = true`. - "Can't access Cloud Storage" - enable `enable_storage_access = true`. - "Can't pull Docker images from Artifact Registry" - grant Artifact Registry Reader role. - "Can't write logs" - verify that Logs Writer role is assigned. ##### Getting help If you're still stuck after trying the troubleshooting steps suggested above: - Check the GitHub repository - [Issues](https://github.com/buildkite/terraform-buildkite-elastic-ci-stack-for-gcp/issues).
- Email Buildkite Support at [support@buildkite.com](mailto:support@buildkite.com) with: * Your stack configuration (redact sensitive values) * Relevant Cloud Logging logs * Terraform error messages * Instance group status and errors ##### Additional information The following GCP documentation resources can help you with the troubleshooting process: - [Cloud Logging documentation](https://cloud.google.com/logging/docs) - [Compute Engine troubleshooting](https://cloud.google.com/compute/docs/troubleshooting) - [Managed instance groups documentation](https://cloud.google.com/compute/docs/instance-groups) --- ### Overview URL: https://buildkite.com/docs/agent/self-hosted/azure #### Buildkite agents on Microsoft Azure The Buildkite agent can be run on Microsoft Azure by installing the agent on your self-managed virtual machines, or by running agent jobs within a Kubernetes cluster using Azure Kubernetes Service (AKS). This page covers common installation and setup recommendations for different scenarios of using the Buildkite agent on Azure. ##### Using the Buildkite Agent Stack for Kubernetes on Azure The Buildkite agent's jobs can be run within a Kubernetes cluster on Azure using Azure Kubernetes Service (AKS). To start, you will need your own Kubernetes cluster running on AKS. Learn more in the [AKS documentation](https://learn.microsoft.com/azure/aks/). Once your Kubernetes cluster is running on AKS, you can then set up the [Buildkite Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s) to run in this cluster. Learn more about how to do this from its [installation documentation](/docs/agent/self-hosted/agent-stack-k8s/installation). ##### Installing the agent on your own Azure instances To run the Buildkite agent on your own [Azure virtual machine](https://azure.microsoft.com/products/virtual-machines), use whichever installer matches your instance operating system. For example, to install on an Ubuntu-based virtual machine: 1. 
Launch a virtual machine using the latest Ubuntu LTS image (create via the portal or `az vm create`). 1. Connect using SSH (using the portal or `az ssh vm`). 1. Follow the Buildkite agent installation instructions for [Ubuntu](/docs/agent/self-hosted/install/ubuntu). For other Linux distributions, see the Buildkite agent installation instructions for: * [Debian](/docs/agent/self-hosted/install/debian) * [Red Hat/CentOS](/docs/agent/self-hosted/install/redhat) ##### Auto-scaling Buildkite agents on Azure If you're interested in auto-scaling your Buildkite agents on Azure, contact [support@buildkite.com](mailto:support@buildkite.com). --- ### Overview URL: https://buildkite.com/docs/agent/self-hosted/install #### Installing the Buildkite agent The Buildkite agent runs on your own machine, whether it's a VPS, server, desktop computer, or embedded device. There are installers for a range of operating systems and architectures. Alternatively, you can install it manually using the instructions below. ##### Manual installation If you need to install the agent on a system without a supported installer, you'll need to perform a manual installation using one of the binaries from [buildkite-agent's releases page](https://github.com/buildkite/agent/releases). Once you have a binary, create `bin` and `builds` directories in `~/.buildkite-agent` and copy the binary and `bootstrap.sh` file into place: ```bash mkdir ~/.buildkite-agent ~/.buildkite-agent/bin ~/.buildkite-agent/builds cp buildkite-agent ~/.buildkite-agent/bin cp bootstrap.sh ~/.buildkite-agent/bootstrap.sh ``` You should now be able to run the agent: ```bash ~/.buildkite-agent/bin/buildkite-agent start --help ``` If your architecture isn't on the releases page, send an email to support and we'll help you out, or check out the [buildkite-agent's README](https://github.com/buildkite/agent?tab=readme-ov-file#installing) for instructions on how to compile it yourself. ##### Upgrade agents To upgrade your agents, you can either: * Use the package manager for your operating system.
* Re-run the installation script. As long as you're using Agent v3 or later, no configuration changes are necessary. --- ### Ubuntu URL: https://buildkite.com/docs/agent/self-hosted/install/ubuntu #### Installing Buildkite agent on Ubuntu The Buildkite agent is supported on Ubuntu versions 18.04 and above using our signed apt repository. ##### Installation First, add our signed apt repository. Buildkite agent versions come in three release channels: - **Stable**: Thoroughly tested, production-ready releases recommended for most users. - **Unstable/Beta**: Newer features that are still being tested, may contain bugs that affect stability. - **Experimental**: Built directly from the `main` branch, may be incomplete or have unresolved issues. The default version of the agent is `stable`. You can get the beta version by using `unstable` instead of `stable` or the experimental version by using `experimental` instead of `stable` in the installation commands that follow. Start by downloading the Buildkite PGP key to a directory that is only writable by `root` (create the directory before running the following command if it doesn't already exist): ```shell curl -fsSL https://keys.openpgp.org/vks/v1/by-fingerprint/32A37959C2FA5C3C99EFBC32A79206696452D198 | sudo gpg --dearmor -o /usr/share/keyrings/buildkite-agent-archive-keyring.gpg ``` > 📘 Is [keys.openpgp.org](https://keys.openpgp.org) down? > If you get a 404 or other error from `curl` in the previous command, see the [Alternative keyservers](#alternative-keyservers) section. 
Then add the signed source to your list of apt sources: ```shell echo "deb [signed-by=/usr/share/keyrings/buildkite-agent-archive-keyring.gpg] https://apt.buildkite.com/buildkite-agent stable main" | sudo tee /etc/apt/sources.list.d/buildkite-agent.list ``` And install the Buildkite agent: ```shell sudo apt-get update && sudo apt-get install -y buildkite-agent ``` Configure your [agent token](/docs/agent/self-hosted/tokens): ```shell sudo sed -i "s/xxx/INSERT-YOUR-AGENT-TOKEN-HERE/g" /etc/buildkite-agent/buildkite-agent.cfg ``` And then start the agent: ```shell sudo systemctl enable buildkite-agent && sudo systemctl start buildkite-agent ``` You can view the logs at: ```shell sudo journalctl -f -u buildkite-agent ``` ##### Updating keys installed using apt-key If you've previously installed keys using `apt-key`, move the Buildkite agent key from `/etc/apt/trusted.gpg` or `/etc/apt/trusted.gpg.d/` to `/usr/share/keyrings/buildkite-agent-archive-keyring.gpg`, making sure that both that file and directory are only writable by `root`. Update your Buildkite agent entries in `/etc/apt/sources.list.d/buildkite-agent.list` to: ```shell deb [signed-by=/usr/share/keyrings/buildkite-agent-archive-keyring.gpg] https://apt.buildkite.com/buildkite-agent stable main ``` ##### SSH key configuration SSH keys should be copied to (or generated into) `/var/lib/buildkite-agent/.ssh/`. For example, to generate a new private key which you can add to your source code host: ```bash $ sudo su buildkite-agent $ mkdir -p ~/.ssh && cd ~/.ssh $ ssh-keygen -t rsa -b 4096 -C "build@myorg.com" ``` See the [Buildkite agent code access](/docs/agent/self-hosted/code-access) documentation for more details. 
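Whether you copy existing keys or generate new ones, OpenSSH refuses to use key material with permissive file modes, so it's worth confirming the permissions afterwards. A small illustrative sketch using a scratch directory (on a real agent the path is `/var/lib/buildkite-agent/.ssh/`, owned by the `buildkite-agent` user):

```shell
# Illustrative permissions check: OpenSSH expects owner-only modes on the
# key directory (700) and private keys (600). Uses a scratch directory;
# on a real agent this would be /var/lib/buildkite-agent/.ssh/.
SSH_DIR="$(mktemp -d)"
chmod 700 "$SSH_DIR"
touch "$SSH_DIR/id_rsa"
chmod 600 "$SSH_DIR/id_rsa"
stat -c '%a %n' "$SSH_DIR" "$SSH_DIR/id_rsa"
```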
##### File locations * Configuration: `/etc/buildkite-agent/buildkite-agent.cfg` * Agent Hooks: `/etc/buildkite-agent/hooks/` * Builds: `/var/lib/buildkite-agent/builds/` * Logs, depending on your system: - `journalctl -f -u buildkite-agent` (systemd) - `/var/log/upstart/buildkite-agent.log` (upstart) - `/var/log/buildkite-agent.log` (older systems) * Agent user home: `/var/lib/buildkite-agent/` * SSH keys: `/var/lib/buildkite-agent/.ssh/` ##### Configuration The configuration file is located at `/etc/buildkite-agent/buildkite-agent.cfg`. See the [configuration documentation](/docs/agent/self-hosted/configure) for an explanation of each configuration setting. ##### Default operating system user running the agent On Ubuntu, the Buildkite agent runs as the `buildkite-agent` operating system user account. You can override this default user through a [systemd modification](#systemd-modifications). ##### Running multiple agents You can run as many parallel agent workers on the one machine as you wish with the `spawn` configuration setting, or by passing the `--spawn` flag. ```ini #### Start 5 workers. Each one independently fetches and executes jobs. spawn=5 ``` ##### Upgrading The Buildkite agent can be upgraded like any other system package: ```shell sudo apt-get update && sudo apt-get upgrade ``` ##### Alternative keyservers The PGP key used to sign the Buildkite agent package is also hosted on the following keyservers. Use these keyservers if the one in the installation instructions is down.
- [keyserver.ubuntu.com](https://keyserver.ubuntu.com) ```shell curl -fsSL 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x32A37959C2FA5C3C99EFBC32A79206696452D198&exact=on&options=mr' | sudo gpg --dearmor -o /usr/share/keyrings/buildkite-agent-archive-keyring.gpg ``` - [pgp.mit.edu](https://pgp.mit.edu) ```shell curl -fsSL 'https://pgp.mit.edu/pks/lookup?op=get&search=0x32A37959C2FA5C3C99EFBC32A79206696452D198&exact=on&options=mr' | sudo gpg --dearmor -o /usr/share/keyrings/buildkite-agent-archive-keyring.gpg ``` ##### Systemd modifications To override specific directives from the `buildkite-agent.service` systemd unit file, implement these configurations using the _drop-in_ directory `/etc/systemd/system/buildkite-agent.service.d`. Within this directory, any files ending with `.conf` are merged in alphanumeric order and parsed after the main `buildkite-agent.service` unit file. Therefore, these `*.conf` files can be used to override or extend the directives of the `buildkite-agent.service` systemd unit file. The following `.conf` file example overrides the operating system user account running the `buildkite-agent` service, and the environment variable for `HOME`: ```conf [Service] #### Run the buildkite-agent service as a different user: User=my-service-account #### Change the environment variable for HOME: Environment=HOME=/opt/my-service-account ``` --- ### Debian URL: https://buildkite.com/docs/agent/self-hosted/install/debian #### Installing Buildkite agent on Debian The Buildkite agent is supported on Debian versions 8 and above using our signed apt repository. ##### Installation Firstly, ensure your list of packages is up to date: ```shell sudo apt-get update ``` > 📘 > Debian doesn't always have `sudo` available, so you can run these commands as root and omit the `sudo`, or install the sudo package as root first.
Next, ensure you have the `apt-transport-https` package installed for the HTTPS package repository, and the `dirmngr` package installed for adding the signing key:

```shell
sudo apt-get install -y apt-transport-https dirmngr curl gpg
```

Now, you can add Buildkite agent's signed apt repository. Buildkite agent versions come in three release channels:

- **Stable**: Thoroughly tested, production-ready releases recommended for most users.
- **Unstable/Beta**: Newer features that are still being tested and may contain bugs that affect stability.
- **Experimental**: Built directly from the `main` branch; may be incomplete or have unresolved issues.

The default version of the agent is `stable`. You can get the beta version by using `unstable` instead of `stable`, or the experimental version by using `experimental` instead of `stable`, in the installation commands that follow.

To proceed with the installation, download the Buildkite PGP key to a directory that is only writable by `root` (create the directory before running the following command if it doesn't already exist):

```shell
curl -fsSL https://keys.openpgp.org/vks/v1/by-fingerprint/32A37959C2FA5C3C99EFBC32A79206696452D198 | sudo gpg --dearmor -o /usr/share/keyrings/buildkite-agent-archive-keyring.gpg
```

> 📘 Is [keys.openpgp.org](https://keys.openpgp.org) down?
> If you get a 404 or other error from `curl` in the previous command, see the [Alternative keyservers](#alternative-keyservers) section.
Then add the signed source to your apt sources list: ```shell echo "deb [signed-by=/usr/share/keyrings/buildkite-agent-archive-keyring.gpg] https://apt.buildkite.com/buildkite-agent stable main" | sudo tee /etc/apt/sources.list.d/buildkite-agent.list ``` And install the Buildkite agent: ```shell sudo apt-get update && sudo apt-get install -y buildkite-agent ``` Configure your [agent token](/docs/agent/self-hosted/tokens): ```shell sudo sed -i "s/xxx/INSERT-YOUR-AGENT-TOKEN-HERE/g" /etc/buildkite-agent/buildkite-agent.cfg ``` And then start the agent: ```shell sudo systemctl enable buildkite-agent && sudo systemctl start buildkite-agent ``` You can view the logs at: ```shell sudo journalctl -f -u buildkite-agent ``` ##### Updating keys installed using apt-key If you've previously installed keys using `apt-key`, move the Buildkite agent key from `/etc/apt/trusted.gpg` or `/etc/apt/trusted.gpg.d/` to `/usr/share/keyrings/buildkite-agent-archive-keyring.gpg`, making sure that both that file and directory are only writable by `root`. Update your Buildkite agent entries in `/etc/apt/sources.list.d/buildkite-agent.list` to: ```shell deb [signed-by=/usr/share/keyrings/buildkite-agent-archive-keyring.gpg] https://apt.buildkite.com/buildkite-agent stable main ``` ##### SSH key configuration SSH keys should be copied to (or generated into) `/var/lib/buildkite-agent/.ssh/`. For example, to generate a new private key which you can add to your source code host: ```bash $ sudo su buildkite-agent $ mkdir -p ~/.ssh && cd ~/.ssh $ ssh-keygen -t rsa -b 4096 -C "build@myorg.com" ``` See the [Buildkite agent code access](/docs/agent/self-hosted/code-access) documentation for more details. 
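If you want to double-check the key installation later, you can inspect the installed keyring. This is a minimal sketch, assuming the keyring path used in the `gpg --dearmor` command earlier; it only reports what it finds:

```shell
# Sketch: check that the dearmored signing key is installed where apt expects it.
# The keyring path matches the `gpg --dearmor -o` destination used above.
keyring=/usr/share/keyrings/buildkite-agent-archive-keyring.gpg
if [ -f "$keyring" ]; then
  gpg --show-keys "$keyring"
else
  echo "keyring not found at $keyring"
fi
```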
##### File locations

* Configuration: `/etc/buildkite-agent/buildkite-agent.cfg`
* Agent Hooks: `/etc/buildkite-agent/hooks/`
* Builds: `/var/lib/buildkite-agent/builds/`
* Logs, depending on your system:
    - `journalctl -f -u buildkite-agent` (systemd)
    - `/var/log/upstart/buildkite-agent.log` (upstart)
    - `/var/log/buildkite-agent.log` (older systems)
* Agent user home: `/var/lib/buildkite-agent/`
* SSH keys: `/var/lib/buildkite-agent/.ssh/`

##### Configuration

The configuration file is located at `/etc/buildkite-agent/buildkite-agent.cfg`. See the [configuration documentation](/docs/agent/self-hosted/configure) for an explanation of each configuration setting.

##### Which user the agent runs as

On Debian, the Buildkite agent runs as user `buildkite-agent`.

##### Running multiple agents

You can run as many parallel agent workers on a single machine as you need with the `spawn` configuration setting, or by passing the `--spawn` flag.

```ini
# Start 5 workers. Each one independently fetches and executes jobs.
spawn=5
```

##### Upgrading

The Buildkite agent can be upgraded like any other system package:

```shell
sudo apt-get update && sudo apt-get upgrade
```

##### Alternative keyservers

The PGP key used to sign the Buildkite agent package is also hosted on the following keyservers. Use these keyservers if the one in the installation instructions is down.
- [keyserver.ubuntu.com](https://keyserver.ubuntu.com) ```shell curl -fsSL 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x32A37959C2FA5C3C99EFBC32A79206696452D198&exact=on&options=mr' | sudo gpg --dearmor -o /usr/share/keyrings/buildkite-agent-archive-keyring.gpg ``` - [pgp.mit.edu](https://pgp.mit.edu) ```shell curl -fsSL 'https://pgp.mit.edu/pks/lookup?op=get&search=0x32A37959C2FA5C3C99EFBC32A79206696452D198&exact=on&options=mr' | sudo gpg --dearmor -o /usr/share/keyrings/buildkite-agent-archive-keyring.gpg ``` ##### Systemd modifications To override specific directives from the `buildkite-agent.service` systemd unit file, implement these configurations using the _drop-in_ directory `/etc/systemd/system/buildkite-agent.service.d`. Within this directory, any files ending with `.conf` are merged in alphanumeric order and parsed after the main `buildkite-agent.service` unit file. Therefore, these `*.conf` files can be used to override or extend the directives of the `buildkite-agent.service` systemd unit file. 
The following `.conf` file example overrides the operating system user account running the `buildkite-agent` service, and the environment variable for `HOME`:

```conf
[Service]
# Run the buildkite-agent service as a different user:
User=my-service-account
# Change the environment variable for HOME:
Environment=HOME=/opt/my-service-account
```

After adding or changing a drop-in file, run `sudo systemctl daemon-reload` and then restart the `buildkite-agent` service for the override to take effect.

---

### Red Hat/CentOS

URL: https://buildkite.com/docs/agent/self-hosted/install/redhat

#### Installing Buildkite agent on Red Hat Enterprise Linux, CentOS, and Amazon Linux

The Buildkite agent is supported on the following operating systems, using the yum repository:

- Red Hat Enterprise Linux
    + Red Hat Enterprise Linux 7 (RHEL7)
    + Red Hat Enterprise Linux 8 (RHEL8)
    + Red Hat Enterprise Linux 9 (RHEL9)
    + Red Hat Enterprise Linux 10 (RHEL10)
- CentOS
    + CentOS 7
    + CentOS 8
- Amazon Linux
    + Amazon Linux 2 (AL2)
    + Amazon Linux 2023 (AL2023)

##### Installation

Start by adding the yum repository for your architecture (if unsure, run `uname -m` to find your system's architecture). Buildkite agent versions come in three release channels:

- **Stable**: Thoroughly tested, production-ready releases recommended for most users.
- **Unstable/Beta**: Newer features that are still being tested and may contain bugs that affect stability.
- **Experimental**: Built directly from the `main` branch; may be incomplete or have unresolved issues.

The default version of the agent is `stable`. You can get the beta version by using `unstable` instead of `stable`, or the experimental version by using `experimental` instead of `stable`, in the installation commands that follow.

> 📘
> The `repo_gpgcheck=0` parameter is required when additional OS hardening that verifies the GPG signature of the repository's metadata has been enabled. Without this parameter disabling metadata signature checking, the package installation will not succeed.
For 64-bit (x86_64):

```shell
sudo sh -c 'echo -e "[buildkite-agent]\nname = Buildkite Pty Ltd\nbaseurl = https://yum.buildkite.com/buildkite-agent/stable/x86_64/\nenabled=1\ngpgcheck=0\nrepo_gpgcheck=0\npriority=1" > /etc/yum.repos.d/buildkite-agent.repo'
```

For 32-bit (i386):

```shell
sudo sh -c 'echo -e "[buildkite-agent]\nname = Buildkite Pty Ltd\nbaseurl = https://yum.buildkite.com/buildkite-agent/stable/i386/\nenabled=1\ngpgcheck=0\nrepo_gpgcheck=0\npriority=1" > /etc/yum.repos.d/buildkite-agent.repo'
```

For ARM 64-bit (aarch64):

```shell
sudo sh -c 'echo -e "[buildkite-agent]\nname = Buildkite Pty Ltd\nbaseurl = https://yum.buildkite.com/buildkite-agent/stable/aarch64/\nenabled=1\ngpgcheck=0\nrepo_gpgcheck=0\npriority=1" > /etc/yum.repos.d/buildkite-agent.repo'
```

Then install the agent:

```shell
sudo yum -y install buildkite-agent
```

Configure your [agent token](/docs/agent/self-hosted/tokens):

```shell
sudo sed -i "s/xxx/INSERT-YOUR-AGENT-TOKEN-HERE/g" /etc/buildkite-agent/buildkite-agent.cfg
```

After the installation, you can start the agent and tail the logs by using the following commands:

```shell
sudo systemctl enable buildkite-agent && sudo systemctl start buildkite-agent
sudo tail -f /var/log/messages
```

##### SSH key configuration

SSH keys should be copied to (or generated into) `/var/lib/buildkite-agent/.ssh/`. For example, to generate a new private key which you can add to your source code host:

```bash
$ sudo su buildkite-agent
$ mkdir -p ~/.ssh && cd ~/.ssh
$ ssh-keygen -t rsa -b 4096 -C "build@myorg.com"
```

See the [Buildkite agent code access](/docs/agent/self-hosted/code-access) documentation for more details.
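For reference, the `echo -e` one-liners earlier in this section write a `/etc/yum.repos.d/buildkite-agent.repo` file with contents equivalent to the following (x86_64 shown; substitute the architecture in `baseurl` as appropriate):

```ini
[buildkite-agent]
name = Buildkite Pty Ltd
baseurl = https://yum.buildkite.com/buildkite-agent/stable/x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
priority=1
```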
##### File locations

- Configuration: `/etc/buildkite-agent/buildkite-agent.cfg`
- Agent Hooks: `/etc/buildkite-agent/hooks/`
- Builds: `/var/buildkite-agent/builds/`
- Logs, depending on your system:
    + `journalctl -f -u buildkite-agent` (systemd)
    + `/var/log/buildkite-agent.log` (older systems)
- Agent user home: `/var/lib/buildkite-agent/`
- SSH keys: `/var/lib/buildkite-agent/.ssh/`

##### Configuration

The configuration file is located at `/etc/buildkite-agent/buildkite-agent.cfg`. See the [configuration documentation](/docs/agent/self-hosted/configure) for an explanation of each configuration setting.

##### Which user the agent runs as

On Red Hat, the Buildkite agent runs as user `buildkite-agent`.

##### Running multiple agents

You can run as many parallel agent workers on a single machine as you need with the `spawn` configuration setting, or by passing the `--spawn` flag.

```ini
# Start 5 workers. Each one independently fetches and executes jobs.
spawn=5
```

##### Upgrading

```shell
sudo yum clean expire-cache && sudo yum update buildkite-agent
```

##### Systemd modifications

To override specific directives from the `buildkite-agent.service` systemd unit file, implement these configurations using the _drop-in_ directory `/etc/systemd/system/buildkite-agent.service.d`. Within this directory, any files ending with `.conf` are merged in alphanumeric order and parsed after the main `buildkite-agent.service` unit file. Therefore, these `*.conf` files can be used to override or extend the directives of the `buildkite-agent.service` systemd unit file.
The following `.conf` file example overrides the operating system user account running the `buildkite-agent` service, and the environment variable for `HOME`:

```conf
[Service]
# Run the buildkite-agent service as a different user:
User=my-service-account
# Change the environment variable for HOME:
Environment=HOME=/opt/my-service-account
```

After adding or changing a drop-in file, run `sudo systemctl daemon-reload` and then restart the `buildkite-agent` service for the override to take effect.

---

### FreeBSD

URL: https://buildkite.com/docs/agent/self-hosted/install/freebsd

#### Installing Buildkite agent on FreeBSD

You can install Buildkite agent on most FreeBSD systems.

##### Installation

[FreeBSD](https://www.freebsd.org/) allows you to install Buildkite agent using the `pkg` package manager.

```shell
pkg install buildkite-agent
```

Configure your [agent token](/docs/agent/self-hosted/tokens) (note the empty string after `-i`, which BSD `sed` requires for in-place editing):

```shell
sudo sed -i '' "s/xxx/INSERT-YOUR-AGENT-TOKEN-HERE/g" /usr/local/etc/buildkite/buildkite-agent.cfg
```

Then, start the agent:

```shell
buildkite-agent start
```

Alternatively, you can follow the [manual installation instructions](/docs/agent/self-hosted/install#manual-installation).

##### SSH key configuration

SSH keys should be copied to (or generated into) `~/.ssh/` for the user the agent is running as. For example, to generate a new private key which you can add to your source code host:

```bash
$ mkdir -p ~/.ssh && cd ~/.ssh
$ ssh-keygen -t rsa -b 4096 -C "build@myorg.com"
```

See the [Buildkite agent code access](/docs/agent/self-hosted/code-access) documentation for more details.

##### File locations

* Configuration: `/usr/local/etc/buildkite/buildkite-agent.cfg`
* Agent Hooks: `/usr/local/etc/buildkite/hooks`
* Builds: `/usr/local/var/buildkite/builds`
* SSH keys: `~/.ssh`

##### Configuration

The configuration file is located at `/usr/local/etc/buildkite/buildkite-agent.cfg`. See the [configuration documentation](/docs/agent/self-hosted/configure) for an explanation of each configuration setting.
##### Upgrading

```shell
pkg upgrade buildkite-agent
```

---

### macOS

URL: https://buildkite.com/docs/agent/self-hosted/install/macos

#### Installing Buildkite agent on macOS

The Buildkite agent is supported on macOS 11 (Big Sur) or newer using Homebrew or the Buildkite installer script, and supports pre-release versions of both macOS and Xcode.

##### Installation

We recommend installing the agent with [Homebrew](http://brew.sh/), using the [Buildkite formula repository](https://github.com/buildkite/homebrew-buildkite). If you don't use Homebrew, follow the [Linux](/docs/agent/self-hosted/install/linux) install instructions instead.

To install the agent using Homebrew:

1. On the command line, install the agent by running:

    ```shell
    brew install buildkite/buildkite/buildkite-agent
    ```

1. Add your [agent token](/docs/agent/self-hosted/tokens) to authenticate the agent by replacing `INSERT-YOUR-AGENT-TOKEN-HERE` with your agent token and running:

    ```shell
    sed -i '' "s/xxx/INSERT-YOUR-AGENT-TOKEN-HERE/g" "$(brew --prefix)"/etc/buildkite-agent/buildkite-agent.cfg
    ```

    **Note:** To verify that your agent token has been added to the `buildkite-agent.cfg` file, run `cat $(brew --prefix)/etc/buildkite-agent/buildkite-agent.cfg`, and check that the output contains your agent token.

1. Start the agent by running:

    ```shell
    buildkite-agent start
    ```

##### SSH key configuration

SSH keys should be copied to (or generated into) the `.ssh` directory in the user's home directory (for example, `/Users/alice/.ssh`). For example, to generate a new private key which you can add to your source code host:

```bash
$ mkdir -p ~/.ssh && cd ~/.ssh
$ ssh-keygen -t rsa -b 4096 -C "build@myorg.com"
```

See the [Buildkite agent code access](/docs/agent/self-hosted/code-access) documentation for more details.

##### File locations

File locations depend on your installation method and Mac hardware.
###### Homebrew installation To see the paths to the agent's configuration, hooks, builds, and logs on your system, run `brew info buildkite-agent`. The typical paths for [Mac computers with Apple silicon](https://support.apple.com/en-gb/HT211814) (such as M1 chips) are: * Configuration: `/opt/homebrew/etc/buildkite-agent/buildkite-agent.cfg` * Agent Hooks: `/opt/homebrew/etc/buildkite-agent/hooks` * Builds: `/opt/homebrew/buildkite-agent/builds` * Log: `/opt/homebrew/var/log/buildkite-agent.log` The typical paths for Mac computers with Intel processors are: * Configuration: `/usr/local/etc/buildkite-agent/buildkite-agent.cfg` * Agent Hooks: `/usr/local/etc/buildkite-agent/hooks` * Builds: `/usr/local/var/buildkite-agent/builds` * Log: `/usr/local/var/log/buildkite-agent.log` ###### Linux installer script on macOS * Configuration: `~/.buildkite-agent/buildkite-agent.cfg` * Agent Hooks: `~/.buildkite-agent/hooks` * Builds: `~/.buildkite-agent/builds` ##### Configuration See the [configuration documentation](/docs/agent/self-hosted/configure) for an explanation of each configuration setting. ##### Which user the agent runs as On macOS, the Buildkite agent runs as the user who started the `launchd` service. 
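Because the Homebrew prefix differs between Apple silicon and Intel Macs, scripts that reference these paths can derive the prefix instead of hard-coding it. A minimal sketch, assuming a Homebrew install (the fallback value here is for illustration only):

```shell
# Resolve agent paths from the active Homebrew prefix instead of hard-coding them
if command -v brew >/dev/null 2>&1; then
  prefix="$(brew --prefix)"
else
  prefix="/opt/homebrew"  # illustrative fallback (the Apple silicon default)
fi
echo "config: ${prefix}/etc/buildkite-agent/buildkite-agent.cfg"
echo "hooks:  ${prefix}/etc/buildkite-agent/hooks"
```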
##### Starting on login

If you installed the agent using Homebrew you can run the following command to get instructions on how to install the correct plist and have buildkite-agent start on login:

```bash
brew info buildkite-agent
```

If you installed the buildkite-agent using the [Linux install script](linux) then you'll need to install the plist yourself using the following commands:

```bash
# Download the launchd config to ~/Library/LaunchAgents/
curl -o ~/Library/LaunchAgents/com.buildkite.buildkite-agent.plist https://raw.githubusercontent.com/buildkite/agent/main/templates/launchd_local_with_gui.plist

# Set buildkite-agent to be run as the current user (a full user, created using System Prefs)
sed -i '' "s/your-build-user/$(whoami)/g" ~/Library/LaunchAgents/com.buildkite.buildkite-agent.plist

# Create the agent's log directory with the correct permissions
mkdir -p ~/.buildkite-agent/log && sudo chmod 775 ~/.buildkite-agent/log

# Start the agent
launchctl load ~/Library/LaunchAgents/com.buildkite.buildkite-agent.plist

# Check the logs
tail -f ~/.buildkite-agent/log/buildkite-agent.log
```

> 🚧 Troubleshooting: `launchctl` fails with "Could not find domain for"
> Ensure that you have a user logged in to the macOS host, then re-run: `launchctl load ~/Library/LaunchAgents/com.buildkite.buildkite-agent.plist`

##### Running multiple agents

Launching and managing multiple agents can be done using `launchd`. If you need the same configuration on each agent, either configure the `launchd` service to use the [`--spawn` flag](/docs/agent/cli/reference/start#starting-an-agent-options) on the `buildkite-agent`, or the [`spawn` setting](/docs/agent/self-hosted/configure#spawn) in the `buildkite-agent.cfg` file. Using the existing agent `plist`, add the spawn flag to the `ProgramArguments` and change the number to how many agents you want to run.
The example below starts five agents each time the service is started:

```xml
<key>ProgramArguments</key>
<array>
  <string>/Users/your-build-user/.buildkite-agent/bin/buildkite-agent</string>
  <string>start</string>
  <string>--spawn=5</string>
</array>
```

If your agents each need different configuration, you can create multiple `launchd` services:

1. Find your agent's `plist`. If you installed the agent with Homebrew you can find the `plist` in your user's `~/Library/LaunchAgents` directory. If you installed with the Linux script, you can take a copy of the [template plist](https://raw.githubusercontent.com/buildkite/agent/main/templates/launchd_local_with_gui.plist) from the Agent's GitHub repository.
2. Make as many copies of the plist as you require, one per configuration, ensuring that each has a unique label.
3. Once you've edited your plists with your custom config, make sure that all the referenced paths exist and have the correct permissions. See the [Starting on Login](#starting-on-login) section above for an example of how to check directories and permissions.
4. Load each `plist` into `launchd` using `launchctl`.

##### Upgrading

If you installed the agent using Homebrew you can use the standard brew upgrade command to update the agent:

```shell
brew update && brew upgrade buildkite-agent
```

If you installed the buildkite-agent using the [Linux install script](linux) then you should run the installer script again and it will update your agent.

---

### Windows

URL: https://buildkite.com/docs/agent/self-hosted/install/windows

#### Installing Buildkite agent on Windows

The Buildkite agent is supported on Windows 10, Windows Server 2016, and newer.
You can use either of the two installation methods:

- [Automated installation (using PowerShell)](/docs/agent/self-hosted/install/windows#automated-install-with-powershell)
- [Manual installation](/docs/agent/self-hosted/install/windows#manual-installation)

> 🚧 Security considerations
> The agent runs scripts from the agent's hooks directory, and checks out and runs scripts from code repositories. Please consider the file system permissions for these directories carefully, especially when operating in a multi-user environment.

##### Automated install with PowerShell

You'll need to run the automated installer within PowerShell with administrative privileges. Once you're in an elevated PowerShell session, you can run this script to install the latest version of the agent:

```powershell
PS> $env:buildkiteAgentToken = ""
PS> Set-ExecutionPolicy Bypass -Scope Process -Force
PS> iex ((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/buildkite/agent/main/install.ps1'))
```

##### Manual installation

1. Download the latest Windows release from [Buildkite agent releases on GitHub](https://github.com/buildkite/agent/releases)
1. Extract the files to a directory of your choice (for example, `C:\buildkite-agent`)
1. Edit `buildkite-agent.cfg` and add your [agent token](/docs/agent/self-hosted/tokens)
1. Run `buildkite-agent.exe start` from a command prompt

##### SSH key configuration

Copy or generate SSH keys into your `.ssh` directory. For example, typing the following into Git Bash generates a new private key which you can add to your source code host:

```bash
$ ssh-keygen -t rsa -b 4096 -C "build@myorg.com"
```

See the [Buildkite agent code access](/docs/agent/self-hosted/code-access) documentation for more details.
##### File locations

- Configuration: `C:\buildkite-agent\buildkite-agent.cfg`
- Agent Hooks: `C:\buildkite-agent\hooks`
- Builds: `C:\buildkite-agent\builds`
- SSH keys: `%USERPROFILE%\.ssh`

##### Configuration

The configuration file is located at `C:\buildkite-agent\buildkite-agent.cfg`. See the [configuration documentation](/docs/agent/self-hosted/configure) for an explanation of each configuration setting. There are two options to be aware of for this initial setup:

- Set your [agent token](/docs/agent/self-hosted/tokens), if you did not set it as an environment variable during installation.
- You may need to use the `shell` configuration option. On Windows, Buildkite defaults to using Batch. If you want to use PowerShell or PowerShell Core, you must point Buildkite to the correct shell. For example, to use PowerShell:

```cfg
# Provide the path to the PowerShell executable
shell="C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
```

> 📘
> Using PowerShell Core (PowerShell 6 or 7) causes unusual behavior around pipeline upload. Refer to [Defining steps: pipeline.yml file](/docs/pipelines/configure/defining-steps#step-defaults-pipeline-dot-yml-file) for details.

##### Upgrading

Rerun the install script.

##### Git for Windows

While the agent works without Git installed, you will need [Git for Windows](https://gitforwindows.org/) to run builds that interact with Git repositories.

> 📘
> Buildkite does not currently support using Git Bash to run Bash scripts as part of your pipeline. We recommend using CMD (default) or PowerShell 5.x. You can also use PowerShell Core, but be aware of the odd behavior around pipeline upload steps. Refer to [Defining steps: pipeline.yml file](/docs/pipelines/configure/defining-steps#step-defaults-pipeline-dot-yml-file) for more information.

##### Running as a service

The simplest way to run buildkite-agent as a service is to use a third-party tool like [nssm](https://nssm.cc/).
Once both nssm and the [Buildkite agent](#automated-install-with-powershell) have been installed, you can create the service that will run the Buildkite agent in either of the following ways.

Run the nssm GUI, then create and configure the Buildkite agent service manually:

```
nssm install buildkite-agent
```

Alternatively, create the Buildkite agent service with the following set of nssm commands, ensuring that the command prompt or PowerShell running these commands has administrator privileges:

```
# These commands assume you installed the agent using PowerShell
# Your paths may be different if you did a manual installation
nssm install buildkite-agent "C:\buildkite-agent\bin\buildkite-agent.exe" "start"
nssm set buildkite-agent AppParameters "start --queue=windows"
nssm set buildkite-agent AppStdout "C:\buildkite-agent\buildkite-agent.log"
nssm set buildkite-agent AppStderr "C:\buildkite-agent\buildkite-agent.log"
nssm status buildkite-agent
# Expected output: SERVICE_STOPPED
nssm start buildkite-agent
# Expected output: buildkite-agent: START: The operation completed successfully.
nssm status buildkite-agent
# Expected output: SERVICE_RUNNING
```

If you'd like to change the user the buildkite-agent service runs as, you can also use [nssm](https://nssm.cc/) on the command line:

```
nssm set buildkite-agent ObjectName "COMPUTER_NAME\ACCOUNT_NAME" "PASSWORD"
```

> 📘
> Ensure that this new user is a local admin on the system or has been granted all the necessary permissions to run the buildkite-agent service using nssm.

Replace the following:

- `COMPUTER_NAME`: The system name under **Settings**. For example, `PC`.
- `ACCOUNT_NAME`: The name of the account you'd like to use. For example, `Administrator`.
- `PASSWORD`: The password for the account you'd like to use. You can reference a variable rather than directly specifying the value.
##### Which user the agent runs as

On Windows, all commands run as the invoking user.

##### Installing Buildkite on Windows Subsystem for Linux 2

You can use Buildkite on Windows through WSL2, but it has limitations. At present (12 January 2022), hooks and plugins both have issues. We recommend using CMD (default) or PowerShell 5.x instead.

To install the agent on WSL2, follow the [generic Linux installation guide](/docs/agent/self-hosted/install/linux). Do not use the guides for Ubuntu, Debian, and so on, even if that is the Linux distro you are using with WSL2.

> 📘
> Using WSL2 causes unusual behavior during pipeline upload. Refer to [Defining steps: pipeline.yml file](/docs/pipelines/configure/defining-steps#step-defaults-pipeline-dot-yml-file) for details.

---

### Linux

URL: https://buildkite.com/docs/agent/self-hosted/install/linux

#### Installing Buildkite agent on Linux

You can install the Buildkite agent on most Linux-based systems; the installer script also works on macOS.

##### Installation

Run the following script ([view the source](https://raw.githubusercontent.com/buildkite/agent/main/install.sh)), which will download and install the correct binary for your system and architecture (you will need your [agent token](/docs/agent/self-hosted/tokens)):

```shell
TOKEN="INSERT-YOUR-AGENT-TOKEN-HERE" bash -c "`curl -sL https://raw.githubusercontent.com/buildkite/agent/main/install.sh`"
```

Then, start the agent:

```shell
~/.buildkite-agent/bin/buildkite-agent start
```

Alternatively, you can follow the [manual installation instructions](/docs/agent/self-hosted/install#manual-installation).

##### SSH key configuration

SSH keys should be copied to (or generated into) `~/.ssh/` for the user the agent is running as.
For example, to generate a new private key which you can add to your source code host: ```bash $ mkdir -p ~/.ssh && cd ~/.ssh $ ssh-keygen -t rsa -b 4096 -C "build@myorg.com" ``` See the [Buildkite agent code access](/docs/agent/self-hosted/code-access) documentation for more details. ##### File locations * Configuration: `~/.buildkite-agent/buildkite-agent.cfg` * Agent Hooks: `~/.buildkite-agent/hooks` * Builds: `~/.buildkite-agent/builds` * SSH keys: `~/.ssh` * Logs, depending on your system: - `journalctl -f -u buildkite-agent` (when started with `systemd`) - logs only go to stdout and do not persist (when started with `buildkite-agent start`) ##### Configuration The configuration file is located at `~/.buildkite-agent/buildkite-agent.cfg`. See the [configuration documentation](/docs/agent/self-hosted/configure) for an explanation of each configuration setting. ##### Which user the agent runs as When running an agent installed using the manual Linux installation method, all commands run as the invoking user. ##### Upgrading Rerun the install script. --- ### Docker URL: https://buildkite.com/docs/agent/self-hosted/install/docker #### Running Buildkite agent with Docker You can run the Buildkite agent inside a Docker container using the [official image on Docker Hub](https://hub.docker.com/r/buildkite/agent). >📘 Running each build in its own container > These instructions cover how to run the agent using Docker. If you want to learn how to isolate each build using Docker and any of our standard Linux-based installers read the [Containerized builds with Docker](/docs/pipelines/tutorials/docker-containerized-builds) guide. 
##### Running using Docker

Start an agent with the [official image](https://hub.docker.com/r/buildkite/agent/) based on Alpine Linux:

```shell
docker run -d -t --name buildkite-agent buildkite/agent:3 start --token ""
```

A much larger Ubuntu-based image is also available:

```shell
docker run -d -t --name buildkite-agent buildkite/agent:3-ubuntu start --token ""
```

> 🚧 Caveats for builds that need Docker access
> If your build jobs require Docker access, and you're passing through the Docker socket, you must ensure the build path is consistent between the Docker host and the agent container. See [Allowing builds to use Docker](#allowing-builds-to-use-docker) for more details.

##### Version tagging

The default tag (`buildkite/agent:latest`) will always point to the latest stable release, but we recommend you use `buildkite/agent:3` to prevent breaking changes. If you want to use an exact version, you can use the corresponding tag, such as `buildkite/agent:3.0.1`. See [Docker Hub](https://hub.docker.com/r/buildkite/agent/tags/) for a list of all the available versions.

##### Default file locations

* Configuration: `/buildkite/buildkite-agent.cfg`
* Agent Hooks: `/buildkite/hooks`
* Builds: `/buildkite/builds`
* Agent user home: `/root`

##### Configuration

Most [agent configuration settings](/docs/agent/configuration) can be set with environment variables. You can also mount in a configuration file, for example:

```bash
docker run \
  -v "/path/to/buildkite-agent.cfg:/buildkite/buildkite-agent.cfg:ro" \
  -d \
  -t \
  --name buildkite-agent \
  buildkite/agent:3 start --token ""
```

##### Which user the agent runs as

On Docker, the default user is `root`, unless you use `docker run --user`, or use a Kubernetes Pod security context to override the user.

##### Adding hooks

You can add [custom agent hooks](/docs/agent/hooks) by mounting or copying them into the `/buildkite/hooks` directory, and ensuring they are executable.
For example, this is how you'd mount the hooks directory using a read-only host volume: ```bash docker run \ -v "/path/to/buildkite-hooks:/buildkite/hooks:ro" \ -d \ -t \ --name buildkite-agent \ buildkite/agent:3 start --token "" ``` Alternatively, if you create your own image based off `buildkite/agent`, you can copy your hooks into the correct location: ```dockerfile FROM buildkite/agent:3 COPY hooks /buildkite/hooks/ ``` ##### Permissions errors when using Docker A problem you may encounter when using Docker volume mounts (-v) in Linux or Windows is that the container may create files on the host system with root user permissions. This can result in errors like the following: ``` $ git clean -fxdq warning: failed to remove dist/ ``` Permissions on the host are set based on the user running the Docker daemon, which under Linux is generally `root`. When the Agent (running as `buildkite-agent`) tries to subsequently remove or modify those files, permissions errors occur. To ensure correct file permissions, you can: * Change the way permissions are set on the files created by your Docker container: modify your container's `USER` or modify your build commands. * Configure [user namespace remapping](https://docs.docker.com/engine/security/userns-remap/) on your Docker host to ensure that container users are remapped to the same user running your `buildkite-agent`. * Run a script before or after builds that resets permissions. You can do this either using Docker (because it runs as root) or using `sudo`. See the Buildkite Elastic CI Stack for AWS's [fix-buildkite-agent-builds-permissions](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/v2.3.4/packer/conf/buildkite-agent/scripts/fix-buildkite-agent-builds-permissions) script or the [sudoers.conf](https://github.com/buildkite/elastic-ci-stack-for-aws/blob/v2.3.4/packer/conf/buildkite-agent/sudoers.conf) script for examples of using an agent hook and sudo command to reset permissions. 
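As a sketch of that last option, a hypothetical `pre-exit` agent hook could hand ownership of the checkout back to the user running the agent. The hook below is an assumption-laden sketch, not Buildkite's own tooling; adapt the `sudo`/sudoers arrangement to your hosts:

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical pre-exit hook: reset ownership of the build checkout so that a
# later `git clean` run by the agent user succeeds. In a real hook, prefix the
# chown with sudo (and allow it in sudoers) when containers have created
# root-owned files. The temp-dir fallback only lets the sketch run outside a build.
checkout="${BUILDKITE_BUILD_CHECKOUT_PATH:-$(mktemp -d)}"
chown -R "$(id -u):$(id -g)" "$checkout"
echo "reset ownership of $checkout"
```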
##### Allowing builds to use Docker

To use Docker and volume mounting from build scripts, you need to ensure that the builds directory and the `buildkite-agent` binary path are mounted in from the host machine, and that their paths on the host and in the agent container are the same.

For example, when a script runs the command `docker run --volume "$PWD:/code" ...`, the `$PWD` environment variable will resolve to the path in your agent's container (for example, `/var/lib/buildkite/builds/my-org/my-pipeline`). The Docker daemon, which exists on the host machine, will attempt to mount `/var/lib/buildkite/builds/my-org/my-pipeline` from the host file system, not the agent container file system. If that directory does not exist on the host, Docker will mount an empty directory to `/code` without showing any error.

The following example shows how to configure the agent container with the correct host volume mounts and `BUILDKITE_BUILD_PATH` configuration:

```bash
docker run \
  -v "/var/lib/buildkite/builds:/var/lib/buildkite/builds" \
  -v "/usr/local/bin/buildkite-agent:/usr/local/bin/buildkite-agent" \
  -v "/var/run/docker.sock:/var/run/docker.sock" \
  -e "BUILDKITE_BUILD_PATH=/var/lib/buildkite/builds" \
  -d \
  -t \
  --name buildkite-agent \
  buildkite/agent:3 start --token ""
```

> 🚧 Security considerations
> Providing builds with a Docker socket gives them access to whatever the Docker daemon has access to on the host system. Typically this is `root`, which means builds have full root system access. This can be mitigated somewhat with [user namespace remapping](https://docs.docker.com/engine/security/userns-remap/), but caution should still be exercised.

##### Exposing build secrets into the container

There are many approaches to exposing secrets to Docker containers. In addition, many Docker platforms have their own methods for exposing secrets.
If you're running your own Docker containers, we recommend using a read-only [host volume](https://docs.docker.com/engine/storage/volumes/#use-a-read-only-volume). The following example mounts a directory containing secrets on the host machine (`$HOME/buildkite-secrets`) into the container as a read-only data volume at `/buildkite-secrets`:

```bash
docker run \
  -v "/path/to/buildkite-secrets:/buildkite-secrets:ro" \
  -d \
  -t \
  --name buildkite-agent \
  buildkite/agent:3 start --token ""
```

If you've exposed pipeline secrets as environment variables, you can pass them through to the container using the `-e` option:

```bash
docker run \
  -e MY_SECRET_ENV \
  -d \
  -t \
  --name buildkite-agent \
  buildkite/agent:3 start --token ""
```

##### Docker Hub rate limits

If you're using Docker with Docker images hosted on Docker Hub, note that as of 2nd November 2020 there are [strict rate limits](/docs/pipelines/integrations/other/docker-hub) for image downloads.

##### Authenticating private git repositories

To configure a [git-credentials file](https://git-scm.com/docs/git-credential-store#_storage_format) located at `/buildkite-secrets/git-credentials`, you could use the following [`environment` hook](/docs/agent/hooks#job-lifecycle-hooks) mounted to `/buildkite/hooks/environment`:

```bash
#!/bin/bash
set -euo pipefail

git config --global credential.helper "store --file=/buildkite-secrets/git-credentials"

# You can export other secrets here too
# export FOO=bar
```

To configure a private SSH key located at `/buildkite-secrets/id_rsa_buildkite_git`, you could use the following [`environment` hook](/docs/agent/hooks#job-lifecycle-hooks) mounted to `/buildkite/hooks/environment`:

```bash
#!/bin/bash
set -euo pipefail

eval "$(ssh-agent -s)"
ssh-add -k /buildkite-secrets/id_rsa_buildkite_git

# You can export other secrets here too
# export FOO=bar
```

Other options for configuring Git and SSH include:

* Running `ssh-agent` on the host machine and mounting the ssh-agent
socket into the containers. See the [Buildkite agent code access](/docs/agent/self-hosted/code-access) documentation for examples on using ssh-agent.
* The least-secure approach: the built-in [docker-ssh-env-config](https://github.com/buildkite/docker-ssh-env-config) support allows you to pass in keys using environment variables.

##### Entrypoint customizations

The entrypoint uses `tini` to correctly pass signals to, and kill, sub-processes. Instead of redefining `ENTRYPOINT`, we recommend you copy executable scripts into `/docker-entrypoint.d/`. Executable scripts must not have a file extension, and are executed in alphanumeric order.

---

### Overview

URL: https://buildkite.com/docs/agent/self-hosted/configure

#### Buildkite agent configuration

Every agent installer comes with a configuration file. You can also customize many of the configuration values using environment variables.

##### Example configuration file

```sh
token="24db61df8338027652b24aadf82dd483b016eef98fcd332815"
name="my-app-%spawn"
tags="ci=true,docker=true"
git-clean-flags="-ffdqx"
debug=true
```

You can find the directory location of your configuration file in your platform's installation documentation. You can also set this location using the `--config` command line argument or the `BUILDKITE_AGENT_CONFIG` environment variable.

```sh
BUILDKITE_AGENT_CONFIG="/etc/buildkite-agent/custom-config-files-dir" buildkite-agent start
```

##### Configuration settings

###### Experimental features

Buildkite frequently introduces new experimental features to the agent, which can be enabled using the [`experiment` flag or the `$BUILDKITE_AGENT_EXPERIMENT` environment variable setting](#experiment). These features are not yet considered stable and may change or be removed in future versions of the agent.
Learn more about these experimental features in [Agent experiments](/docs/agent/self-hosted/configure/experiments).

##### Deprecated configuration settings

| Setting | Description |
| ------- | ----------- |
| `disconnect-after-job-timeout` | When `disconnect-after-job` is specified, the number of seconds to wait for a job before shutting down. Not to be confused with [default and maximum build timeouts](/docs/pipelines/configure/build-timeouts#command-timeouts). _Default:_ `120` _Environment variable:_ `BUILDKITE_AGENT_DISCONNECT_AFTER_JOB_TIMEOUT` |
| `meta-data` | Meta-data for the agent. _Default:_ `"queue=default"` _Environment variable:_ `BUILDKITE_AGENT_META_DATA` [Use instead: tags](#tags) |
| `meta-data-ec2` | Include the host's EC2 meta-data (instance-id, instance-type, and ami-id) as meta-data. _Default:_ `false` _Environment variable:_ `BUILDKITE_AGENT_META_DATA_EC2` [Use instead: tags-from-ec2](#tags-from-ec2) |
| `meta-data-ec2-tags` | Include the host's EC2 tags as meta-data. _Default:_ `false` _Environment variable:_ `BUILDKITE_AGENT_META_DATA_EC2_TAGS` [Use instead: tags-from-ec2-tags](#tags-from-ec2-tags) |
| `meta-data-gcp` | Include the host's GCP meta-data as meta-data. _Default:_ `false` _Environment variable:_ `BUILDKITE_AGENT_META_DATA_GCP` [Use instead: tags-from-gcp](#tags-from-gcp) |
| `no-automatic-ssh-fingerprint-verification` | Do not automatically verify SSH fingerprints for first-time checkouts. _Default:_ `false` _Environment variable:_ `BUILDKITE_NO_AUTOMATIC_SSH_FINGERPRINT_VERIFICATION` [Use instead: no-ssh-keyscan](#no-ssh-keyscan) |

##### Environment variables

Most configuration options can be specified as environment variables when starting the agent, for example:

```sh
BUILDKITE_AGENT_TAGS="queue=deploy,host=$(hostname)" buildkite-agent start
```

These variables cannot be modified through the Buildkite web interface, API, or pipeline uploads, for security reasons.
You may be able to modify some of the options, such as `BUILDKITE_GIT_CLONE_FLAGS`, from within [hooks](/docs/agent/hooks).

##### Agent naming

The following template variables are supported when configuring the agent name:

- `%hostname` - the agent machine's hostname
- `%spawn` - the spawn index number (1, 2, 3, and so on) when launching multiple agents per host
- `%random` - six random alphanumeric characters (`[a-zA-Z0-9]`)
- `%pid` - the agent's process ID

> 📘 Note
> If you're using `--spawn` to run multiple agents on a single host, it's recommended to use `%spawn` in your agent name to ensure that each agent running on the host using the same `build-path` has a unique agent name.

---

### Job dispatch

URL: https://buildkite.com/docs/agent/self-hosted/configure/job-dispatch

#### Job dispatch

By default, self-hosted agents poll the Buildkite API at regular intervals to check for available jobs. When a job is available, the agent accepts it and begins execution. The polling interval is set by Buildkite Pipelines (the Buildkite platform) during agent registration, and each poll includes random jitter to avoid multiple agents synchronizing their requests. This polling-based approach is reliable and works across all network configurations, but introduces latency between a job becoming available and an agent picking it up.

##### Streaming job dispatch

Streaming job dispatch reduces job acceptance latency by maintaining a persistent connection between the agent and Buildkite Pipelines. Instead of the agent periodically asking for work, Buildkite Pipelines pushes jobs to idle agents as soon as they become available. Streaming job dispatch is available from Buildkite agent version 3.122.0 and later.
To opt in to this feature, when [starting your self-hosted agent](/docs/agent/cli/reference/start), point your agent at the streaming endpoint using the [`--endpoint` option](/docs/agent/cli/reference/start#endpoint):

```bash
buildkite-agent start --endpoint https://agent-edge.buildkite.com/v3
```

You can also set this using the [`BUILDKITE_AGENT_ENDPOINT` environment variable](/docs/agent/self-hosted/configure#endpoint) or by adding `endpoint=https://agent-edge.buildkite.com/v3` to your `buildkite-agent.cfg` file.

The agent's [`--ping-mode` option](/docs/agent/cli/reference/start#ping-mode) controls the dispatch behavior:

- `auto` (the default when the `--ping-mode` option is omitted): Uses streaming when available, and falls back to polling if the streaming connection fails. This is the recommended option.
- `poll-only`: Uses the classic polling-based dispatch only. Specify this option if network issues prevent streaming from working effectively.
- `stream-only`: Uses streaming dispatch only, with no fallback. The agent stops if the streaming connection fails.

In `auto` mode, both the streaming and polling mechanisms run concurrently. The streaming connection takes priority when healthy, and the polling loop activates automatically if the streaming connection becomes unhealthy.

---

### Git mirrors

URL: https://buildkite.com/docs/agent/self-hosted/configure/git-mirrors

#### Git mirrors

Git mirrors allow you to create local copies of Git repositories on Buildkite agents. Git mirrors work by maintaining a single bare Git mirror for each repository on a host, shared among multiple agents and pipelines. Checkouts then reference this mirror using `git clone --reference`, as do submodules.

##### When to use Git mirrors

Git mirrors optimize performance and reduce network bandwidth, helping you minimize the time it takes to re-clone large repositories.
They're particularly useful in [self-hosted](/docs/pipelines/architecture#self-hosted-hybrid-architecture) build infrastructure where multiple agents are running.

Key benefits of Git mirrors:

- Speed up cloning by sharing objects across checkouts instead of fetching everything repeatedly.
- Reduce disk usage by storing common objects once in the mirror.
- Achieve faster builds with less time spent on Git operations.
- Maintain a complete copy of the repository on your infrastructure as a local backup.
- Handle large repositories more efficiently, especially those with many files and extensive history.

##### How Git mirrors work

Git mirrors leverage two core Git features:

- `git clone --mirror` creates a complete copy of the remote repository, including all branches and references. This differs from `git clone --bare` by capturing everything from the remote. This is crucial when the agent doesn't "know" which branch it needs to build ahead of time.
- `git clone --reference` creates checkouts that borrow objects from the mirror instead of fetching them from the remote. This saves both network bandwidth and disk space. One caveat is that checkouts depend on the mirror remaining healthy and available.

##### Setting up Git mirrors

Configure Git mirroring using the `--git-mirrors-path` flag on agents. This flag sets the central location (the Git mirror directory) where the agent stores mirrors.
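For example (the path below is illustrative, not prescriptive), you can point every agent on a host at a shared, agent-writable mirror directory when starting it:

```shell
# Store bare mirrors in one shared directory on the host
buildkite-agent start --git-mirrors-path /var/lib/buildkite/git-mirrors
```

The equivalent `buildkite-agent.cfg` line is `git-mirrors-path="/var/lib/buildkite/git-mirrors"`.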
Use these agent configuration options:

- [git-clone-mirror-flags](/docs/agent/self-hosted/configure#git-clone-mirror-flags)
- [git-mirrors-lock-timeout](/docs/agent/self-hosted/configure#git-mirrors-lock-timeout)
- [git-mirrors-path](/docs/agent/self-hosted/configure#git-mirrors-path)
- [git-mirrors-skip-update](/docs/agent/self-hosted/configure#git-mirrors-skip-update)
- [git-skip-fetch-existing-commits](/docs/agent/self-hosted/configure#git-skip-fetch-existing-commits)

##### Git submodules

Git submodules allow one repository to reference another, enabling code in the first repository to use contents from the second. When you enable mirrors, the agent also mirrors submodules. You can create and update submodule mirrors the same way as regular mirrors.

Submodule mirrors require special handling on the agent. The agent must update the submodule configuration in the main repository using `git submodule update --reference `. This must be done for each submodule, which means parsing the submodule configuration and then iterating over the submodules to update them.

##### Common issues with Git mirrors

This section covers common issues with Git mirrors and how to solve or prevent them.

###### Parallelism issues

When multiple agents fetch from the same mirror simultaneously, conflicts may occur. The Buildkite agent implements a locking system that prevents multiple agents from updating the same mirror simultaneously. This file-based lock works across multiple machines, even when the mirror directory is a network file share.

###### Checkout corruption

A mirror repository and a reference clone (checkout) work together. The checkout borrows most of its objects from the mirror instead of storing them locally. When `git fetch` updates the mirror, it automatically triggers maintenance that cleans up objects Git considers unnecessary. By default, `git fetch` runs `git maintenance run --auto`. However, some of those objects may actually be needed by the checkout.
If this happens, you'll see checkout errors and need to delete the checkout directory and re-clone (which is quick with mirrors enabled).

###### Updating mirrors

The command `git remote update` is commonly used to refresh mirrors because it updates all references. However, `git fetch origin ` is the preferred approach for most CI/CD use cases because it fetches only the objects a particular job requires. The Buildkite agent now uses `git fetch origin ` instead of `git remote update` for this reason. Remember that `git remote update` also runs auto maintenance that may cause the checkout corruption mentioned above.

---

### Experiments

URL: https://buildkite.com/docs/agent/self-hosted/configure/experiments

#### Agent experiments

Buildkite frequently introduces new experimental features to the agent. Use the [`--experiment` flag](/docs/agent/self-hosted/configure#experiment) to opt in to them and test them out:

```
buildkite-agent start --experiment experiment1 --experiment experiment2
```

Or you can set them in your [agent configuration file](/docs/agent/self-hosted/configure):

```
experiment="experiment1,experiment2"
```

If an experiment doesn't exist, no error will be raised.

> 🚧
> Please note that it is likely that these experiments will be removed or changed. Therefore, use them at your own risk, and without the expectation that they will work in the future.

##### Available experiments

###### Agent API

This exposes a local API for interacting with the agent process, with primitives that can be used to solve local concurrency problems (such as multiple agents handling some shared local resource). The API is exposed using a Unix domain socket. The path to the socket is not available using an environment variable - rather, there is a single (configurable) path on the system.

> 🛠
> To use this feature, set `experiment="agent-api"` in your [agent configuration](/docs/agent/self-hosted/configure#experiment).
###### Allow artifact path traversal

Uploaded artifacts include a relative path used by the artifact download tool to download the artifact to a suitable location relative to the destination path. In most circumstances the relative paths generated by `artifact upload` won't contain `..` components, and so will always be downloaded at or inside the destination path. However, it is possible to upload artifacts using glob patterns containing one or more `..` components, which may be preserved in the artifact path. It is also possible for a user to call the Agent REST API directly in order to upload artifacts with arbitrary paths.

Leaving this experiment disabled prevents `..` components in artifact paths from traversing up from the destination path. Enabling this experiment permits the less-secure behaviour of allowing artifact paths containing `..` to traverse up from the destination path.

For example, if an artifact was uploaded with the path `../../foo.txt`, then the command:

```shell
buildkite-agent artifact download '*.txt' .
```

has a different effect depending on this experiment:

- With `allow-artifact-path-traversal` disabled, `foo.txt` is downloaded to `./foo.txt`.
- With `allow-artifact-path-traversal` enabled, `foo.txt` is downloaded to `../../foo.txt`.

> 🛠
> To use this feature, set `experiment="allow-artifact-path-traversal"` in your [agent configuration](/docs/agent/self-hosted/configure#experiment).

###### Descending spawn priority

When using `--spawn` with `--spawn-with-priority`, the agent assigns ascending priorities to each spawned agent (1, 2, 3, ...). This experiment changes the priorities to be descending (-1, -2, -3, ...) instead. This helps jobs be distributed across all hosts in cases where the value of `--spawn` varies between hosts.

> 🛠
> To use this feature, set `experiment="descending-spawn-priority"` in your [agent configuration](/docs/agent/self-hosted/configure#experiment).
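The effect on a single host can be sketched as follows (a sketch, assuming the experiment is enabled via the command line rather than the configuration file):

```shell
# Spawns three agents. Without the experiment they register with
# priorities 1, 2, 3; with it enabled they register with -1, -2, -3,
# so hosts with a larger --spawn value don't always win job dispatch.
buildkite-agent start --spawn 3 --spawn-with-priority \
  --experiment descending-spawn-priority
```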
###### Interpolation prefers runtime env

When interpolating the pipeline-level environment block, a pipeline-level environment variable could take precedence over other environment variables, depending on the ordering. This may contravene Buildkite's [documentation](https://buildkite.com/docs/pipelines/environment-variables#environment-variable-precedence), which states that the job runtime environment takes precedence over environment variables defined in the pipeline. We previously made this the default behaviour of the agent (as of v3.63.0) but have since reverted it.

> 🛠
> To use this feature, set `experiment="interpolation-prefers-runtime-env"` in your [agent configuration](/docs/agent/self-hosted/configure#experiment).

###### Normalised upload paths

Artifacts found by `buildkite-agent artifact upload` will be uploaded using URI/Unix-style paths, even on Windows. This changes the URLs that artifacts uploaded from Windows agents are stored at, to ones that are URI-compatible. Artifact names displayed in Buildkite's web UI, as well as in the API, are changed by this.

Take `buildkite-agent artifact upload coverage\report.xml` as an example:

- By default, and without this experiment, this file is uploaded to `s3://example/coverage\report.xml`.
- With this experiment enabled, it would be `s3://example/coverage/report.xml`.

**Status**: a major improvement for Windows compatibility; we'd like this to be the standard behaviour in 4.0. 👍👍

> 🛠
> To use this feature, set `experiment="normalised-upload-paths"` in your [agent configuration](/docs/agent/self-hosted/configure#experiment).

###### Override zero exit on cancel

If the job is cancelled, and the exit status of the process is 0, it is overridden to be 1 instead. When cancelling a job, the agent signals the process, which typically causes it to exit with a non-zero status code. On Windows this is not true - the process exits with code 0 instead, which makes the job appear to be successful.
(It successfully exited, no?) By overriding the status to 1, a cancelled job should appear as a failure, regardless of the OS the agent is running on.

> 🛠
> To use this feature, set `experiment="override-zero-exit-on-cancel"` in your [agent configuration](/docs/agent/self-hosted/configure#experiment).

###### Propagate agent config vars

Prepends agent configuration variables (such as `BUILDKITE_GIT_*`, `BUILDKITE_SHELL`, `BUILDKITE_CANCEL_GRACE_PERIOD`, etc.) to the environment file used by the job runner. This is useful in environments like Docker where the agent configuration is not otherwise available to the job process.

> 🛠
> To use this feature, set `experiment="propagate-agent-config-vars"` in your [agent configuration](/docs/agent/self-hosted/configure#experiment).

###### PTY raw

Set PTY to raw mode, to avoid mapping LF (\n) to CR,LF (\r\n) in job command output. These extra newline characters are normally not noticed, but can make raw logs appear double-spaced in some circumstances. We run commands in a PTY mostly (entirely?) so that the program detects a PTY and behaves like it's running in a terminal, using ANSI escapes to provide colours, progress meters etc. But we don't need the PTY to modify the stream. (Or do we? That's why this is an experiment.)

> 🛠
> To use this feature, set `experiment="pty-raw"` in your [agent configuration](/docs/agent/self-hosted/configure#experiment).

###### Resolve commit after checkout

After repository checkout, resolve `BUILDKITE_COMMIT` to a commit hash. This makes `BUILDKITE_COMMIT` useful for builds triggered against non-commit-hash refs such as `HEAD`.

**Status**: broadly useful; we'd like this to be the standard behaviour in 4.0. 👍👍

> 🛠
> To use this feature, set `experiment="resolve-commit-after-checkout"` in your [agent configuration](/docs/agent/self-hosted/configure#experiment).

###### Zip plugins

Allows plugins to be downloaded as zip archives instead of being cloned from a Git repository.
This is useful for plugins hosted as zip files on HTTP(S) URLs.

> 🛠
> To use this feature, set `experiment="zip-plugins"` in your [agent configuration](/docs/agent/self-hosted/configure#experiment).

##### Promoted experiments

The following features started as experiments before being promoted to fully supported features. These features are now part of the Buildkite agent's default behavior, and no additional configuration is required to use them.

###### ANSI timestamps

Promoted in [v3.48.0](https://github.com/buildkite/agent/releases/tag/v3.48.0). Learn more about this feature in [ANSI timestamps and disabling them](/docs/pipelines/configure/managing-log-output#ansi-timestamps-and-disabling-them).

###### Avoid recursive trap

Promoted in [v3.66.0](https://github.com/buildkite/agent/releases/tag/v3.66.0).

###### Flock file locks

Promoted in [v3.48.0](https://github.com/buildkite/agent/releases/tag/v3.48.0). Learn more about this feature in [Flock file locks](/docs/agent/cli/reference/lock#flock-file-locks).

###### Git mirrors

Promoted in [v3.47.0](https://github.com/buildkite/agent/releases/tag/v3.47.0). Learn more about this feature in [Git mirrors](/docs/agent/self-hosted/configure/git-mirrors) and [Setting up Git mirrors](/docs/agent/self-hosted/configure/git-mirrors#setting-up-git-mirrors).

###### Inbuilt status page

Promoted in [v3.48.0](https://github.com/buildkite/agent/releases/tag/v3.48.0).

###### Isolated plugin checkout

Promoted in [v3.67.0](https://github.com/buildkite/agent/releases/tag/v3.67.0).

###### Job API

Promoted in [v3.64.0](https://github.com/buildkite/agent/releases/tag/v3.64.0). Learn more about this feature in [Internal job API](/docs/apis/agent-api/internal-job).

###### Kubernetes exec

Promoted in [v3.74.0](https://github.com/buildkite/agent/releases/tag/v3.74.0).

###### Polyglot hooks

Promoted in [v3.85.0](https://github.com/buildkite/agent/releases/tag/v3.85.0).
Learn more about this feature in [Polyglot hooks](/docs/agent/hooks#polyglot-hooks).

###### Use zzglob

Promoted in [v3.104.0](https://github.com/buildkite/agent/releases/tag/v3.104.0). Learn more about this feature in [Glob pattern syntax](/docs/pipelines/configure/glob-pattern-syntax).

---

### Pausing and resuming

URL: https://buildkite.com/docs/agent/self-hosted/pausing-and-resuming

#### Pause and resume an agent

You can _pause_ an agent to prevent jobs from the cluster's pipelines being dispatched to that particular agent. This is similar to [pausing and resuming queues](/docs/agent/queues/managing#pause-and-resume-a-queue), but applies to individual agents instead.

_Pausing_ an agent is a useful alternative to _stopping_ an agent, especially when resources are tied to the lifetime of the agent, such as a cloud instance configured to terminate when the agent exits. By pausing an agent, you can investigate problems in its environment more easily, without the worry of jobs being dispatched to it.

Pausing is also useful when performing maintenance on an agent's environment, where idleness would be preferred, especially for maintenance operations that would affect the reliability or speed of jobs if they ran at the same time. Some examples of maintenance operations that could benefit from pausing an agent include:

- pruning Docker caches
- emptying temporary directories
- updating code mirrors
- installing software updates
- compacting or vacuuming databases

> 📘 Pause timeouts
> A paused agent continues to consume resources even while it is not running any jobs. Since it could be undesirable to do this indefinitely, each pause has a timeout specified in minutes. The default timeout is 5 minutes.

With Buildkite agent v3.93 and later, a paused ephemeral agent also remains running after it would normally exit.
An _ephemeral_ agent is an agent started with any one of these flags:

- `--acquire-job`
- `--disconnect-after-job`
- `--disconnect-after-idle-timeout`

Pausing an ephemeral agent is useful for preventing ephemeral resources such as EC2 instances or Kubernetes pods from being automatically removed. This allows manually inspecting and diagnosing a failing agent's environment. An ephemeral agent that is paused but otherwise idle will exit once it is resumed.

> 📘 Paused agents and scaling
> The Agent Scaler component of Elastic CI Stack for AWS considers paused agents to be available for jobs, even though they are not. The stack will _not_ scale up extra instances to maintain capacity merely because an agent becomes paused.

To pause an agent:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster with the agent to pause.
1. On the **Queues** page, select the queue with the agent to pause.
1. On the queue's details page, select the agent to pause.
1. On the agent's details page, select **Pause Agent**.
1. Enter a timeout (in minutes) and an optional note, and select **Yes, pause this agent** to pause the agent. **Note:** Use this note to explain why you're pausing the agent. The note will be displayed on the agent's details page.

Jobs _already_ started by an agent that becomes paused will continue to run. New jobs that target the agent's queue will be dispatched to other agents in the queue, or wait.

To resume an agent:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster with the agent to resume.
1. On the **Queues** page, select the queue with the agent to resume.
1. On the queue's details page, select the agent to resume.
1. On the agent's details page, select **Yes, resume this agent**.

Jobs will resume being dispatched to the agent as usual, including any jobs waiting to run.
##### Using the REST API

To pause an agent (clustered or unclustered) using the [REST API](/docs/apis/rest-api), run the following example `curl` command:

```bash
curl -H "Authorization: Bearer ${TOKEN}" \
  -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/agents/{id}/pause" \
  -H "Content-Type: application/json" \
  -d '{
    "note": "A short note explaining why this agent is being paused",
    "timeout_in_minutes": 60
  }'
```

where:

- `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite.
- `{org.slug}` can be obtained:
    * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite.
    * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example:

        ```bash
        curl -H "Authorization: Bearer $TOKEN" \
          -X GET "https://api.buildkite.com/v2/organizations"
        ```

To resume an agent using the [REST API](/docs/apis/rest-api), run the following example `curl` command:

```bash
curl -H "Authorization: Bearer ${TOKEN}" \
  -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/agents/{id}/resume" \
  -H "Content-Type: application/json" \
  -d '{}'
```

##### Using the GraphQL API

To pause an agent (clustered or unclustered) using the [GraphQL API](/docs/apis/graphql-api), run the following example [mutation](/docs/apis/graphql/schemas/mutation/agentpause):

```graphql
mutation {
  agentPause(
    input: {
      id: "The GraphQL ID for the agent to pause"
      note: "Note explaining why the agent is being paused"
      timeoutInMinutes: 60
    }
  ) {
    agent {
      uuid
      paused
      pausedAt
      pausedBy {
        uuid
      }
      pausedNote
    }
  }
}
```

where the GraphQL ID for an agent can be found from an `agents` GraphQL query:

```graphql
query {
  organization(slug: "Your_org_slug") {
    agents(first: 10) {
      edges {
        node {
          id
        }
      }
    }
  }
}
```

To resume
an agent using the [GraphQL API](/docs/apis/graphql-api), run the following example [mutation](/docs/apis/graphql/schemas/mutation/agentresume):

```graphql
mutation {
  agentResume(
    input: {
      id: "The GraphQL ID for the agent to resume"
    }
  ) {
    agent {
      uuid
      paused
    }
  }
}
```

---

### Prioritization

URL: https://buildkite.com/docs/agent/self-hosted/prioritization

#### Buildkite agent prioritization

Agent prioritization controls how Buildkite assigns jobs to available agents. Understanding how the job dispatch system works helps you optimize your agent configuration for better performance and resource utilization.

##### Agent selection criteria

When Buildkite's job dispatch system is selecting an agent to process a job, the evaluation is based on several factors: the agent's priority, its success in running previous jobs, and job targeting constraints.

###### Priority-based selection

Agent priority is the primary factor in job assignment:

- Agents with higher priority values are assigned jobs before agents with lower priority values.
- Priority can be set to any integer value, with higher numbers indicating higher priority.
- Agents with the default priority of `null` are assigned jobs last.

###### Success-based preference

When selecting from a pool of agents of the same priority level, Buildkite's job dispatch favors agents that have most recently completed jobs successfully. This helps ensure jobs are assigned to more reliable agents and infrastructure. If the most successful agent is busy, the next most successful available agent is selected.

###### Job targeting constraints

Jobs can be targeted to specific agents using [agent tags](/docs/agent/cli/reference/start#setting-tags) that define queues and other capabilities.

##### Agent priority

You can configure agent priority in the agent configuration file, by using a command line flag, or through an environment variable.
###### Configuration file Set the priority in your agent configuration file: ```ini priority=5 ``` ###### Command line flag Use the `--priority` flag when starting the agent: ```bash buildkite-agent start --priority 5 ``` ###### Environment variable Set the priority using the `BUILDKITE_AGENT_PRIORITY` environment variable: ```bash BUILDKITE_AGENT_PRIORITY=5 buildkite-agent start ``` ##### Load balancing strategies Agent priority allows you to apply sophisticated load balancing strategies within your infrastructure. Here are a few example strategies you might choose to implement. ###### Common load balancing Distributing jobs evenly across multiple machines can be accomplished with the `--spawn-with-priority` command-line [option](/docs/agent/cli/reference/start#spawn-with-priority): **Machine A:** ```bash buildkite-agent start --tags "queue=ci-builds" --spawn 5 --spawn-with-priority ``` **Machine B:** ```bash buildkite-agent start --tags "queue=ci-builds" --spawn 5 --spawn-with-priority ``` **Machine C:** ```bash buildkite-agent start --tags "queue=ci-builds" --spawn 5 --spawn-with-priority ``` This configuration launches 5 agents on each machine (a total of 15 agents) that handle scheduled jobs in the `ci-builds` queue. The `--spawn-with-priority` option launches each agent with a priority equal to its spawn index, so jobs are distributed evenly across the agents running on all machines. ###### Resource-based prioritization If your environment has a mix of hardware capabilities, you can adjust agent priority to ensure jobs are assigned to your most capable hardware first.
Here is how to prioritize jobs to agents with the highest hardware capabilities: ```bash # High-performance agents running on larger hardware for intensive jobs buildkite-agent start --priority 16 --tags "queue=ci-builds,performance=high,cpu=16-core" # Standard agents running on standard hardware for regular jobs buildkite-agent start --priority 8 --tags "queue=ci-builds,performance=standard,cpu=8-core" # Lightweight agents running on smaller hardware for simple tasks buildkite-agent start --priority 4 --tags "queue=ci-builds,performance=basic,cpu=4-core" ``` This configuration schedules jobs in the `ci-builds` queue onto larger hardware first, but still allows users to target jobs to a specific agent's hardware using [tags](/docs/pipelines/configure/defining-steps#targeting-specific-agents). ###### Spillover strategy Spillover is an advanced strategy that greatly increases overall resource utilization on your self-hosted infrastructure. It is applied by configuring agents with overlapping capabilities that can handle multiple job types based on priority and availability, while also leveraging job priorities to ensure higher priority jobs are always dispatched first.
###### Agent configuration for spillover strategy Set up agents with overlapping tags where some agents can handle multiple job types: **Dedicated release agents (higher priority):** ```bash buildkite-agent start --spawn 3 --priority 5 --tags "queue=ci-builds,build_type=release" ``` **Flexible agents (lower priority, multiple capabilities):** ```bash buildkite-agent start --spawn 5 --priority 1 --tags "queue=ci-builds,build_type=normal,build_type=release" ``` ###### Pipeline configuration for spillover strategy Configure your pipelines with higher priority jobs for "release" steps, while also targeting specific agent tags: **High-priority release builds:** ```yaml steps: - command: "make release" priority: 2 agents: queue: "ci-builds" build_type: "release" ``` **Regular-priority development builds:** ```yaml steps: - command: "make test" priority: 1 agents: queue: "ci-builds" build_type: "normal" ``` ###### How spillover strategy works The configuration described in the previous section creates a spillover system that operates as follows: 1. High-priority "release" jobs are handled by dedicated `build_type=release` agents first. 1. When these dedicated agents are all busy, "release" jobs can spill over to flexible agents that have agent tags for both `build_type=normal` and `build_type=release`. 1. Higher priority "release" jobs will always be processed before lower priority "normal" jobs, regardless of which jobs were created first. 1. Flexible agents return to handling "normal" jobs when there is sufficient dedicated agent capacity for high-priority "release" jobs. ##### Retry agent affinity When a job fails on a [self-hosted queue](/docs/agent/queues/managing#create-a-self-hosted-queue) and you retry it, Buildkite Pipelines will (by default) retry the job on the agent that has most recently finished a job. Often, this can be the agent that ran the job that originally failed.
There may be scenarios where you might want to retry the job on a different agent, such as a [flaky test](/docs/test-engine/glossary#flaky-test), where environment settings that could have caused the job to fail are unlikely to be present. Therefore, you can configure your self-hosted queue to instead retry the job on a different agent, where such an agent is available. This type of configuration is known as _agent affinity_, which has the following settings: - **Prefer Warmest Agent**: The default setting, where jobs are retried on the agent that most recently finished a job (that is, the _warmest_ agent). - **Prefer Different Agent**: Retry jobs on any agent different from the one that ran the previous attempt, if one is available. If no other agents are available, the job will be retried on the warmest agent. It is also possible to configure a self-hosted queue's retry agent affinity setting when [updating the queue using the REST API](/docs/apis/rest-api/clusters/queues#update-a-queue). --- ### Overview URL: https://buildkite.com/docs/agent/self-hosted/monitoring-and-observability #### Monitoring and observability By default, the Buildkite agent is only observable either through Buildkite or through log output on the host. For help choosing between the different monitoring approaches available across Buildkite Pipelines, see the [monitoring and observability decision matrix](/docs/pipelines/best-practices/monitoring-and-observability#getting-metrics-out-of-buildkite-pipelines-decision-matrix). The default observability options are: - **Job logs:** Relate to the jobs the agent runs. These are uploaded to Buildkite and shown for each step in a build. - **Agent logs:** Relate to how the agent itself is running. These are not uploaded or saved (except where the output from the agent is read or redirected by another process, such as [systemd] or [launchd]).
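For example, on a Linux host where the agent runs under systemd, the agent logs (as distinct from job logs) can be followed through the journal. This sketch assumes the standard packaged install's `buildkite-agent` unit name:

```shell
# Follow the agent's own logs via the systemd journal
# (assumes the agent was installed as the "buildkite-agent" systemd service)
journalctl -u buildkite-agent -f
```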
##### Health checking, metrics, and status page The agent can optionally run an HTTP service that describes the agent's state. The service is suitable for both automated health checks and human inspection. You can enable the service with the `--health-check-addr` flag or `$BUILDKITE_AGENT_HEALTH_CHECK_ADDR` environment variable. For example, to enable the service listening on local port 3901, you can use: ```shell buildkite-agent start --health-check-addr=:3901 ``` The flag expects [a "host:port" address](https://pkg.go.dev/net#Dial). Passing `:0` allows the agent to choose a port, which will be logged at startup. For security reasons, we recommend that you do _not_ expose the service directly to the internet. While there should be no ability to manipulate the agent state using this service, it may expose information, or provide a vector for a denial-of-service attack. We may also add new features to the service in the future. ###### Health checking service routes The URL paths available from the health checking service are as follows: - **`/`**: Returns HTTP status 200 with the text `OK: Buildkite agent is running`. - **`/agent/(worker number)`**: Reports the time since the agent worker last sent a successful heartbeat. Workers are numbered starting from 1, and the number of workers is set with the `--spawn` flag. If the previous heartbeat for this worker failed, it returns HTTP status 500 and a description of the failure. Otherwise, it returns HTTP status 200. - **`/metrics`**: (Added in Buildkite agent version 3.113.0) [Prometheus plain-text metrics](https://prometheus.io/docs/instrumenting/exposition_formats/) describing agent behaviour over time. - **`/status`**: A human-friendly page detailing various systems inside the agent. To aid debugging, this page does _not_ automatically refresh—it shows the status of each internal component of the agent at a particular moment in time. 
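For example, with the service listening on port 3901 as above, a load balancer or orchestrator health probe could poll these routes (a sketch that assumes an agent running locally with the health check service enabled):

```shell
# Overall liveness: HTTP 200 with "OK: Buildkite agent is running"
curl -fsS http://localhost:3901/

# Heartbeat health of the first agent worker (workers are numbered from 1);
# returns HTTP 500 and a failure description if the last heartbeat failed
curl -fsS http://localhost:3901/agent/1
```

The `-f` flag makes `curl` exit non-zero on HTTP error responses, which suits scripted health checks.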
The following shows the `/status` page for an agent: [Image: status-page.png] ###### Prometheus metrics reference Prometheus metrics were added to the health-checking service in Buildkite agent version 3.113.0. Metric | Type | Description --- | --- | --- `buildkite_agent_jobs_ended_total` | Counter | Count of jobs that ended in any way for any reason `buildkite_agent_jobs_started_total` | Counter | Count of jobs started `buildkite_agent_logs_bytes_uploaded_total` | Counter | Count of log bytes uploaded `buildkite_agent_logs_bytes_uploads_errored_total` | Counter | Count of log bytes that were not uploaded due to an error `buildkite_agent_logs_chunk_uploads_errored_total` | Counter | Count of log chunks that were not uploaded due to an error `buildkite_agent_logs_chunks_uploaded_total` | Counter | Count of log chunks uploaded `buildkite_agent_logs_upload_duration_seconds_total` | Histogram | Time taken to upload log chunks `buildkite_agent_pings_actions_total` | Counter | Count of actions taken following a ping, by `action` `buildkite_agent_pings_duration_seconds_total` | Histogram | Time taken to ping (the API call, not including the subsequent action) `buildkite_agent_pings_errors_total` | Counter | Count of pings that failed due to an error `buildkite_agent_pings_sent_total` | Counter | Count of pings sent `buildkite_agent_pings_wait_duration_seconds_total` | Histogram | Time spent waiting prior to each ping (ping interval plus jitter) `buildkite_agent_workers_ended_total` | Counter | Count of agent workers (i.e. `--spawn` flag) that have stopped running `buildkite_agent_workers_started_total` | Counter | Count of agent workers (i.e. `--spawn` flag) that have started running To send the Prometheus metrics to Datadog, configure the [Datadog Agent's OpenMetrics integration](https://docs.datadoghq.com/integrations/openmetrics/) to scrape the `/metrics` endpoint. 
For example, with the health check service listening on port 3901, you will need to add the following to your Datadog Agent's `openmetrics.d/conf.yaml`: ```yaml instances: - openmetrics_endpoint: "http://localhost:3901/metrics" namespace: "buildkite_agent" metrics: - "buildkite_agent_*" ``` A count of currently-running agent workers can be found by subtracting `ended_total` from `started_total`: ```promql sum(buildkite_agent_workers_started_total - buildkite_agent_workers_ended_total) ``` Similarly, a count of currently-running jobs can be found using the same method: ```promql sum(buildkite_agent_jobs_started_total - buildkite_agent_jobs_ended_total) ``` As all counter and histogram metrics are cumulative, information such as job or log throughput can be found using functions such as `rate`: ```promql # Throughput of jobs started over 5m interval sum(rate(buildkite_agent_jobs_started_total[5m])) # Throughput of log bytes uploaded over 5m interval sum(rate(buildkite_agent_logs_bytes_uploaded_total[5m])) ``` ##### Datadog metrics The Buildkite agent supports sending job duration metrics directly to Datadog through [DogStatsD](https://docs.datadoghq.com/extend/dogstatsd/). These metrics track job success counts and timing and are separate from the [Prometheus metrics](/docs/agent/self-hosted/monitoring-and-observability#health-checking-metrics-and-status-page-prometheus-metrics-reference) exposed on the `/metrics` endpoint. To send Prometheus metrics such as `buildkite_agent_workers_started_total` to Datadog, use the [OpenMetrics integration approach described above](/docs/agent/self-hosted/monitoring-and-observability#health-checking-metrics-and-status-page-prometheus-metrics-reference). To enable Datadog metrics, start the agent with the `--metrics-datadog` option or set `metrics-datadog=true` in the agent's configuration file. The agent sends metrics to a DogStatsD server, which is bundled with the [Datadog Agent](https://docs.datadoghq.com/extend/dogstatsd/).
```shell buildkite-agent start --metrics-datadog ``` Additional configuration options: Option | Description ----------------------------------- | ----------- `--metrics-datadog-host` | The DogStatsD instance to send metrics to using UDP. _Environment variable:_ `BUILDKITE_METRICS_DATADOG_HOST` _Default:_ `127.0.0.1:8125` `--metrics-datadog-distributions` | Use [Datadog Distributions](https://docs.datadoghq.com/metrics/types/?tab=distribution#metric-types) for timing metrics. This is recommended when running multiple agents to prevent metrics from multiple agents from being rolled up and appearing to have the same value. _Environment variable:_ `BUILDKITE_METRICS_DATADOG_DISTRIBUTIONS` _Default:_ `false` Once enabled, the agent will generate the following metrics (duration measured in milliseconds): - `buildkite.jobs.success` - `buildkite.jobs.duration.success.avg` - `buildkite.jobs.duration.success.max` - `buildkite.jobs.duration.success.count` - `buildkite.jobs.duration.success.median` - `buildkite.jobs.duration.success.95percentile` For organization-level queue and agent metrics in Datadog (such as scheduled jobs count, idle agents, and busy agent percentage), use the [buildkite-agent-metrics CLI](/docs/agent/self-hosted/monitoring-and-observability#buildkite-agent-metrics-cli-sending-metrics-to-datadog) with the StatsD backend. ##### Buildkite agent metrics CLI The [buildkite-agent-metrics](https://github.com/buildkite/buildkite-agent-metrics) tool is a standalone command-line binary that collects agent and job metrics from the [`metrics` endpoint of the Buildkite agent API](/docs/apis/agent-api/metrics) and publishes these metrics to a monitoring and observability backend of your choice. This tool is particularly useful for enabling autoscaling based on queue depth and agent availability. 
The tool supports the following backends: - [AWS CloudWatch](https://aws.amazon.com/cloudwatch/) (default) - [StatsD](https://github.com/etsy/statsd) (including Datadog-compatible tagging) - [Prometheus](https://prometheus.io) - [Google Cloud Monitoring](https://cloud.google.com/monitoring) - [New Relic](https://newrelic.com/products/insights) - [OpenTelemetry](https://opentelemetry.io) ###### Installing Download the latest binary from [GitHub Releases](https://github.com/buildkite/buildkite-agent-metrics/releases), or run it as a Docker container: ```shell docker run --rm public.ecr.aws/buildkite/agent-metrics:latest \ -token "$BUILDKITE_AGENT_TOKEN" \ -interval 30s \ -queue my-queue ``` You can also install from source using Go: ```shell go install github.com/buildkite/buildkite-agent-metrics/v5@latest ``` ###### Running The tool requires an [agent token](/docs/agent/self-hosted/tokens), which could be the same one used when [assigning the self-hosted agent to a queue](/docs/agent/queues#assigning-a-self-hosted-agent-to-a-queue), or another agent token configured within the same [cluster](/docs/pipelines/security/clusters). The simplest deployment runs it as a long-running daemon that collects metrics across all queues in an organization: ```shell buildkite-agent-metrics -token "$BUILDKITE_AGENT_TOKEN" -interval 30s ``` To restrict collection to specific queues, use the `-queue` flag (repeatable): ```shell buildkite-agent-metrics -token "$BUILDKITE_AGENT_TOKEN" -interval 30s -queue my-queue ``` To select a backend, use the `-backend` flag: ```shell buildkite-agent-metrics -token "$BUILDKITE_AGENT_TOKEN" -interval 30s -backend statsd ``` ###### Collected metrics The tool collects metrics per organization and per queue, including counts of scheduled, running, and unfinished jobs, as well as idle, busy, and total agent counts and the busy agent percentage. For the full list, see the [buildkite-agent-metrics README](https://github.com/buildkite/buildkite-agent-metrics?tab=readme-ov-file#buildkite-agent-metrics). ###### Sending metrics to Datadog To send organization-level queue and agent metrics to Datadog, use the StatsD backend with the `-statsd-tags` flag.
The metrics will be sent to a [DogStatsD](https://docs.datadoghq.com/extend/dogstatsd/) server (bundled with the Datadog Agent), which forwards them to Datadog with queue-level tagging: ```shell buildkite-agent-metrics \ -token "$BUILDKITE_AGENT_TOKEN" \ -interval 30s \ -backend statsd \ -statsd-host "127.0.0.1:8125" \ -statsd-tags ``` The `-statsd-tags` flag enables Datadog-compatible tagging, so metrics are tagged by `queue` rather than including the queue name in the metric name. This allows you to filter and group metrics by queue in Datadog dashboards. > 📘 Ensure DogStatsD is running > The Datadog Agent includes a DogStatsD server that listens on UDP port 8125 by default. Before starting the metrics collector, verify that the Datadog Agent is running and DogStatsD is enabled. For setup details, see the [DogStatsD documentation](https://docs.datadoghq.com/extend/dogstatsd/). For more details on configuration options, AWS Lambda deployment, and backend-specific settings, see the [buildkite-agent-metrics README](https://github.com/buildkite/buildkite-agent-metrics?tab=readme-ov-file#buildkite-agent-metrics). ##### Tracing For Datadog APM or OpenTelemetry tracing, see [Tracing in the Buildkite agent](/docs/agent/self-hosted/monitoring-and-observability/tracing). [systemd]: https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html [launchd]: https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingLaunchdJobs.html --- ### Tracing URL: https://buildkite.com/docs/agent/self-hosted/monitoring-and-observability/tracing #### Tracing in the Buildkite agent Distributed tracing tools like [Datadog APM](https://www.datadoghq.com/product/apm/) or [OpenTelemetry](https://opentelemetry.io/) tracing allow you to gain more insight into the performance of your CI runs - what's fast, what's slow, what could be optimized, and more importantly, how these things are changing over time. 
The Buildkite agent currently supports the two tracing backends listed above, Datadog APM (using OpenTracing) and OpenTelemetry. This doc will guide you through setting up tracing using either of these backends. ##### Using Datadog APM If you are looking to use Datadog's Application Performance Monitoring (APM) tracing with a Buildkite agent, see the [Using Datadog APM](/docs/pipelines/integrations/observability/datadog#using-datadog-apm) section of Buildkite Pipelines' [Datadog integration](/docs/pipelines/integrations/observability/datadog) documentation. ##### Using OpenTelemetry tracing Before starting the Buildkite agent, install and configure an OpenTelemetry Collector. Learn more about this from OpenTelemetry's [Install the Collector](https://opentelemetry.io/docs/collector/installation/) page of their documentation. Once the Collector is up and running, start the Buildkite agent with: ```bash buildkite-agent start --tracing-backend opentelemetry ``` This will enable OpenTelemetry tracing, and start sending traces to an OpenTelemetry Collector. The Buildkite agent's OpenTelemetry implementation uses the OTLP gRPC exporter to export trace information. This means that there must be a Collector capable of ingesting OTLP gRPC traces accessible by the Buildkite agent. By default, the Buildkite agent will export trace information to `https://localhost:4317`, but this can be overridden by passing in an environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` containing an updated endpoint for the Collector when the agent is started. Once traces are being sent, you can view the internal state of the Collector by visiting the TraceZ debug interface: `http://localhost:55679/debug/tracez` This interface shows active and sampled spans and is helpful for troubleshooting your OpenTelemetry trace pipeline.
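For example, to point the agent at a Collector that isn't on the default endpoint (the hostname below is an illustrative placeholder, not from the Buildkite documentation):

```shell
# Export traces over OTLP gRPC to a non-default Collector endpoint.
# "otel-collector.internal" is an assumed internal hostname.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel-collector.internal:4317"
buildkite-agent start --tracing-backend opentelemetry
```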
> 📘 Note on OTLP protocol > The Buildkite agent defaults to the `grpc` transport for OpenTelemetry, but this can be overridden by setting the `OTEL_EXPORTER_OTLP_PROTOCOL` environment variable to `http/protobuf` on [`v3.101.0`](https://github.com/buildkite/agent/releases/tag/v3.101.0) or later versions of the Buildkite agent. To set the OpenTelemetry service name, provide the `--tracing-service-name` flag, for example `--tracing-service-name example-buildkite-agent`. The default service name when not specified is `buildkite-agent`. If using the OpenTelemetry Tracing Notification Service, you can provide the `--tracing-propagate-traceparent` flag to propagate traces from the Buildkite control plane through to your agent trace spans. Learn more about configuring the OpenTelemetry integration with Buildkite Pipelines from the [OpenTelemetry](/docs/pipelines/integrations/observability/opentelemetry) integrations page. ###### Trace context propagation Starting from Buildkite agent version [v3.100](https://github.com/buildkite/agent/releases/tag/v3.100.0), when a Buildkite agent executes a command (build script, hook, plugin, and so on), the current trace context is automatically propagated to the child process via [environment variables](/docs/pipelines/configure/environment-variables). This enables distributed tracing across job boundaries, and your build scripts can continue the trace started by the agent or the Buildkite Pipelines backend. The agent serializes the trace context into multiple formats for compatibility with various tracing libraries: | Environment Variable | Format | |---------------------|--------| | TRACEPARENT, TRACESTATE | W3C Trace Context | | UBER_TRACE_ID | Jaeger | | X_B3_TRACEID, X_B3_SPANID, X_B3_SAMPLED | Zipkin B3 | | X_AMZN_TRACE_ID | AWS X-Ray | The environment variable names follow the [OpenTelemetry Environment Variable Carriers specification](https://opentelemetry.io/docs/specs/otel/context/env-carriers/).
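As a minimal sketch, a build script can verify that the agent has propagated a trace context before handing it to a tracing library (the agent exports these variables from v3.100 onwards; the messages below are illustrative):

```shell
#!/bin/bash
set -euo pipefail

# Inspect the trace context the agent propagates to child processes.
# TRACEPARENT follows the W3C format: version-traceid-spanid-flags.
if [ -n "${TRACEPARENT-}" ]
then
  echo "W3C trace context: ${TRACEPARENT}"
else
  echo "No trace context propagated (tracing disabled or agent < v3.100)"
fi
```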
To continue the trace in your build script, configure your tracing library to extract context from the environment variables. For example, with the OpenTelemetry SDK, you can read the `TRACEPARENT` variable and create a child span that links back to the agent's span. ###### Sending OpenTelemetry traces to Honeycomb To send traces to [Honeycomb](https://www.honeycomb.io/), in addition to starting the Buildkite agent with the `--tracing-backend opentelemetry` option, you also need to add the following environment variables. Replace the value in `OTEL_EXPORTER_OTLP_HEADERS` below with the API key provided by Honeycomb. ```bash # this is the same as --tracing-backend opentelemetry export BUILDKITE_TRACING_BACKEND="opentelemetry" # service name is configurable export OTEL_SERVICE_NAME="buildkite-agent" # use the agent's default gRPC transport export OTEL_EXPORTER_OTLP_PROTOCOL="grpc" # the gRPC transport requires a port to be specified in the URL export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io:443" # authentication of traces is done via the API key in this header export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=xxxxx" ``` --- ### Overview URL: https://buildkite.com/docs/agent/self-hosted/security #### Buildkite agent security When a Buildkite agent is being deployed into a sensitive environment, there are several default settings you can adjust, and techniques you can apply, to harden it. ##### Securely storing secrets For best practices and recommendations about secret storage in the Agent, see [Managing pipeline secrets](/docs/pipelines/security/secrets/managing). Learn more about other secrets management approaches in Buildkite on the [Secrets overview](/docs/pipelines/security/secrets) page. ##### Set the agent token expiration date For secure and automated agent token lifecycle management, you can use Buildkite's APIs to set the expiration date for agent tokens.
Learn more about this feature in [Agent token lifetime](/docs/agent/self-hosted/tokens#agent-token-lifetime). This feature allows for automated token rotation for long-lived tokens. Once set, an agent token's expiration date cannot be changed. ##### Disable automatic ssh-keyscan By default, the agent automatically accepts the Git SSH host's key using the `ssh-keyscan` command when performing the first checkout on a new agent host. The agent runs a command similar to: ```bash ssh-keyscan "" >> "~/.ssh/known_hosts" ``` If you choose to disable this functionality, you'll need to manually perform your first checkout, or ensure the SSH fingerprint of your source code host is already present on your build machine. Automatic ssh-keyscan can be disabled by setting [`no-ssh-keyscan`](/docs/agent/self-hosted/configure#no-ssh-keyscan): - Environment variable: `BUILDKITE_NO_SSH_KEYSCAN=true` - Command line flag: `--no-ssh-keyscan` - Configuration setting: `no-ssh-keyscan=true` ##### Restrict access by the Buildkite agent controller To safeguard your organization's infrastructure in case Buildkite's infrastructure is ever compromised, restrict what the agent can and cannot do.
All of the information above about securing the Buildkite agent applies, specifically: - Disallow execution of arbitrary commands by [customizing the list of allowed plugins](#restrict-access-by-the-buildkite-agent-controller-allow-a-list-of-plugins) or [disabling plugins](#restrict-access-by-the-buildkite-agent-controller-disable-plugins) entirely - [Disable command evaluation](#restrict-access-by-the-buildkite-agent-controller-disable-command-evaluation) - [Disable local hooks](#restrict-access-by-the-buildkite-agent-controller-disable-local-hooks) - [Enable strict validation](#restrict-access-by-the-buildkite-agent-controller-strict-checks-using-a-pre-bootstrap-hook) As a result, your Buildkite agent will refuse to run anything that's not a single argumentless invocation of a script that exists locally (after the `git clone` step of the setup) unless it's explicitly allowed by you. Since the [agent](https://github.com/buildkite/agent) is open source, if necessary you can verify that assertion to whatever degree of certainty is required. ###### Allow a list of plugins By defining an [environment hook](/docs/agent/hooks#job-lifecycle-hooks) in the [agent `hooks-path`](/docs/agent/hooks#hook-locations-agent-hooks), you can create a list of plugins that an agent is allowed to run by inspecting the `BUILDKITE_PLUGINS` [environment variable](/docs/pipelines/configure/environment-variables). For an example of this, see the [buildkite/buildkite-allowed-plugins-hook-example](https://github.com/buildkite/buildkite-allowed-plugins-hook-example) repository on GitHub. ###### Disable plugins As plugins execute in the same way as local hooks, they can pose a potential security risk. If you're using third-party plugins, you'll be executing the third party's code on your agent. You can disable plugins with the `--no-plugins` command line flag or the [`no-plugins`](/docs/agent/self-hosted/configure#no-plugins) setting.
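As a minimal sketch of the allow-list approach described above (the `my-org` organization name and the pattern-based extraction are illustrative assumptions; `BUILDKITE_PLUGINS` is a JSON array, so a production hook should parse it properly, as the example repository does):

```shell
#!/bin/bash
set -euo pipefail

# Environment hook sketch: fail the job unless every plugin source in
# BUILDKITE_PLUGINS comes from a trusted GitHub organization.
# "github.com/my-org/" is an assumed trusted prefix, not from the docs.
allowed_prefix="github.com/my-org/"

# Extract the plugin references from the raw JSON value (stops at quotes
# or the "#version" separator); empty when no plugins are requested.
refs="$(echo "${BUILDKITE_PLUGINS-}" | grep -o 'github\.com/[^"#]*' || true)"

for ref in $refs
do
  case "$ref" in
    "${allowed_prefix}"*) ;;   # trusted organization: allow
    *)
      echo "Plugin not allowed: $ref" >&2
      exit 1
      ;;
  esac
done
```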
If you still want to use plugins, you can check out a tool for [signing pipelines](/docs/agent/self-hosted/security#sign-your-pipelines). ###### Disable command evaluation By default, the agent allows you to run any command on the build server (for example, `make test`). You can disable command evaluation and allow only the execution of scripts (with no ability to pass command line flags). Disabling command evaluation will also disable plugin execution. Once disabled, your build steps will need to be checked into your repository as scripts, and the only way to pass arguments is using environment variables. This option is intended to protect your infrastructure from a scenario where Buildkite itself gets compromised, and subsequently sends malicious commands to your agents. It is neither designed for, nor effective at, protecting against malicious actors with commit access to your repositories. Command evaluation can be disabled by setting [`no-command-eval`](/docs/agent/self-hosted/configure#no-command-eval): - Environment variable: `BUILDKITE_NO_COMMAND_EVAL=1` - Command line flag: `--no-command-eval` - Configuration setting: `no-command-eval=true` > 🚧 Custom hooks and environment variables > If you have a custom `command` hook, using `no-command-eval` will have no effect on your command execution. See [Allowing a list of plugins](#restrict-access-by-the-buildkite-agent-controller-allow-a-list-of-plugins) and [Custom bootstrap scripts](#customize-the-bootstrap) for examples of how to completely lock down your agent from arbitrary code execution. > > Using `no-command-eval` only prevents command evaluation by the agent itself. Other programs such as build or test tools that run during the job could be influenced into executing arbitrary commands via environment variables (for example, `BASH_ENV` or `GIT_SSH_COMMAND`).
See [Strict checks using a pre-bootstrap hook](#restrict-access-by-the-buildkite-agent-controller-strict-checks-using-a-pre-bootstrap-hook) and [`enable-environment-variable-allowlist`](/docs/agent/cli/reference/start#enable-environment-variable-allowlist) for possible approaches to filtering environment variables. ###### Disable local hooks Local hooks are hooks defined in a pipeline's repository. If you have enforced a security policy using agent hooks (for example, you force commands to run within a Docker container or a chroot environment), you should also disable local hooks so that your security measures cannot be evaded with local hooks. Disabling local hooks also disables plugins from all sources. You can disable local hooks with the [`no-local-hooks`](/docs/agent/self-hosted/configure#no-local-hooks) setting. If local hooks are disabled and one is present in the checkout, the job will fail. > 🚧 Building untrusted commits > If you build untrusted commits, be careful to contain the build scripts and anything else that may be influenced by the repository contents within chroots, containers, VMs, or similar, as appropriate for your needs. ###### Strict checks using a pre-bootstrap hook You can use a [`pre-bootstrap` hook](/docs/agent/hooks#job-lifecycle-hooks) to add strict checks for which repositories, commands, and plugins are allowed to run on your agent. The `pre-bootstrap` hook is executed before any source code is checked out, and before any commands are executed.
For example, the following `pre-bootstrap` hook allows only a single file from a single repository to be executed by the agent: ```bash #!/bin/bash set -euo pipefail while IFS= read -r line do repo="$(echo "${line}" | cut -d= -f2 | sed -e 's/^"//' -e 's/"$//')" if [ "${repo}" != "git@server:repo.git" ] then echo "Repository not allowed: ${repo}" exit 1 fi done < <(grep "^BUILDKITE_REPO=" "${BUILDKITE_ENV_FILE}") while IFS= read -r line do command="$(echo "${line}" | cut -d= -f2 | sed -e 's/^"//' -e 's/"$//')" if [ "${command}" != "some-script.sh" ] then echo "Command not allowed: ${command}" exit 1 fi done < <(grep "^BUILDKITE_COMMAND=" "${BUILDKITE_ENV_FILE}") ``` You can see from the previous example that `$BUILDKITE_ENV_FILE` is the location of the file containing the environment variables that the control plane passes to a job. You may use this to block jobs from executing if certain environment variables are set. For example, the following `pre-bootstrap` hook blocks a job from executing if the `ENVIRONMENT_VARIABLE_TO_DENY` environment variable is set. ```bash #!/bin/bash set -euo pipefail if grep '^ENVIRONMENT_VARIABLE_TO_DENY=' "$BUILDKITE_ENV_FILE" > /dev/null then echo "Rejecting job because the environment variable ENVIRONMENT_VARIABLE_TO_DENY has been set" exit 1 fi ``` But also remember that some [environment variables may be essential](/docs/pipelines/configure/environment-variables) to the execution of jobs, so adding them to a blocklist in this manner is not advisable. ##### Sign your pipelines You can sign the steps your pipeline runs for extra security. This allows the agent to verify that the steps it runs haven't been tampered with or smuggled from one pipeline to another. For more information, see [Signed pipelines](/docs/agent/self-hosted/security/signed-pipelines). ##### Customize the bootstrap The Buildkite agent comes with a default bootstrap handler, but can be [configured](/docs/agent/self-hosted/configure#bootstrap-script) to run your own instead.
Providing your own bootstrap gives you the highest level of security and control over your agent. You can use it to customize your agent, sanitize command output, and implement your own security logic.

The Buildkite agent is separated into a daemon executable and a bootstrap executable. The daemon is responsible for communicating with the Buildkite API and executing the bootstrap for each assigned job. The bootstrap is responsible for checking out source code, calling hooks, running commands, and uploading build artifacts. The bootstrap is passed environment variables by the daemon process, and has its output streams and exit status captured.

For example, the following custom bootstrap will print out the environment variables passed by the main agent process, print "Hello world", and exit with a failure status:

```bash
#!/bin/bash
set -euo pipefail

env                 # print the environment variables passed by the daemon
echo "Hello world"
exit 1              # report a failed job
```

##### Force clean checkouts

By default, Buildkite will reuse (after cleaning) a previous checkout. This may be unsafe if building commits from untrusted sources (for example, third-party pull requests). To force a clean checkout every time, set `BUILDKITE_CLEAN_CHECKOUT=true` in the environment. The following example shows how to enforce a clean checkout at the step level:

```yaml
steps:
  - label: "Clean Checkout"
    command: echo "clean checkout"
    env:
      BUILDKITE_CLEAN_CHECKOUT: true
```

In the logs for this step, you will find a log group called "Cleaning pipeline checkout."

##### Run the agent behind a proxy

To run the agent behind a proxy, you'll need to export the following proxy environment variables for your process manager:

- `http_proxy`
- `https_proxy`

Both of these variables should be set to the URL for your proxy server. For example, if using systemd, create a directory named `/etc/systemd/system/buildkite-agent.service.d` that contains a `proxy.conf` file.
An example systemd `proxy.conf` file:

```
[Service]
# Proxy Env Vars
Environment=http_proxy=http://username:password@proxyserver:8080/
Environment=https_proxy=http://username:password@proxyserver:8080/
```

After creating this file, reload systemd with `sudo systemctl daemon-reload` and restart the service with `sudo systemctl restart buildkite-agent`.

##### Restrict agent connection by IP address

[Clusters](/docs/pipelines/security/clusters) provide a mechanism to restrict which IP addresses can connect using a given agent token. This protects against the misuse of agent tokens and the hijacking of agent sessions.

To restrict agent connection by IP address, set the [**Allowed IP Addresses** attribute](/docs/pipelines/security/clusters/manage#restrict-an-agent-tokens-access-by-ip-address). This restricts agent registration to those IPs, and any existing agents outside the allowed IP ranges will be forcefully disconnected.

---

### Network requirements

URL: https://buildkite.com/docs/agent/self-hosted/security/network-requirements

#### Network requirements

Self-hosted [Buildkite agents](/docs/agent) only make outbound HTTPS connections. No inbound ports need to be opened. This page lists the hosts and ports your network must allow agents to access.

##### Required hosts

Every self-hosted agent must be able to access the following hosts over HTTPS (port 443):

Host | Purpose
---- | -------
`agent-edge.buildkite.com` | The default [Agent API](/docs/apis/agent-api) endpoint for agents running version 3.122.0 or later. Supports both [streaming](/docs/agent/self-hosted/configure/job-dispatch#streaming-job-dispatch) and polling-based job dispatch, along with agent registration, log uploads, [artifact](/docs/pipelines/configure/artifacts) coordination, [metadata](/docs/pipelines/configure/build-meta-data), [secrets](/docs/pipelines/security/secrets), [OIDC token](/docs/pipelines/security/oidc) requests, [pipeline uploads](/docs/pipelines/configure/dynamic-pipelines), and cache operations.
`agent.buildkite.com` | The default [Agent API](/docs/apis/agent-api) endpoint for agents running versions earlier than 3.122.0. Supports polling-based job dispatch only. Provides the same functionality as `agent-edge.buildkite.com` except for streaming job dispatch.
`buildkiteartifacts.com` | Default artifact storage. When using the built-in artifact storage, the Agent API provides upload and download URLs on this domain.

> 📘
> All agent-to-Buildkite communication uses TLS encryption. The agent connects to its configured endpoint on port 443 using HTTPS. There is no need to open any inbound ports on your firewall or security groups.

For more detail on how the agent communicates with Buildkite, see [Buildkite architectures](/docs/pipelines/architecture).

##### Optional hosts

Depending on your agent configuration, agents may also need to access the following hosts.

###### Customer-managed artifact storage

If you configure a custom [artifact upload destination](/docs/pipelines/configure/artifacts#storage-providers-encryption-and-retention), agents need access to the relevant storage provider instead of, or in addition to, `buildkiteartifacts.com`:

Storage provider | Hosts
---------------- | -----
Amazon S3 | `*.s3.amazonaws.com` (port 443)
Google Cloud Storage | `storage.googleapis.com`, `www.googleapis.com` (port 443)
Azure Blob Storage | `*.blob.core.windows.net` (port 443)
Artifactory | Your Artifactory server's hostname (port 443)

###### Cloud instance metadata

When running on a cloud provider, agents can automatically detect instance metadata to populate [agent tags](/docs/agent/cli/reference/start#tags).
These metadata endpoints are instance-local and do not require internet-routable firewall rules:

Cloud provider | Endpoint | Purpose
-------------- | -------- | -------
AWS (EC2 and ECS) | `169.254.169.254` (port 80, HTTP) | [EC2 instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) and [ECS task metadata](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint.html)
Google Cloud | `metadata.google.internal` (port 80, HTTP) | [GCP instance metadata](https://cloud.google.com/compute/docs/metadata/overview)

##### Hosts your build jobs may need

In addition to the hosts the agent itself connects to, your build scripts and [plugins](/docs/pipelines/integrations/plugins) may require access to other services. These depend on what your pipelines do, but common examples include:

- **Source control**: your Git host, such as `github.com`, `gitlab.com`, or an internal Git server
- **Package registries**: such as `registry.npmjs.org`, `pypi.org`, `registry.yarnpkg.com`, or Docker Hub (`registry-1.docker.io`, `auth.docker.io`, `production.cloudflare.docker.com`)
- **Buildkite Package Registries**: `api.buildkite.com` (port 443) if you use [Buildkite Package Registries](/docs/package-registries) from your build scripts
- **Other external services**: deployment targets, notification endpoints, code analysis tools, or any other services your builds interact with

##### Buildkite platform egress IPs

If your internal services need to accept inbound connections from the Buildkite platform (for example, [webhooks](/docs/apis/webhooks) or commit status updates to a self-hosted source control system), use the [Meta API](/docs/apis/rest-api/meta) to obtain the current set of platform egress IP addresses.
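If you script firewall updates from the Meta API response, the IP list can be processed along these lines. This is a minimal sketch: the `webhook_ips` field name matches the Meta API's documented response shape, but the two CIDR blocks below are illustrative placeholders, not the live list (which you'd fetch with `curl -s https://api.buildkite.com/v2/meta`):

```bash
#!/bin/bash
set -euo pipefail

# Illustrative placeholder for the Meta API response; in practice, fetch it with:
#   curl -s https://api.buildkite.com/v2/meta -o meta.json
cat > meta.json <<'EOF'
{"webhook_ips":["192.0.2.1/32","198.51.100.0/24"]}
EOF

# Extract each CIDR block from the webhook_ips array (no jq required)
# and emit one allow rule per address range.
rules=""
for cidr in $(grep -o '"[0-9][0-9./]*"' meta.json | tr -d '"')
do
  rules+="allow from ${cidr}"$'\n'
done
printf '%s' "${rules}"
```

Because the egress IP list can change over time, re-fetching it on a schedule (rather than hard-coding addresses) keeps firewall rules current.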
---

### Signed pipelines

URL: https://buildkite.com/docs/agent/self-hosted/security/signed-pipelines

#### Signed pipelines

Signed pipelines are a security feature where pipelines are cryptographically signed when uploaded to Buildkite. Agents then verify the signature before running the job. If an agent detects a signature mismatch, it'll refuse to run the job.

Maintaining a strong security boundary is important to Buildkite and informs how we design features. It's also a key reason people choose Buildkite over other CI/CD tools. Signing pipelines improves your security posture by ensuring agents don't run jobs where a malicious actor has modified the instructions. This moves you towards zero-trust CI/CD by further isolating you from Buildkite itself being compromised.

The signature guarantees the origin of jobs by asserting:

- The jobs were uploaded from a trusted source.
- The jobs haven't been modified after upload.

These signatures mean that if a threat actor could modify a job in flight, the agent would refuse to run it due to mismatched signatures.

**🤔 I think I've seen this before...**

This work is inspired by the [buildkite-signed-pipeline](https://github.com/buildkite/buildkite-signed-pipeline) tool, which you could add to your agent instances. It had a similar idea: signing steps before they're uploaded to Buildkite, then verifying them when they're run. However, it had some limitations, including:

- It had to be installed on every agent instance, leading to more configuration.
- It only supported symmetric signatures (using HMAC-SHA256), meaning that every verifier could also sign uploads.
- It couldn't sign [matrix steps](/docs/pipelines/configure/workflows/build-matrix).

This newer version of pipeline signing is built right into the agent and addresses all of these limitations. Being built into the agent, it's also easier to configure and use.
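The symmetric-signature limitation mentioned above can be seen in a tiny sketch. This is an illustration only, using plain HMAC-SHA256 via `openssl` rather than Buildkite's actual signing format: because signing and verifying both use the same shared secret, any agent able to verify a signature could equally forge one, so uploaders and verifiers can't be separated.

```bash
#!/bin/bash
set -euo pipefail

# Illustration only: with HMAC-SHA256, signing and verifying both use the
# same shared secret, so any verifier can also produce valid signatures.
secret="shared-secret"
payload='{"command":"make test"}'

# The "uploader" signs the payload:
sig="$(printf '%s' "${payload}" | openssl dgst -sha256 -hmac "${secret}" -r | cut -d' ' -f1)"

# A "verifier" holding the same secret reproduces the identical signature,
# meaning it could just as easily have signed a malicious payload:
forged="$(printf '%s' "${payload}" | openssl dgst -sha256 -hmac "${secret}" -r | cut -d' ' -f1)"

[ "${sig}" = "${forged}" ] && echo "verifier can forge: yes"
```

Asymmetric schemes such as EdDSA avoid this by keeping the signing (private) key separate from the verification (public) key, which is what the agent's built-in signing uses.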
Many thanks to [SEEK](https://www.seek.com.au/), who we collaborated with on the older version of the tool, and whose prior art has been instrumental in the development of this newer version.

##### Pipeline signatures

Pipeline signatures establish that important aspects of steps haven't been changed since they were uploaded. The following fields are included in the signature for each step:

- **Commands.**
- **Environment variables defined in the pipeline YAML.** Environment variables set by the agent, hooks, or the user's shell are _not_ signed, and can override the environment a step's command is started with.
- **Plugins and plugin configuration.**
- **Matrix configuration.** The matrix configuration is signed as a whole rather than each individual matrix job. This means the signature is the same for each job in the matrix. When signatures are verified for matrix jobs, the agent double-checks that the job it received is a valid construction of the matrix and that the signature matches the matrix configuration.
- **The repository the commands are running in.** This prevents you from copying a signed step from one repository to another.

> 📘 Compatibility with pipeline templates
> [Pipeline templates](/docs/pipelines/governance/templates) are designed to be used across multiple pipelines and, therefore, repositories. Due to the inclusion of repositories in step signatures, signed steps cannot be used with pipeline templates.

##### Enabling signed pipelines on your agents

You'll need to configure your agents and update pipeline definitions to enable signed pipelines. Behind the scenes, signed pipelines use [JSON Web Signing (JWS)](https://datatracker.ietf.org/doc/html/rfc7797) to generate signatures.
There are two options for creating the keys used with JWS:

- Self-managed key pairs
- AWS KMS managed keys

##### Self-managed key creation

You'll need to generate a [JSON Web Key Set (JWKS)](https://datatracker.ietf.org/doc/html/rfc7517) to sign and verify your pipelines with, then configure your agents to use those keys.

###### Step 1: Generate a key pair

Luckily, the agent has you covered! A JWKS generation tool is built into the agent, which you can use to generate a key pair. To use it, you'll need to [install the agent on your machine](/docs/agent/self-hosted/install), and then run:

```bash
buildkite-agent tool keygen --alg <algorithm> --key-id <key-id>
```

Replacing the following:

- `<algorithm>` with the signing algorithm you want to use.
- `<key-id>` with the key ID you want to use.

Note that both the algorithm and key ID are optional: if `--alg` isn't provided, the agent will default to `EdDSA`. If `--key-id` isn't provided, the agent will generate a random one for you.

For example, to generate an [EdDSA](https://en.wikipedia.org/wiki/EdDSA) key pair with a key ID of `my-key-id`, you'd run:

```bash
buildkite-agent tool keygen --alg EdDSA --key-id my-key-id
```

The agent generates a JWKS key pair in your current directory: one private and one public. You can then use these keys to sign and verify your pipelines.

Note that the value of `--alg` must be a valid [JSON Web Signing Algorithm](https://datatracker.ietf.org/doc/html/rfc7518#section-3), and that the agent does not support all JWA signing algorithms. At the time of writing, the agent supports:

- `EdDSA` (the default)
- `PS512`
- `ES512`

For an up-to-date list of supported algorithms, run:

```sh
buildkite-agent tool keygen --help
```

Also note that the `PS512` and `ES512` algorithms are nondeterministic, which means that they will generate different signatures each time they are used.
This can be desirable for dynamically generated pipelines, but may make it difficult to detect drift when the signed result is persisted, for example, when using the [Terraform provider](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/data-sources/signed_pipeline_steps).

**Why doesn't the agent support RSASSA-PKCS1 v1.5 signatures?**

In short, RSASSA-PKCS1 v1.5 signatures are less secure than the newer RSA-PSS signatures. While RSASSA-PKCS1 v1.5 signatures are still relatively secure, we want to encourage our users to use the most secure algorithms possible, so when using RSA keys, we only support RSA-PSS signatures. We also recommend looking into ECDSA and EdDSA signatures, which are more secure than RSA signatures.

###### Algorithm options

When using signed pipelines, we recommend having multiple disjoint pools of agents, each using a different [queue](/docs/agent/queues). One pool should be the _uploaders_ and have access to the private keys. Another pool should be the _runners_ and have access to the public keys. This creates a security boundary between the agents that upload and sign pipelines and the agents that run jobs and verify signatures.

Regarding your specific algorithm choice, any of the supported signing algorithms are fine and will be secure. If you're not sure which one to use, `EdDSA` is proven to be secure, has a modern design, wasn't designed by a Nation State Actor, and produces nice short signatures. It's also the default when running `buildkite-agent tool keygen`.

###### Step 2: Configure the agents

Next, you need to configure your agents to use the keys you generated. On agents that upload pipelines, add the following to the agent's config file:

```ini
signing-jwks-file=<path to private key set>
signing-jwks-key-id=<key id>
verification-jwks-file=<path to public key set>
```

This ensures that whenever those agents upload steps to Buildkite, they'll generate signatures using the private key you generated earlier.
It also ensures that those agents verify the signatures of any steps they run, using the public key.

You can also configure the agent's response to verification failures:

```ini
verification-failure-behavior=<warn or block>
```

This setting determines the Buildkite agent's response when it receives a job without a proper signature, and specifies how strictly the agent should enforce signature verification for incoming jobs. When set to `warn`, the agent will warn about missing or invalid signatures, but will still proceed to execute the job. If not explicitly specified, the default behavior is `block`, which prevents any job without a valid signature from running, ensuring a secure pipeline environment by default.

On instances that verify jobs, add:

```ini
verification-jwks-file=<path to public key set>
```

###### Step 3: Sign all steps

So far, you've configured agents to sign and verify any steps they upload and run. However, you also define steps in a pipeline's settings through the Buildkite dashboard. For example, teams commonly use a single step in the Pipeline Settings to upload a pipeline definition from [a YAML file in the repository](/docs/pipelines/configure/defining-steps#step-defaults-pipeline-dot-yml-file). These steps should also be signed.

> 🚧 Non-YAML steps
> You must use YAML to sign steps configured in the Pipeline Settings page. If you don't use YAML, you'll need to [migrate to YAML steps](/docs/pipelines/tutorials/pipeline-upgrade) before continuing.

To sign steps configured in the Pipeline Settings page, you need to add static signatures to the YAML. To do this, run:

```sh
buildkite-agent tool sign \
  --graphql-token <token> \
  --jwks-file <path to private key set> \
  --jwks-key-id <key id> \
  --organization-slug <organization slug> \
  --pipeline-slug <pipeline slug> \
  --update
```

Replacing the following:

- `<token>` with a Buildkite GraphQL token that has the `write_pipelines` scope.
- `<path to private key set>` with the path to the private key set you generated earlier.
- `<key id>` with the key ID from earlier.
- `<organization slug>` with the slug of the organization the pipeline is in.
- `<pipeline slug>` with the slug of the pipeline you want to sign.
This will download the pipeline definition using the Buildkite GraphQL API, sign all steps, and upload the signed pipeline definition back to Buildkite.

###### Rotating signing keys

Regularly rotating signing and verification keys is good security practice, as it reduces the impact of a compromised key. Because signed pipelines use JWKS as their key format, rotating keys is easy. To rotate your keys:

1. [Generate a new key pair](#self-managed-key-creation-step-1-generate-a-key-pair).
1. Add the new keys to your existing key sets. Be careful not to mix public and private keys.
1. Update the `signing-jwks-key-id` on your signing agents to use the new key ID. The verifying agents will automatically use the public key with the matching key ID, if it's present.

##### AWS KMS managed key setup

AWS Key Management Service (AWS KMS) is a web service that securely protects cryptographic keys. When using this service with signed pipelines, the agent never has access to the private key used to sign pipelines; signing calls go to the KMS API instead.

###### Step 1: Create a KMS key

AWS KMS has a myriad of options when creating keys. For pipeline signing, you must use the following specific settings:

1. The key type must be Asymmetric and have a usage type of `SIGN_VERIFY`.
2. The key spec must be `ECC_NIST_P256`.

If you're using the AWS CLI, the key can be created as follows:

```bash
aws kms create-key --key-spec ECC_NIST_P256 --key-usage SIGN_VERIFY
```

Once created, you can retrieve the key identifier, which will be a UUID, for example `1234abcd-12ab-34cd-56ef-1234567890ab`. Optionally, you can create a key alias (a friendly name for the key) as follows:

```bash
aws kms create-alias \
  --alias-name alias/example-alias \
  --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab
```

###### Step 2: Configure the agents

Next, you need to configure your agents to use the KMS key you created.
On agents that upload pipelines, add the following to the agent's config file:

```ini
signing-aws-kms-key=<key id or alias>
```

This ensures that whenever those agents upload steps to Buildkite, they'll generate signatures using the KMS key you created earlier. It also ensures that those agents verify the signatures of any steps they run, using the corresponding public key.

You can also configure the agent's response to verification failures:

```ini
verification-failure-behavior=<warn or block>
```

This setting determines the Buildkite agent's response when it receives a job without a proper signature, and specifies how strictly the agent should enforce signature verification for incoming jobs. When set to `warn`, the agent will warn about missing or invalid signatures, but will still proceed to execute the job. If not explicitly specified, the default behavior is `block`, which prevents any job without a valid signature from running, ensuring a secure pipeline environment by default.

###### Step 3: Sign all steps

To sign steps configured in the Pipeline Settings page, you need to add static signatures to the YAML. To do this, run:

```sh
buildkite-agent tool sign \
  --graphql-token <token> \
  --signing-aws-kms-key <key id or alias> \
  --organization-slug <organization slug> \
  --pipeline-slug <pipeline slug> \
  --update
```

Replacing the following:

- `<token>` with a Buildkite GraphQL token that has the `write_pipelines` scope.
- `<key id or alias>` with the AWS KMS key ID or alias created earlier.
- `<organization slug>` with the slug of the organization the pipeline is in.
- `<pipeline slug>` with the slug of the pipeline you want to sign.

###### Step 4: Assign IAM permissions to your agents

There are two common roles for agents when using signed pipelines: those that sign and upload pipelines, and those that only verify steps. To follow least-privilege best practice, you should restrict access to the KMS key using IAM to the specific actions listed below.

For agents that sign and verify pipelines, the following IAM actions are required:
- `kms:Sign`
- `kms:Verify`
- `kms:GetPublicKey`

For agents that only verify pipelines, the following IAM actions are required:

- `kms:Verify`
- `kms:GetPublicKey`

---

### Overview

URL: https://buildkite.com/docs/agent/self-hosted/versions-directory

#### Agent versions directory

The following tables list stable 3.x releases of the Buildkite agent in reverse chronological order. Each version links through to its changelog on GitHub. Agent versions with known issues are indicated in these tables.

##### Agent versions 3.120 to 3.129

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 3.110 to 3.119

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 3.100 to 3.109

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 3.90 to 3.99

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 3.80 to 3.89

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 3.70 to 3.79

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 3.60 to 3.69

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 3.50 to 3.59

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 3.40 to 3.49

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 3.30 to 3.39

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 3.20 to 3.29

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 3.10 to 3.19

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 3.0 to 3.9

Release changelog | Date of release | Known issues
----------------- | --------------- | ------------

##### Agent versions 2.x

Buildkite version 2.x agent releases are not listed on this page. However, their installer bundles and changelogs are still available from the [Buildkite agent releases](https://github.com/buildkite/agent/releases) page.

To upgrade from a 3.0-beta or 2.x agent version to a stable 3.x one, see [Upgrading from 3.0-beta and 2.x versions](/docs/agent/self-hosted/versions-directory/upgrading-from-3-dot-0-beta-and-v2).

---

### Upgrading from 3.0-beta and 2.x versions

URL: https://buildkite.com/docs/agent/self-hosted/versions-directory/upgrading-from-3-dot-0-beta-and-v2

#### Upgrading from 3.0-beta and 2.x versions

This page provides guidelines on how to upgrade your Buildkite agents from 3.0-beta and [2.x versions](/docs/agent/self-hosted/versions-directory#agent-versions-2-dot-x) to a [stable 3.x version](/docs/agent/self-hosted/versions-directory).

To start, upgrade your unsupported agents using your operating system package manager, or by re-running the installation script.
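As a quick way to check whether a given agent still needs this upgrade, you can compare its major version against 3. This is a minimal sketch: the `2.6.10` value below is a hypothetical installed version, which in practice you'd parse from the output of `buildkite-agent --version`:

```bash
#!/bin/bash
set -euo pipefail

# Hypothetical installed version; in practice, parse it from the output of:
#   buildkite-agent --version   (e.g. "buildkite-agent version 2.6.10 ...")
installed="2.6.10"

# Strip everything after the first dot to get the major version.
major="${installed%%.*}"

if [ "${major}" -lt 3 ]
then
  needs_upgrade=true
else
  needs_upgrade=false
fi
echo "needs upgrade: ${needs_upgrade}"
```

A check like this can be run across a fleet (for example, via your configuration management tool) to find agents still on 2.x before rolling out the upgrade steps below.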
##### Upgrading from 3.0-beta to a stable 3.0 agent

To upgrade an **Ubuntu / Debian** 3.0 beta agent:

* Edit `/etc/apt/sources.list.d/buildkite-agent.list` and replace the word `unstable` (or `experimental`) with `stable`
* Run `sudo apt-get update && sudo apt-get upgrade -y buildkite-agent`

To upgrade a **Red Hat / CentOS** 3.0 beta agent:

* Edit `/etc/yum.repos.d/buildkite-agent.repo` and replace the word `unstable` (or `experimental`) with `stable`
* Run `sudo yum clean expire-cache && sudo yum update buildkite-agent`

If you didn't install the agent using the above packages, update the agent the same way you originally installed it and you will get the latest stable version.

##### Upgrading from a 2.0 agent

To upgrade, install the new version 3 agent using one of the [standard installation methods](/docs/agent/self-hosted/install). To make installation easier, there are packages for each of the major operating systems. You can test your updated agents in parallel with your existing agents by using agent tags to create a new queue for 3.0 builds.

##### Overview of what has changed in version 3 agents

Added:

* [Plugins](/docs/pipelines/integrations/plugins) for sharing functionality between pipelines and customizing how agents behave
* [Variable interpolation](/docs/agent/cli/reference/pipeline) in `pipeline.yml`
* [Build annotations](/docs/agent/cli/reference/annotate)
* [pre-exit hook](/docs/agent/hooks#job-lifecycle-hooks)

Changed:

* Agent meta-data has been renamed to "tags"
* Much better Windows support, including .BAT hooks support
* Checkout clean no longer ignores files in `.gitignore`
* The bootstrap (run as a sub-process for every job) has moved from a [shell script](https://github.com/buildkite/agent/blob/2-6-stable/templates/bootstrap.sh) to [`buildkite-agent bootstrap`](/docs/agent/cli/reference/bootstrap). This means it's written in Go and cross-platform.
Deprecated:

* Built-in [Docker and Docker Compose support](/docs/pipelines/tutorials/docker-containerized-builds) has been deprecated. The same functionality is available from the dedicated plugins: [docker-compose](https://github.com/buildkite-plugins/docker-compose-buildkite-plugin) and [docker](https://github.com/buildkite-plugins/docker-buildkite-plugin).

###### Bootstrap customizations

If you customized your `bootstrap.sh` file, you will need to move the changes to [hooks](/docs/agent/hooks), or update your `bootstrap.sh` to call `buildkite-agent bootstrap`.

###### Docker and Docker Compose support

In v2 we supported a variety of environment variables like `BUILDKITE_DOCKER_COMPOSE_CONTAINER` and `BUILDKITE_DOCKER`. These are deprecated in favour of the [docker-compose](https://github.com/buildkite-plugins/docker-compose-buildkite-plugin) and [docker](https://github.com/buildkite-plugins/docker-buildkite-plugin) pipeline plugins. You can keep using the old environment variables in v3, but they will be removed in v4.

###### Steps using `BUILDKITE_DOCKER_COMPOSE_CONTAINER`

This is a step that uses the v2 `BUILDKITE_DOCKER_COMPOSE_CONTAINER` environment variable to run the command in a docker-compose container:

```yaml
steps:
  - label: ':hammer: Tests'
    command: 'scripts/tests.sh'
    env:
      BUILDKITE_DOCKER_COMPOSE_CONTAINER: app
```

The same action with the [docker-compose plugin](https://github.com/buildkite-plugins/docker-compose-buildkite-plugin) looks like this:

```yaml
steps:
  - label: ':hammer: Tests'
    command: 'scripts/tests.sh'
    plugins:
      - docker-compose#v1.8.4:
          run: app
```

###### Steps using `BUILDKITE_DOCKER`

This is a step that uses the v2 `BUILDKITE_DOCKER` environment variable to run the command in a Docker container:

```yaml
steps:
  - label: ':hammer: Tests'
    command: 'scripts/tests.sh'
    env:
      BUILDKITE_DOCKER: true
```

There isn't a direct conversion for this at present.
You can either add a docker-compose file and use the [docker-compose plugin](https://github.com/buildkite-plugins/docker-compose-buildkite-plugin), or if you want to run your build in a Docker container without providing a `Dockerfile` or a `docker-compose` file, you can use the [docker plugin](https://github.com/buildkite-plugins/docker-buildkite-plugin):

```yaml
steps:
  - label: ':hammer: Tests'
    command: 'scripts/tests.sh'
    plugins:
      - docker#v1.1.1:
          image: "node:7"
          workdir: /app
```

###### Environment variables in your pipeline.yml

Previously we didn't support environment variable interpolation, such as `${MY_VARIABLE_NAME}` or `$MY_VARIABLE_NAME`. If you have any of these in your `pipeline.yml`, they will now be interpolated. To render the literal text, you will need to escape the dollar signs, for example `$$MY_VARIABLE_NAME`. See [environment variable substitution](/docs/agent/cli/reference/pipeline#environment-variable-substitution) for more details.

###### Checkout clean no longer ignores files in .gitignore

Older agents didn't remove files from your working directory that were ignored by git. The new default values for git clean are `-fxdq`. If you've previously overridden your `git-clean-flags` in your config, it might be a good time to comment them out and use the standard behavior.

---

### Overview

URL: https://buildkite.com/docs/agent/buildkite-hosted

#### Buildkite hosted agents

Buildkite hosted agents provides a fully managed platform on which you can run your pipeline jobs, so that you don't have to manage [Buildkite agents](/docs/agent) in your own self-hosted environment. With hosted agents, Buildkite handles infrastructure management tasks, such as provisioning, scaling, and maintaining the servers that run your agents.

##### Why use Buildkite hosted agents

Buildkite hosted agents provides numerous benefits over similar hosted machine and runner features of other CI/CD providers.
The following cost benefits deliver enhanced value through accelerated build times, reduced operational overhead, and a lower total cost of ownership (TCO).

- **Superior performance**: Buildkite hosted agents uses the latest generation Mac and AMD Zen-based hardware, which delivers up to 3x faster performance compared to equivalently sized machines/runners from other CI/CD providers and cloud platforms, powered by dedicated quality hardware and a proprietary low-latency virtualization layer exclusive to Buildkite.
- **Ephemeral, isolated environments that scale**: Hosted agents are provisioned on demand and destroyed after each job, providing clean, reproducible builds that dynamically scale and operate concurrently to meet high demand.
- **Pricing is calculated per second**: Charges apply only to the precise duration of command or script execution (excluding startup and shutdown periods), with no minimum charges and no rounding to the nearest minute.
- **Caching is included at no additional cost**: There are no supplementary charges for storage or cache usage. [Cache volumes](/docs/agent/buildkite-hosted/cache-volumes) operate on high-speed, local NVMe-attached disks, substantially accelerating caching and disk operations. This results in faster job completion, reduced minute consumption, and lower overall costs.
- **Transparent Git mirroring**: This significantly accelerates git clone operations by caching repositories locally on the agent at startup, which is particularly beneficial for large repositories and monorepos.
- **Transparent remote Docker builders at no additional cost**: Offloading Docker build commands to [dedicated, pre-configured machines](/docs/agent/buildkite-hosted/linux/remote-docker-builders) equipped with Docker layer caching and additional performance optimizations.
This feature operates without any additional configuration, and is available to [Enterprise](https://buildkite.com/pricing/) plan customers only.

- **An internal container registry**: Speed up your pipeline build times by managing your jobs' container images through your [internal container registry](/docs/agent/buildkite-hosted/internal-container-registry), which provides deterministic storage for Open Container Initiative (OCI) images.
- **Consistently rapid queue times**: Jobs are dispatched to hosted agents within a matter of seconds, providing consistently low queue times.

Buildkite hosted agents also provides the following assurances:

- The platform:
  * Runs in a private cloud, which is purpose built and optimized for CI/CD workloads.
  * Is exclusively hosted in US East Coast data centers, operated by a trusted infrastructure provider, strategically selected to provide optimal performance, reliability, and low-latency connectivity to major cloud regions.
- Buildkite manages and runs hosted agents to ensure consistency under load for all customers.

##### How Buildkite hosted agents work

When a pipeline's job is scheduled on a [Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue), this action begins the process of starting the job's execution on a new [ephemeral agent](/docs/pipelines/glossary#ephemeral-agent). The hosted queue's ephemeral agent begins its lifecycle with the initiation of a virtualized environment.

- For [Linux hosted agents](/docs/agent/buildkite-hosted/linux), this environment includes a base image for containerization, which is either the hosted queue's [configured agent image](/docs/agent/buildkite-hosted/linux#agent-images), or one that you've configured to use in your pipeline, to which custom layers are added, including the Buildkite agent and Buildkite-specific configurations.
- For [macOS hosted agents](/docs/agent/buildkite-hosted/macos), this environment is a virtual machine, based on the macOS and Xcode version configured in your queue settings, running on dedicated Mac hardware.

As part of this initiation process, any configured [cache volumes](/docs/agent/buildkite-hosted/cache-volumes) are attached, and then the entire virtualized environment is started. This process can take a few seconds to complete (appearing as job wait time), and varies depending on the size and recency of the cache volumes and the base image being used. Once started, the Buildkite agent running in the virtualized environment acquires the job and runs it through to completion.

Once the job is complete, regardless of its exit status, the virtualized environment and all of its associated data, including data generated during job execution, are removed and destroyed. Any cache volume data, however, is persisted.

> 📘 Cluster isolation
> Every Buildkite hosted queue and its agents are configured within a [Buildkite cluster](/docs/pipelines/security/clusters), which benefits from hypervisor-level isolation, ensuring robust separation between each instance. Each cluster also has its own [cache volumes](/docs/agent/buildkite-hosted/cache-volumes), [remote Docker builders](/docs/agent/buildkite-hosted/linux/remote-docker-builders) and [internal container registry](/docs/agent/buildkite-hosted/internal-container-registry), as well as [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets), which are not available to any other cluster.

The ephemeral nature of Buildkite hosted agents' virtualized environments also offers the following benefits:

- Each Buildkite hosted agent begins with a clean state, with no residual data from previous builds that could introduce vulnerabilities or cross-contamination between projects. Job dependencies are also pulled cleanly each time.
- Short-lived hosted agents reduce the window of opportunity for attackers to compromise the build environment, and any data generated or used during job execution, such as secrets or credentials, is destroyed after the job completes or fails.

##### Getting started with Buildkite hosted agents

Buildkite offers both [Linux](/docs/agent/buildkite-hosted/linux) and [macOS](/docs/agent/buildkite-hosted/macos) hosted agents, whose respective pages explain how to start setting them up.

Buildkite hosted agent services support both public and private repositories. Learn more about setting up code access in [Hosted agent code access](/docs/agent/buildkite-hosted/code-access).

If you need to migrate your existing Buildkite pipelines from using Buildkite agents in a [self-hosted architecture](/docs/pipelines/architecture#self-hosted-hybrid-architecture) to those using Buildkite hosted agents, see [Hosted agent pipeline migration](/docs/agent/buildkite-hosted/pipeline-migration) for details.

When a Buildkite hosted agent machine is running (during a pipeline build), you can access the machine through a terminal. Learn more about this feature in [Hosted agents terminal access](/docs/agent/buildkite-hosted/terminal-access).

Lastly, learn more about how to secure your network when using Buildkite hosted agents in [Network security](/docs/agent/buildkite-hosted/network-security).

##### Buildkite agent version updates

As part of the hosted agents service, Buildkite aims to keep the [Buildkite agents](/docs/agent) in your hosted agents up to date with the latest version. If you find that your hosted agent queues are not on the latest version of the Buildkite agent, contact Buildkite support at support@buildkite.com and we'd be happy to get them updated for you.

---

### Overview

URL: https://buildkite.com/docs/agent/buildkite-hosted/linux

#### Linux hosted agents

Buildkite's Linux hosted agents are:

- [Buildkite agents](/docs/agent) hosted by Buildkite that run in a Linux environment.
- Configured as part of a _Buildkite hosted queue_, where the Buildkite hosted agent's machine type is Linux, has a particular [size](#sizes) to efficiently manage jobs with varying requirements, and comes pre-installed with software in the form of [agent images](#agent-images), which can be [customized with other software](/docs/agent/buildkite-hosted/linux/custom-base-images).

Learn more about:

- Best practices for configuring queues in [How should I structure my queues](/docs/pipelines/security/clusters#clusters-and-queues-best-practices-how-should-i-structure-my-queues) of the [Clusters overview](/docs/pipelines/security/clusters), as well as [Manage queues](/docs/agent/queues/managing).
- How to configure a Linux hosted agent in [Create a Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue).
- The [concurrency](#concurrency) and [security](#security) of Linux hosted agents.

##### Sizes

Buildkite offers a selection of Linux instance types (each based on a different combination of size and architecture, known as an _instance shape_), allowing you to tailor your hosted agent resources to the demands of your jobs. The architectures supported include AMD64 (x86_64) and ARM64 (AArch64).

| Instance shape | Size | Architecture | vCPU | Memory | Disk space |
| --- | --- | --- | --- | --- | --- |
| `LINUX_AMD64_2X4` | Small | AMD64 | 2 | 4 GB | 47 GB |
| `LINUX_AMD64_4X16` | Medium | AMD64 | 4 | 16 GB | 95 GB |
| `LINUX_AMD64_8X32` | Large | AMD64 | 8 | 32 GB | 158 GB |
| `LINUX_AMD64_16X64` | Extra Large | AMD64 | 16 | 64 GB | 284 GB |
| `LINUX_ARM64_2X4` | Small | ARM64 | 2 | 4 GB | 47 GB |
| `LINUX_ARM64_4X16` | Medium | ARM64 | 4 | 16 GB | 95 GB |
| `LINUX_ARM64_8X32` | Large | ARM64 | 8 | 32 GB | 158 GB |
| `LINUX_ARM64_16X64` | Extra Large | ARM64 | 16 | 64 GB | 284 GB |

Note the following about Linux hosted agent instances:

- The [Personal plan](https://buildkite.com/pricing/) only provides access to small-sized instance shapes.
- Extra large instances are available on request.
- To accommodate different workloads, instances can run for up to 8 hours. If you need extra large instances, or longer-running hosted agents (over 8 hours), please contact Support at support@buildkite.com.

##### Concurrency

Linux hosted agents can operate concurrently when running your Buildkite pipeline jobs. This concurrency is measured by the number of hosted agent machines that are allocated for your jobs at any given time. A machine is counted from the moment it starts booting to acquire a job, since that's when the machine starts consuming capacity. As a result, concurrency may not always match the exact number of running jobs, particularly with short-lived workloads.

The number of Linux hosted agents (of a [Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue)) that can process your pipeline jobs concurrently is calculated as your Buildkite plan's _maximum combined vCPU_ value divided by your [instance shape's](#sizes) _vCPU_ value. See the [Buildkite pricing](https://buildkite.com/pricing/) page for details on the **Linux Concurrency** that applies to your plan.

For example, if your Buildkite plan provides you with a maximum combined vCPU value of up to 48, and you've configured a Buildkite hosted queue with the `LINUX_AMD64_4X16` (Medium AMD64) [instance shape](#sizes), whose vCPU value is 4, then the number of concurrent hosted agents that can run jobs on this queue is 12 (that is, 48 / 4 = 12). When concurrency limits are exceeded, additional jobs are queued until sufficient capacity becomes available.

##### Security

Customer security is paramount to Buildkite: source code, build artifacts, and deployment processes represent some of the most valuable and sensitive assets.
The shift from a [self-hosted](/docs/pipelines/architecture#self-hosted-hybrid-architecture) to a [Buildkite hosted](/docs/pipelines/architecture#buildkite-hosted-architecture) architecture for Buildkite agents introduces the potential for new attack vectors and shared responsibility models, and hence, additional security considerations. The security model for Buildkite hosted agents has the following characteristics to address these considerations and mitigate attack risks.

- **Infrastructure and isolation security**: Buildkite employs a multi-tenant architecture, where each job runs in a completely isolated virtualized environment. Once a job is complete, regardless of its exit status, the virtualized environment is destroyed, along with all its data (except for [cache volumes](/docs/agent/buildkite-hosted/cache-volumes), which persist across jobs). This ephemeral approach ensures that customer workloads remain isolated from each other, even though the underlying hardware is shared across multiple customers.
- **Physical and operational security**: The Buildkite hosted agent fleet operates from multiple [Tier 3+ data centers](https://en.wikipedia.org/wiki/Data_centre_tiers) with restricted physical access controls and regular security monitoring. The platform maintains SOC 2 compliance through regular audits of both hardware and software security controls.

##### Agent images

Buildkite provides a Linux agent image pre-configured with common tools and utilities to help you get started quickly. This image also provides the tools required for running jobs on hosted agents.

The image is based on Ubuntu 22.04 and includes the following tools:

- docker
- docker-compose
- docker-buildx
- git-lfs
- node
- aws-cli

You can customize the image that your hosted agents use by creating a [custom agent image](/docs/agent/buildkite-hosted/linux/custom-base-images).
This approach is recommended for production workloads, as a custom agent image gives you full control over installed packages, security updates, and dependencies.

---

### Custom agent images

URL: https://buildkite.com/docs/agent/buildkite-hosted/linux/custom-agent-images

#### Custom agent images

Custom agent images let you control which packages, tools, and security patches run in your hosted agent environment. A custom agent image is recommended for production workloads.

Creating a custom agent image requires you to define a Dockerfile that installs the tools and utilities you require. You can [create a custom agent image](#create-an-agent-image) using the [Buildkite interface](#create-an-agent-image-using-the-buildkite-interface), [agent hooks](#create-an-agent-image-using-agent-hooks), or the [internal container registry](/docs/pipelines/hosted-agents/internal-container-registry).

##### Requirements within the image

Buildkite Linux hosted agents have the `buildkite-agent` and `docker` binaries layered dynamically into the job running environment. This means that any base image being used does not need to install or maintain these binaries or their configurations.

Several tools are required for the `buildkite-agent` to successfully acquire and run a job. These are:

- `git`
- `ca-certificates`
- `bash`

There is no requirement regarding which Linux flavor the image is based on. The default Buildkite Linux hosted agents image is based on Ubuntu, with other Linux flavors such as Alpine or CentOS being perfectly acceptable.

> 📘
> Buildkite Linux hosted agents do not support changing the `USER` within the `Dockerfile`, nor setting the `GID` and `UID` environment variables.

##### Create an agent image

Creating an agent image requires you to define a Dockerfile that installs the tools and utilities you require. This Dockerfile should be based on the [Buildkite hosted agent base image](https://hub.docker.com/r/buildkite/hosted-agent-base/tags).
An example Dockerfile that installs the `awscli` and `kubectl`:

```dockerfile
# Set the environment variable to avoid interactive prompts during awscli installation
ENV DEBIAN_FRONTEND=noninteractive

# Install AWS CLI
RUN apt-get update && apt-get install -y awscli

# Install the latest stable kubectl release
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl \
    && chmod +x kubectl \
    && mv kubectl /usr/local/bin/
```

###### Using the Buildkite interface

To create an agent image using the Buildkite interface:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster in which to create the new agent image.

    **Note:** Before continuing, ensure you have created a Buildkite hosted queue (based on Linux architecture) within this cluster. Learn more about how to do this in [Create a Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue).

1. Select **Agent Images** to open the **Agent Images** page.
1. Select **New Image** to open the **New Agent Image** dialog.
1. Enter the **Name** for your agent image.
1. In the **Dockerfile** field, enter the contents of your Dockerfile.

    **Notes:**
    * The top of the Dockerfile contains the required `FROM` instruction, which cannot be changed. This instruction obtains the required Buildkite hosted agent base image.
    * Ensure any modifications you make to the existing Dockerfile content are correct before creating the agent image, since mistakes cannot be edited or corrected once the agent image is created.

1. Select **Create Agent Image** to create your new agent image.
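Once created, the image can be used as a queue's default or referenced from pipeline YAML via the `agents.image` attribute (see [Use an agent image](#use-an-agent-image)). A quick sketch to verify the tools installed by the example Dockerfile above, assuming a hosted Linux queue named `hosted-linux` and an agent image named `AWS Tools Image` (both hypothetical names):

```yaml
agents:
  queue: "hosted-linux"      # hypothetical Buildkite hosted queue name
  image: "AWS Tools Image"   # name entered in the New Agent Image dialog

steps:
  - label: "Verify custom image tools"
    command: |
      aws --version
      kubectl version --client
```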
###### Using agent hooks

You can [create a custom agent image](#create-an-agent-image) and modify its Dockerfile to embed the following types of [job lifecycle hooks](/docs/agent/hooks#job-lifecycle-hooks) as [agent hooks](/docs/agent/hooks#hook-locations-agent-hooks): `environment`, `pre-checkout`, `checkout`, `post-checkout`, `pre-command`, `command`, `post-command`, `pre-artifact`, `post-artifact`, and `pre-exit`.

Be aware that the `pre-bootstrap` job lifecycle hook and [agent lifecycle hooks](/docs/agent/hooks#agent-lifecycle-hooks) operate outside of a job's execution itself, and are therefore not supported within a Buildkite hosted agent context.

To embed hooks in your agent image's Dockerfile:

1. Follow the [Create an agent image](#create-an-agent-image) instructions to begin creating your hosted agent within its Linux architecture-based Buildkite hosted queue. As part of this process, modify the agent image's Dockerfile to:
    1. Add the `BUILDKITE_ADDITIONAL_HOOKS_PATHS` environment variable, whose value is the path where the hooks will be located.
    1. Add any specific hooks to the path defined by this variable.

    An example excerpt from a `Dockerfile` that would include your own hooks:

    ```dockerfile
    ENV BUILDKITE_ADDITIONAL_HOOKS_PATHS=/custom/hooks
    COPY ./hooks/*.sh /custom/hooks/
    ```

    This results in an agent image with the directory `/custom/hooks` that includes any `.sh` files located in `./hooks/` relative to where the image is created.

1. Follow the [Use an agent image](#use-an-agent-image) instructions to apply this new agent image to your Buildkite hosted queue.

> 📘
> Buildkite hosted agents run with the `BUILDKITE_HOOKS_PATH` value of `/buildkite/agent/hooks`, which is the global agent hooks location. This path is fixed and is read-only when a job starts.
> Therefore, avoid setting the value of `BUILDKITE_ADDITIONAL_HOOKS_PATHS` to this path in your agent image's Dockerfile, as any files you copy across to this location will be overwritten when the job commences.

##### Use an agent image

You can use an agent image in the following ways:

- [Set an image as the default for a queue](#use-an-agent-image-set-the-default-image-for-a-queue) using the Buildkite interface.
- [Specify a custom image for a queue](#use-an-agent-image-specify-a-custom-image-for-a-queue) using the Buildkite interface or API.
- [Specify the image in your pipeline YAML](#use-an-agent-image-specify-an-image-in-your-pipeline-yaml), which allows different steps to use different images within the same queue.

###### Set the default image for a queue

Once you have [created an agent image](#create-an-agent-image), you can set it as the default for a [Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue) based on Linux architecture. Any agents in the queue will use this image in new jobs, unless overridden in the pipeline YAML.

To set a Buildkite hosted queue to use a custom Linux agent image:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster with the Linux architecture-based Buildkite hosted queue whose agent image requires configuring.
1. On the **Queues** page, select the Buildkite hosted queue based on Linux architecture.
1. Select the **Base image** tab to open its settings.
1. In the **Agent image** dropdown, select your agent image.

    **Note:** If you see an **Image URL** field, see [Specify a custom image for a queue](#use-an-agent-image-specify-a-custom-image-for-a-queue) for details on how to use this feature.

1. Select **Save settings** to save this update.

###### Specify a custom image for a queue

> 📘 Private preview feature
> The custom image URL feature is currently in _private preview_.
To enable this feature for your Buildkite organization, contact support@buildkite.com.

You can specify the URL of a custom image for a [Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue). When configured, this URL overrides the [agent image selected from the **Agent image** dropdown](#use-an-agent-image-set-the-default-image-for-a-queue). The image must be publicly available or have been pushed to the [internal container registry](/docs/pipelines/hosted-agents/internal-container-registry).

To set a custom image URL through the Buildkite interface:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster with the Buildkite hosted queue.
1. On the **Queues** page, select the Buildkite hosted queue.
1. Select the **Base image** tab to open its settings.
1. In the **Image URL** field, enter the custom image URL. The format for this URL follows the standard container image reference syntax of `registry.url/image-name:tag`. For example:
    * **Docker Hub:** `docker.io/node:latest`.
    * **Standard public container image reference:** Any publicly accessible container image URL, such as `my-registry.example.com/my-org/my-image:tag`.
    * **Internal container registry:** See [internal container registry](/docs/pipelines/hosted-agents/internal-container-registry) for more information.
1. Select **Save settings** to save this update.

You can also set a custom image URL through the Buildkite API or Terraform:

- **REST API:** Use the `agentImageRef` parameter in the `hostedAgents` object when [updating](/docs/apis/rest-api/clusters/queues#update-a-queue) a queue. **Note:** You must always specify the `instanceShape` parameter when using `agentImageRef`. If you don't wish to change the `instanceShape` value, specify its current value when submitting your call to set the `agentImageRef` value.
- **GraphQL API:** Use the `agentImageRef` field in the `hostedAgents` input when calling the [`clusterQueueUpdate` mutation](/docs/apis/graphql/cookbooks/hosted-agents#set-a-custom-image-url-for-a-buildkite-hosted-queue).
- **Terraform:** Use the `agent_image_ref` attribute in the `hosted_agents.linux` block of the [`buildkite_cluster_queue` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/cluster_queue).

###### Specify an image in your pipeline YAML

You can specify an agent image directly in your pipeline YAML using the `image` attribute under `agents`. The image name must match the name of an [agent image you have created](#create-an-agent-image) in the cluster.

To set a default image for all steps in a pipeline, add the `image` attribute at the root level:

```yaml
agents:
  queue: "hosted-linux"
  image: "DevOps Agent Image"

steps:
  - label: "Build"
    command: "make build"
```

You can also override the image for individual steps, allowing different steps to use different images within the same queue:

```yaml
agents:
  queue: "hosted-linux"
  image: "DevOps Agent Image"

steps:
  # Uses "DevOps Agent Image" from root-level agents
  - label: "Build"
    command: "make build"

  # Overrides root-level image
  - label: "Run integration tests"
    command: "make integration-test"
    agents:
      image: "Default Agent Image"

  # Uses "DevOps Agent Image" from root-level agents
  - label: "Deploy"
    command: "make deploy"
```

##### Issues with starting a job

There are several scenarios where a job may not start successfully, and various reasons why this might happen. The following is a non-exhaustive list of common reasons why jobs may not start:

- The specified base image configured on the Buildkite hosted queue cannot be found. This could be due to the full URL or a specific tag for that image not being available. In particular, note that images are bound to a single cluster, and can't be used by agents in other clusters.
It's also possible that this could be a timing issue, where the tag being requested is not available _yet_, and waiting may be sufficient.
- When the image is a publicly available one, especially when using a registry other than [Docker Hub](https://hub.docker.com/), Buildkite may be rate-limited when attempting to retrieve it. Using the [internal container registry](/docs/agent/buildkite-hosted/internal-container-registry) to mirror the image is highly recommended to avoid this issue.
- The [required packages](#requirements-within-the-image) have not been installed within the image. This is especially the case for `ca-certificates`, as without this package the `buildkite-agent` is unable to communicate with the Buildkite platform.

##### Delete an agent image

To delete a [previously created agent image](#create-an-agent-image), it must not be [used by any Buildkite hosted queues](#use-an-agent-image).

To delete an agent image:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster in which to delete the agent image.
1. Select **Agent Images** to open the **Agent Images** page.
1. Select the agent image to delete > **Delete**.

    **Note:** If you are prompted that the agent image is currently in use, follow the link(s) to each Buildkite hosted queue on the **Delete Image** message to change the queue's **Agent image** (from the **Base Image** tab) to another agent image.

1. On the **Delete Image** message, select **Delete Image**. The agent image is deleted.

---

### Remote Docker builders

URL: https://buildkite.com/docs/agent/buildkite-hosted/linux/remote-docker-builders

#### Remote Docker builders

_Remote Docker builders_ are dedicated machines available to [Buildkite hosted agents](/docs/agent/buildkite-hosted), which are specifically designed and configured to handle the [building of Docker images](https://docs.docker.com/build/) with the `docker build` command.
This feature substantially speeds up the build times of pipelines that need to build Docker images.

> 📘 Default Enterprise plan feature
> Remote Docker builders is a _default feature_ available to all new and existing Buildkite customers on the [Enterprise](https://buildkite.com/pricing) plan. This means that for Enterprise plan customers, this feature is used automatically whenever native `docker build` commands are encountered within Buildkite pipelines. However, you can disable this feature, so that Docker images are built on the Buildkite hosted agents themselves. Learn more about how to do this in [Building Docker images on the Buildkite hosted agent](#building-docker-images-on-the-buildkite-hosted-agent).

If your Buildkite organization doesn't have access to this feature, then [additional volumes](#additional-volumes) are created in your Buildkite clusters.

##### Remote Docker builders overview

When using the remote Docker builders feature, any `docker build` commands within your pipeline are directed to and run on an external [builder service](https://docs.docker.com/build/builders/) (the remote Docker builder), rather than being run on the Buildkite hosted agent instance itself. While the agent orchestrates and streams the build configuration to this remote builder service, the builder service itself builds the images and returns the completed images and metadata to the job that made the `docker build` call on your agent. These completed images are also stored in your [container cache volumes](/docs/agent/buildkite-hosted/cache-volumes#container-cache-volumes), if you've enabled this feature. Learn more about this in [Step-by-step remote Docker builder process](#step-by-step-remote-docker-builder-process).
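Because the offloading is transparent, no pipeline changes are needed to use remote Docker builders: a standard `docker build` step is routed to the remote builder automatically. A minimal sketch, where the queue name, image name, and tag are illustrative:

```yaml
steps:
  - label: "\:docker\: Build image"
    command: "docker build -t my-app:latest ."   # transparently offloaded to the remote builder
    agents:
      queue: "hosted-linux"   # hypothetical Buildkite hosted queue name
```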
The remote builder service also maintains a [cache](https://docs.docker.com/build/cache/) of its built image layers (stored in the builder service's local file system, and in your [container cache volumes](/docs/agent/buildkite-hosted/cache-volumes#container-cache-volumes)). Images already stored in this local file system usually don't need to be re-built upon a `docker build` call, and any images in your container cache volumes can be pulled to jobs requesting them, which in turn speeds up your overall pipeline builds, since your Buildkite hosted agents running these pipelines are free to build the rest of your pipeline and conduct other work.

When using remote Docker builders, your first few pipeline builds will typically require more time to complete. However, once the required layers and their images have been built, any subsequent pipeline builds complete much more rapidly.

Learn more about how remote Docker builders improve the speed and performance of your pipeline builds in [Benefits of using remote Docker builders](#benefits-of-using-remote-docker-builders).

##### Step-by-step remote Docker builder process

The following steps outline this remote Docker builder process in more detail:

1. A Buildkite hosted agent encounters a `docker build` command in one of its pipeline jobs, and then the agent generates a [Buildx](https://docs.docker.com/build/concepts/overview/#buildx) configuration to target the remote [builder](https://docs.docker.com/build/builders/) service, which uses [BuildKit](https://docs.docker.com/build/concepts/overview/#buildkit). Learn more about Buildx and BuildKit in [Docker Build overview](https://docs.docker.com/build/concepts/overview/).
1. The remote builder service executes stages in parallel where possible, reusing unchanged image layers in your container cache volumes and rebuilding only the new layers that are needed.
1. The build outputs from `docker build` are delivered based on the flags used in the command; for example, loaded back to the agent with no additional flags, pushed to a registry with `--push`, or exported to an OCI archive.

##### Benefits of using remote Docker builders

This section provides more details about the benefits provided by [remote Docker builders](#remote-docker-builders-overview).

###### Faster builds

Remote Docker builders run on remote dedicated machines, which have been optimized for [BuildKit](https://docs.docker.com/build/concepts/overview/#buildkit). Therefore, CPU-bound stages are completed much more rapidly. Your [container cache volumes](/docs/agent/buildkite-hosted/cache-volumes#container-cache-volumes) are both shared and persistent, ensuring your job will start and run as quickly as possible. Incremental builds also reliably skip unchanged image layers, as they're kept on the dedicated remote Docker builder's local file system, often yielding 2 to 40 times build speed increases. Using remote Docker builders with container cache volumes alongside [Git mirror volumes](/docs/agent/buildkite-hosted/cache-volumes#git-mirror-volumes) can provide drastic reductions in job runtimes.

###### Smaller agents with a simple setup

Using remote Docker builders means that you can maintain smaller Buildkite hosted agents with a simpler setup, since Docker images are built through the remote Docker builder.

###### Improved cache hit rates and reproducibility

The remote Docker builders are dedicated machines with their own local file system cache that temporarily stores their image layers for 30 minutes from each build. Therefore, during periods when frequent image builds occur, the availability of relevant stored image layers on this file system improves the reuse of these layers, leading to greater environmental consistency.
##### Building Docker images on the Buildkite hosted agent

Since [remote Docker builders](#remote-docker-builders-overview) is a [default Enterprise plan feature](#default-enterprise-plan-feature), when using the `docker build` command in your Buildkite pipelines, you can configure this command to build Docker images on the Buildkite hosted agent itself, by either [disabling BuildKit](#building-docker-images-on-the-buildkite-hosted-agent-disable-buildkit) or [using Buildx and its default local builder](#building-docker-images-on-the-buildkite-hosted-agent-use-buildx-and-its-default-local-builder).

###### Disable BuildKit

Disabling BuildKit, which can be done by setting the `DOCKER_BUILDKIT` environment variable value to `0` _before_ running the `docker build` command, results in the Docker image being built on the Buildkite hosted agent. For example:

```yaml
steps:
  - label: "\:docker\: Build Docker image locally"
    command: |
      export DOCKER_BUILDKIT=0
      docker build -t my-image:latest .
```

Or:

```yaml
steps:
  - label: "\:docker\: Build Docker image locally"
    env:
      DOCKER_BUILDKIT: "0"
    command: |
      docker build -t my-image:latest .
```

The `my-image:latest` image will be built on the Buildkite hosted agent.

###### Use Buildx and its default local builder

Using Buildx and its default local builder (with the [`docker buildx use` command](https://docs.docker.com/reference/cli/docker/buildx/use/)) and then the [`docker buildx build` command](https://docs.docker.com/reference/cli/docker/buildx/build/) also results in the Docker image being built on the Buildkite hosted agent. For example:

```yaml
steps:
  - label: "\:docker\: Build Docker image locally"
    command: |
      docker buildx use default
      docker buildx build -t my-image:latest .
```

The `my-image:latest` image will also be built on the Buildkite hosted agent.
##### Additional volumes

If your Buildkite organization doesn't have access to the [remote Docker builders](#remote-docker-builders-overview) feature, then new [volumes](/docs/agent/buildkite-hosted/cache-volumes) will appear in your [cluster](/docs/pipelines/security/clusters)'s volumes list: one for each unique Git repository used by a pipeline.

The naming convention for these volumes is based on your cloud-based Git service's account and repository name, and begins with "buildkite-local-builder-". For example, **buildkite-local-builder-my-account-my-repository**. You can view all of your current cluster's volumes through its **Cached Storage > Volumes** page.

---

### Overview

URL: https://buildkite.com/docs/agent/buildkite-hosted/macos

#### macOS hosted agents

Buildkite's macOS hosted agents are:

- [Buildkite agents](/docs/agent) hosted by Buildkite that run in a macOS environment.
- Configured as part of a _Buildkite hosted queue_, where the Buildkite hosted agent's machine type is macOS, has a particular [size](#sizes) to efficiently manage jobs with varying requirements, and comes pre-installed with [software](#macos-instance-software-support).

> 📘 Pro and Enterprise plan feature
> Buildkite macOS hosted agents are only available to Buildkite customers on [Pro or Enterprise](https://buildkite.com/pricing) plans.

Learn more about:

- Best practices for configuring queues in [How should I structure my queues](/docs/pipelines/security/clusters#clusters-and-queues-best-practices-how-should-i-structure-my-queues) of the [Clusters overview](/docs/pipelines/security/clusters), as well as [Manage queues](/docs/agent/queues/managing).
- How to configure a macOS hosted agent in [Create a Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue).
- How to use macOS hosted agents to [build iOS apps](/docs/agent/buildkite-hosted/macos/getting-started-with-ios).
- The [concurrency](#concurrency) and [security](#security) of macOS hosted agents.
##### Sizes

Buildkite offers a selection of macOS instance types (each based on a different size combination of virtual CPU power and memory capacity, known as an _instance shape_), allowing you to tailor your hosted agents' resources to the demands of your jobs.

| Instance shape | Size | vCPU | Memory | Disk space |
| --- | --- | --- | --- | --- |
| `MACOS_ARM64_M4_6X28` | Medium | 6 | 28 GB | 182 GB |
| `MACOS_ARM64_M4_12X56` | Large | 12 | 56 GB | 294 GB |

**Note:** Shapes `MACOS_M2_4X7`, `MACOS_M2_6X14`, `MACOS_M2_12X28`, and `MACOS_M4_12X56` were deprecated and removed on July 1, 2025.

Also note the following about macOS hosted agent instances.

- Only [Apple silicon](https://en.wikipedia.org/wiki/Apple_silicon) architectures are supported.
- To accommodate different workloads, instances are capable of running up to 4 hours. If you have specific needs for longer running hosted agents (over 4 hours), please contact Support at support@buildkite.com.

##### Concurrency

macOS hosted agents can operate concurrently when running your Buildkite pipeline jobs. This concurrency is measured by the number of hosted agent machines that are allocated for your jobs at any given time. A machine is counted from the moment it starts booting to acquire a job, since that's when the machine starts consuming capacity. As a result, concurrency may not always match the exact number of running jobs, particularly with short-lived workloads.

The number of macOS hosted agents (of a [Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue)) that can process your pipeline jobs concurrently is calculated by your Buildkite plan's _maximum combined vCPU_ value divided by your [instance shape's](#sizes) _vCPU_ value. See the [Buildkite pricing](https://buildkite.com/pricing/) page for details on the **Mac M4 Concurrency** that applies to your plan.
For example, if your Buildkite plan provides you with a maximum combined vCPU value of up to 24, and you've configured a Buildkite hosted queue with the `MACOS_ARM64_M4_6X28` (Medium) [instance shape](#sizes), whose vCPU value is 6, then the number of concurrent hosted agents that can run jobs on this queue is 4 (that is, 24 / 6 = 4). When concurrency limits are exceeded, additional jobs will be queued until sufficient capacity becomes available.

##### macOS instance software support

All standard macOS [Tahoe](#macos-tahoe), [Sequoia](#macos-sequoia), and [Sonoma](#macos-sonoma) version instances have their own respective Xcode and runtime software versions available by default (listed below). Each macOS version also has its own set of [Homebrew packages](#homebrew-packages) with specific versions optimized for that operating system. If you have specific requirements for software that is not listed here, please contact Support at support@buildkite.com.

While you currently cannot provide custom base images for macOS hosted agents (as is possible using [agent images](/docs/agent/buildkite-hosted/linux#agent-images) for Linux hosted agents), you do have significant control over these virtual machines during job execution—including the ability to install software using Homebrew, use [git mirroring](/docs/agent/buildkite-hosted/cache-volumes#git-mirror-volumes) for performance, and leverage persistent [cache volumes](/docs/agent/buildkite-hosted/cache-volumes).

Updated Xcode versions will be available one week after Apple offers them for download. This includes Beta, Release Candidate (RC), and official release versions.
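As an illustration of that in-job control, a command step can install additional tooling with Homebrew before building. The step label and package name below are hypothetical examples, not part of the documented base image:

```yaml
steps:
  - label: "Install extra tooling"
    command: |
      # The base image cannot be customized, but the running VM can be
      # modified freely during the job, for example via Homebrew.
      brew install swiftgen   # hypothetical extra package for this job
      xcodebuild -version     # confirm the Xcode version on this instance
```

Changes made this way last only for the lifetime of the job's virtual machine, which is destroyed when the job completes.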
##### macOS Tahoe - 26.3.1 ###### Xcode - 26.3 - 26.2 - 26.1.1 - 26.1 - 26.0.1 - 26.0 - 16.4 ###### Runtimes ###### iOS - 26.2 - 26.1 - 26.0 - 18.6 - 17.5 ###### tvOS - 26.2 - 26.1 - 26.0 - 17.5 - 16.4 ###### visionOS - 26.2 - 26.1 - 26.0 - 2.5 - 1.2 ###### watchOS - 26.2 - 26.1 - 26.0 - 11.5 - 10.5 ##### macOS Sequoia - 15.7.4 ###### Xcode - 26.4-Beta2 - 26.4-Beta - 26.3-RC2 - 26.3 - 26.2 - 26.1.1 - 26.1 - 26.0.1 - 26.0 - 16.4 - 16.3 - 16.2 - 16.1 - 16.0 - 15.4 ###### Runtimes ###### iOS - 26.4 beta 2 - 26.2 - 26.1 - 26.0 - 18.6 - 18.5 - 18.4 - 18.2 - 18.1 - 18.0 - 17.5 - 16.4 - 15.5 ###### tvOS - 26.4 beta 2 - 26.2 - 26.1 - 26.0 - 18.5 - 18.4 - 18.2 - 18.1 - 18.0 - 17.5 - 16.4 ###### visionOS - 26.4 beta 2 - 26.2 - 26.1 - 26.0 - 2.5 - 2.4 - 2.2 - 2.1 - 2.0 - 1.2 - 1.1 - 1.0 ###### watchOS - 26.4 beta 2 - 26.2 - 26.1 - 26.0 - 11.5 - 11.4 - 11.2 - 11.1 - 11.0 - 10.5 - 9.4 ##### macOS Sonoma - 14.8.3 ###### Xcode - 16.3 - 16.2 - 16.1 - 16.0 - 15.4 - 15.3 - 15.2 - 15.1 - 14.3.1 ###### Runtimes ###### iOS - 18.4 - 18.2 - 18.1 - 18.0 - 17.5 - 17.4 - 17.2 - 16.4 - 16.2 - 15.5 ###### tvOS - 18.4 - 18.2 - 18.1 - 18.0 - 17.5 - 17.4 - 17.2 - 16.4 ###### visionOS - 2.4 - 2.2 - 2.1 - 2.0 - 1.2 - 1.1 - 1.0 ###### watchOS - 11.4 - 11.2 - 11.1 - 11.0 - 10.5 - 10.4 - 10.2 - 9.4 ##### Homebrew packages The versions for each of these packages varies by macOS version. See [Identifying Homebrew package versions](#homebrew-packages-identifying-homebrew-package-versions) for instructions on how to identify each package's version. 
- ant
- applesimutils
- aria2
- awscli
- azcopy
- azure-cli
- bazelisk
- bicep
- carthage
- cmake
- cocoapods
- curl
- deno
- docker
- docker-buildx
- fastlane
- gcc@13
- gh
- git
- git-lfs
- gmp
- gnu-tar
- gnupg
- go
- gradle
- httpd
- jq
- kotlin
- libpq
- llvm
- llvm@15
- maven
- mint
- nginx
- node
- openssl@3
- p7zip
- packer
- perl
- php
- pkgconf
- postgresql@14
- python@3.14
- r
- rbenv
- rbenv-bundler
- ruby
- ruby@3.4
- rust
- rustup
- selenium-server
- swiftformat
- swiftlint
- tmux
- unxip
- wget
- wireguard-go
- wireguard-tools
- xcbeautify
- xcodes
- yq
- zstd

###### Identifying Homebrew package versions

To find the [Homebrew package](#homebrew-packages) version used by your macOS hosted agent:

1. Select **Agents** in the global navigation > your [cluster](/docs/pipelines/security/clusters/manage) containing the [macOS Buildkite hosted agent queue](/docs/agent/queues/managing) > your macOS hosted agent.
1. On your macOS hosted agent's page, select **Base image** and scroll down to **Specifications** > **Homebrew packages** to view these packages, along with their respective versions.

###### Managing Homebrew package versions

Homebrew package versions are periodically updated when macOS hosted agent images are refreshed. If your builds require explicit, repeatable versions, pin the versions you need as part of your pipeline rather than relying on the image defaults.

###### Inspect currently installed packages

To view installed packages and their versions on the agent:

```bash
brew list --versions
brew info <formula>
brew info <formula>@<version>  # when the formula supports versioned installs
```

For example:

```bash
brew list --versions ruby ruby@3.4 rbenv
brew info ruby@3.4
ruby --version
bundler --version
```

###### Use a version manager for language runtimes

For languages that have version managers such as Ruby, pin the language version in your jobs using a version manager rather than relying on the image's installed runtime.
Example using Ruby with `rbenv`, including a cache for installed Ruby versions:

```yaml
steps:
  - label: "Pin Ruby with rbenv"
    command: |
      eval "$(rbenv init -)"
      rbenv install 3.4.7 --skip-existing
      rbenv global 3.4.7
      ruby -v
      gem install bundler
      bundler -v
    cache:
      paths:
        - "~/.rbenv/versions"
      size: 20g
      name: "rbenv-versions"
```

###### Pin dependencies using a Brewfile

Commit a `Brewfile` to your repository and install from it during the build to make Homebrew dependencies explicit and reduce unexpected version changes.

Example `Brewfile`:

```ruby
brew "wget"
brew "jq"
brew "rbenv"
brew "ruby@3.4"
```

Pipeline step:

```yaml
steps:
  - label: "Install Homebrew dependencies"
    command: |
      brew update
      brew bundle --file Brewfile
      brew list --versions
```

When a versioned formula is available (for example, `ruby@3.4` or `python@3.12`), use it to pin to a specific major version.

###### Pin an exact formula version

Homebrew does not support installing an arbitrary historical version of every formula. Options for stricter version control include:

- Using a versioned formula when available (for example, `ruby@3.4` or `python@3.12`).
- Using a language or tool version manager (recommended for runtimes).
- Downloading an exact version from upstream release binaries with a pinned URL and checksum.

##### Security

Customer security is paramount to Buildkite, since source code, build artifacts, and deployment processes represent some of your most valuable and sensitive assets. The shift from a [self-hosted](/docs/pipelines/architecture#self-hosted-hybrid-architecture) to a [Buildkite hosted](/docs/pipelines/architecture#buildkite-hosted-architecture) architecture for Buildkite agents introduces the potential for new attack vectors and shared responsibility models, and hence additional security considerations. The security model for Buildkite hosted agents has the following characteristics to address these security considerations and to mitigate attack risks.
- **Infrastructure and isolation security**: Buildkite employs a multi-tenant architecture, where each job runs in a completely isolated virtualized environment. Once a job is complete, regardless of its exit status, the virtualized environment is destroyed, along with all its data (except for [cache volumes](/docs/agent/buildkite-hosted/cache-volumes) that persist across jobs). This ephemeral approach ensures that customer workloads remain isolated from each other, even though the underlying hardware is shared across multiple customers.
- **Physical and operational security**: The Buildkite hosted agent fleet operates from multiple [Tier 3+ data centers](https://en.wikipedia.org/wiki/Data_centre_tiers) with restricted physical access controls and regular security monitoring. The platform maintains SOC 2 compliance through regular audits of both hardware and software security controls.

Note that for macOS hosted agents, virtualization is achieved through Apple's Virtualization framework on Apple silicon, providing lightweight but secure virtual machine isolation.

Learn more about [How Buildkite hosted agents work](/docs/agent/buildkite-hosted#how-buildkite-hosted-agents-work).

---

### Getting started with iOS

URL: https://buildkite.com/docs/agent/buildkite-hosted/macos/getting-started-with-ios

#### Getting started with iOS

This tutorial helps you understand how to set up Buildkite macOS hosted agents to run a Buildkite pipeline that builds a basic iOS app for deployment.

##### Before you start

To complete this tutorial, you'll need to have done the following:

- Run through the [Getting started with Pipelines](/docs/pipelines/getting-started) tutorial, to familiarize yourself with the basics of Buildkite Pipelines.
- Make your own copy or fork of the [FlappyKite](https://github.com/buildkite/FlappyKite) repository within your own GitHub account.
##### Set up your hosted agent You can use [macOS hosted agents](/docs/agent/buildkite-hosted/macos) to build iOS apps, which you can get up and running by following the procedure in this section. > 📘 Already running an agent > If you already have a Buildkite hosted queue for macOS hosted agents, skip to the [next step on creating a pipeline](#create-a-pipeline). You can create the first [Buildkite hosted agent](/docs/agent/buildkite-hosted) for [macOS](/docs/agent/buildkite-hosted/macos) within a Buildkite organization for a two-week free trial, after which a usage cost (based on the agent's capacity) is charged per minute. To create your macOS hosted agent: 1. Follow the [Create a Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue) > [Using the Buildkite interface](/docs/agent/queues/managing#create-a-buildkite-hosted-queue-using-the-buildkite-interface) instructions to begin creating your hosted agent within its own queue. As part of this process: * Give this queue an intuitive **key** and **description**, for example, **macos** and **Buildkite macOS hosted queue**, respectively. * In the **Select your agent infrastructure** section, select **Hosted**. * Select **macOS** as the **Machine type** and **Medium** for the **Capacity**. 1. Make your pipelines use your new macOS hosted agent by default, by ensuring its queue is the _default queue_. This should be indicated by **(default)** after the queue's key on the cluster's **Queues** page. If this is not the case and another queue is marked **(default)**: 1. On the cluster's **Queues** page, select the queue with the hosted agent you just created. 1. On the queue's **Overview** page, select the **Settings** tab to open this page. 1. In the **Queue Management** section, select **Set as Default Queue**. Your Buildkite macOS hosted agent, as the new default queue, is now ready to use. 
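If you prefer not to make the hosted queue the default, individual steps can target it explicitly instead. A minimal sketch, assuming the queue key **macos** suggested above (the build command here is a hypothetical example):

```yaml
steps:
  - label: "Build iOS app"
    command: "bundle exec fastlane build"   # hypothetical build command
    agents:
      queue: "macos"   # the key of the macOS hosted queue created above
```

Explicit `agents` targeting like this is useful when a cluster contains several queues and only some steps need macOS machines.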
##### Create a pipeline

Next, you'll create a new pipeline to build the example [FlappyKite Swift application](https://github.com/buildkite/FlappyKite) (app). This simple example of a mobile app starts with an initial blank screen, and a plus (**+**) button at its top. Each time you tap this button, a new timestamp is generated successively down the screen.

The source code for this app contains the Buildkite pipeline in its `.buildkite` folder. This pipeline:

- Runs two iOS emulators (one each for the iPhone 16 and 16 Pro models) to test the app, which in turn, takes screenshots of the app after the **+** button is tapped a few times as part of a UI test.
- Leverages [fastlane](https://fastlane.tools/) to automate deployments and releases. Learn more about fastlane from the [fastlane documentation](https://docs.fastlane.tools/).

To create the new Buildkite pipeline for this app:

1. [Add a new pipeline](https://buildkite.com/new) in your Buildkite organization, select your GitHub account from the **Any account** dropdown, and specify [your copy or fork of the 'FlappyKite' repository](#before-you-start) for the **Git Repository** value.
1. If necessary, provide a **Name** for your new pipeline.
1. Select the **Cluster** in which you [created the hosted agent for macOS](#set-up-your-hosted-agent).
1. If your Buildkite organization already has the [teams feature enabled](/docs/platform/team-management/permissions#manage-teams-and-permissions), choose the **Team** who will have access to this pipeline.
1. Leave all other fields with their pre-filled default values, and select **Create Pipeline**. This associates the example repository with your new pipeline, and adds a step to upload the full pipeline definition from the repository.
1. On the next page showing your pipeline name, select **New Build**.
In the resulting dialog, create a build using the pre-filled details.

1. In the **Message** field, enter a short description for the build. For example, **My first build**.
1. Select **Create Build**.
1. After a few minutes, and when the pipeline has completed its build, expand the **screenshots** job.
1. Select the **Artifacts** tab to reveal the two screenshots taken (one from each iOS emulator) after the UI tests 'tap' the **+** button three times.
1. Select each screenshot to view the results, such as the app's main screen as run by the pipeline in an iPhone 16 Pro emulator.

##### Next steps

That's it! You've successfully configured a Buildkite hosted macOS agent, built an iOS app, and checked its functionality using emulators run by the build. 🎉

Learn more about how to deploy apps like FlappyKite to the iOS App Store, which you can integrate into your pipeline builds, from the following resources:

- The [fastlane documentation on iOS App Store deployment](https://docs.fastlane.tools/getting-started/ios/appstore-deployment/), as well as [fastlane's Code Signing Guide](https://docs.fastlane.tools/codesigning/getting-started/), and Buildkite's own [fastlane troubleshooting guide](/docs/agent/buildkite-hosted/macos/troubleshooting-fastlane).
- The [Submit your iOS apps to the App Store](https://developer.apple.com/ios/submit/) page of the Apple Developer site.

---

### Troubleshooting fastlane

URL: https://buildkite.com/docs/agent/buildkite-hosted/macos/troubleshooting-fastlane

#### Troubleshooting fastlane

This guide is for troubleshooting some common [fastlane](https://fastlane.tools/) issues in iOS development, specifically when [using Buildkite Pipelines to build iOS apps](/docs/agent/buildkite-hosted/macos/getting-started-with-ios).

##### Essential debugging steps

When fastlane fails, start with these troubleshooting steps:

1. Enable verbose logging for detailed error information:

    ```bash
    fastlane [lane] --verbose
    ```

1.
Upload fastlane logs as build artifacts for analysis:

    * Configure the [build artifacts](/docs/pipelines/configure/artifacts) in your pipeline to upload your fastlane or xcodebuild logs.
    * When examining the verbose logs, you will often find the actual errors near the parts where fastlane reports its simplified error messages. For code signing errors specifically, look for messages containing "codesign", "security", or "provisioning profile".
    * For code signing issues, check `$HOME/Library/Logs/gym/*`. Learn more about fastlane's code signing errors in fastlane's documentation on [Debugging codesigning issues](https://docs.fastlane.tools/codesigning/troubleshooting/).

1. Verify your environment with these diagnostic commands:

    ```bash
    # Check code signing certificates
    security find-identity -v -p codesigning

    # Verify keychain configuration
    security list-keychains

    # List provisioning profiles
    ls -la ~/Library/MobileDevice/Provisioning\ Profiles/
    ```

##### Errors and resolutions for fastlane

This section covers some of the fastlane errors you may encounter when using Buildkite Pipelines to build iOS apps, and ways to troubleshoot those errors.

###### CocoaPods sandbox error

**Error message:**

```
The sandbox is not in sync with the Podfile.lock. Run 'pod install' or update your CocoaPods installation.
```

The sandbox is the `Pods` directory that contains your project's installed dependencies (pods). This error occurs when the installed dependencies don't match the pods and versions specified in the `Podfile.lock` file. It is best practice _not_ to commit the `Pods` directory to your repository, but only commit the `Podfile` and `Podfile.lock` files, and rebuild the dependencies during CI builds.

**Resolution:**

To resolve the error, run a standard Pod installation command:

```ruby
lane :build do
  # This will run pod install
  cocoapods()

  # Rest of your lane...
end ``` If this doesn't resolve the issue, try rebuilding the entire `Pods` directory: ```ruby lane :build do # This will delete and rebuild your entire Pods directory cocoapods(clean_install: true) # Rest of your lane... end ``` If both of these solutions still don't resolve the issue, ensure a consistent environment: - Run `bundle install` before calling fastlane to ensure all Ruby gems are installed based on the `Gemfile.lock`, since CocoaPods is also a Ruby gem. - Execute fastlane using `bundle exec fastlane` to use the versions of gems specified in the `Gemfile.lock`. ###### Ruby gem dependency error **Error message:** ``` bundler: failed to load command: fastlane (/opt/homebrew/lib/ruby/gems/3.4.0/bin/fastlane) /opt/homebrew/lib/ruby/gems/3.4.0/gems/fastlane-2.187.0/fastlane/lib/fastlane/cli_tools_distributor.rb:125:in 'Fastlane::CLIToolsDistributor.take_off': uninitialized constant FastlaneCore::UpdateChecker (NameError) [⠋] 🚀 /opt/homebrew/lib/ruby/gems/3.4.0/gems/httpclient-2.8.3/lib/httpclient/auth.rb:11: warning: mutex_m was loaded from the standard library, but is not part of the default gems starting from Ruby 3.4.0. You can add mutex_m to your Gemfile or gemspec to silence this warning. /opt/homebrew/lib/ruby/gems/3.4.0/gems/json-2.2.0/lib/json/generic_object.rb:2: warning: ostruct was loaded from the standard library, but will no longer be part of the default gems starting from Ruby 3.5.0. You can add ostruct to your Gemfile or gemspec to silence this warning. [⠙] 🚀 /opt/homebrew/lib/ruby/gems/3.4.0/gems/highline-2.0.3/lib/highline.rb:17: warning: abbrev was loaded from the standard library, but is not part of the default gems starting from Ruby 3.4.0. You can add abbrev to your Gemfile or gemspec to silence this warning. ``` **Resolution:** Buildkite agents hosted on macOS have Ruby 3.4+ installed via Homebrew. In Ruby 3.4+, the gems `mutex_m` and `abbrev` are no longer the default gems. 
In Ruby 3.5+, `ostruct` will no longer be a default gem, causing fastlane to fail. To fix this discrepancy, you need to add the following gems to the `Gemfile`: ```ruby gem 'mutex_m' gem 'ostruct' gem 'abbrev' ``` ###### Code signing failure **Error message:** ``` The following build commands failed: CodeSign /Users/agent/buildkite/builds/... Exit status: 65 ``` This error occurs during the code signing process. Code signing requires several components to be set up correctly: - Certificate and private keys: * Certificate issued by Apple to verify the developer's identity. * Private key available in the keychain. * Both must be properly imported into a keychain. - Provisioning profile: * A `.mobileprovision` file installed in `~/Library/MobileDevice/Provisioning Profiles/` * Must match the app's bundle identifier. * Must include the certificate being used to sign. * Must contain the app's entitlements (for example, push notification support). * Must not be expired. - Keychain access: * Keychain needs to be unlocked during the build. * Keychain should not be the default `login.keychain-db`. - Xcode build settings: * **Signing identity**: The certificate from the keychain. * **Provisioning profile**: Valid `.mobileprovision` file that matches the app bundle ID. * **Matching team ID**: Apple Developer Team ID must match between certificate and profile. 
Example Fastfile build configuration:

```ruby
build_app(
  scheme: "AppName",
  workspace: "AppName.xcworkspace",

  # Code signing configuration
  export_method: "app-store",
  export_options: {
    provisioningProfiles: {
      "com.company.appname" => "AppName Distribution Profile"
    },
    teamID: "ABCD12345E"
  },
  codesigning_identity: "iPhone Distribution: Company Name (ABCD12345E)"
)
```

##### Using fastlane match

The fastlane platform offers the [match](https://docs.fastlane.tools/actions/match/) tool, which handles tasks ranging from creating and storing certificates and profiles, to setting up code signing on a new machine, to handling multiple teams' keys and profiles through Git.

If you're using fastlane match, most code signing is automated:

```ruby
lane :build do
  # Match handles certificates and profiles automatically
  match(type: "appstore")

  build_app(
    scheme: "AppName",
    workspace: "AppName.xcworkspace"
  )
end
```

If you're experiencing issues with fastlane match:

- Verify your `Matchfile` configuration.
- Check match repository access permissions.
- Review match output logs for specific errors.

---

### Code access

URL: https://buildkite.com/docs/agent/buildkite-hosted/code-access

#### Buildkite hosted agents code access

Buildkite hosted agents can access private repositories in GitHub natively, by authorizing Buildkite to access these GitHub repositories. To access private repositories from another provider, the [Git SSH Checkout](https://buildkite.com/resources/plugins/buildkite-plugins/git-ssh-checkout-buildkite-plugin/) plugin is available to provide this capability.

To learn more about changes that may need to be completed at an individual pipeline level, see [Pipeline migration](/docs/agent/buildkite-hosted/pipeline-migration).

##### GitHub private repositories

To use a private GitHub repository with Buildkite hosted agents, you need to authorize Buildkite to access your repository.
This process can only be performed by [Buildkite organization administrators](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions).

1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page.
1. In the **Integrations** section, select **Repository Providers**.
1. Select the **GitHub** option.
1. Follow the prompts to authorize the services on your GitHub account. You can restrict access to specific repositories during setup.

###### GitHub access token caching

Buildkite hosted agents provide a feature for temporarily caching access tokens issued by GitHub whenever Buildkite requests one as part of interacting with a private repository. This interaction is established as part of configuring the Buildkite platform as a [GitHub App](https://docs.github.com/en/apps/overview) in your GitHub project or organization.

Buildkite caches these GitHub access tokens for 50 minutes, during which they remain encrypted on the Buildkite platform. This feature allows your hosted agents to re-use these GitHub access tokens in subsequent builds, which avoids hitting your GitHub rate limit.

There's no need to configure this access token caching feature, as it's provided by default as part of [Buildkite hosted agents](/docs/agent/buildkite-hosted).

##### Public repositories

Buildkite does not require any special permissions to access public repositories.

##### Private repositories with other providers

Using Buildkite hosted agents with a private repository on a provider other than GitHub has the following two requirements:

1. Add an SSH key as a secret to the Buildkite hosted agent cluster.
1. Add the [Git SSH Checkout](https://buildkite.com/resources/plugins/buildkite-plugins/git-ssh-checkout-buildkite-plugin/) plugin to the initial pipeline steps, and any further steps within the uploaded pipeline.
###### Add the SSH key secret

Navigate to **Agents** from the top menu, and open the **Cluster** for Buildkite hosted agents. In the left-hand navigation, select the **Secrets** option, then select the **New Secret** button to open a modal for capturing the new secret. This secret should contain the full private key (including the header and footer) that will be used to access the repository. If multiple distinct keys are to be used throughout the cluster, name them appropriately so that each can be used at the correct time.

###### Add a new pipeline

With the secret now available, you can create a new pipeline that uses it to access the Git repository.

Create a new pipeline following the **Create a new pipeline without provider integration** link on the **New pipeline** page. Complete the form with the basic details about the new pipeline, including the Git URL. At this time, the **Steps** can also be updated to include the plugin usage. For example, assuming a secret named `GIT_SSH_CHECKOUT_PLUGIN_SSH_KEY` exists, you can set the **Steps** value accordingly:

```yaml
steps:
  - label: "\:pipeline\: Upload"
    command: "buildkite-agent pipeline upload"
    plugins:
      - git-ssh-checkout#v0.4.1:
```

This base step content uses the plugin with its default values to complete the Git checkout.

Once created, a screen is presented about setting up webhooks. If the Git provider being used supports the GitHub format of webhook communication, the details shown can be used to complete the integration. If not, you can use the **Skip Webhook Setup** button to skip this step, which means that builds will require manual triggering.
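If you stored the key under a different secret name, the plugin can be pointed at it explicitly. A minimal sketch, assuming the plugin's `ssh-secret-key-name` option and a hypothetical secret named `DEPLOY_REPO_SSH_KEY` (check the plugin's README for the exact option name and its default):

```yaml
steps:
  - label: "\:pipeline\: Upload"
    command: "buildkite-agent pipeline upload"
    plugins:
      - git-ssh-checkout#v0.4.1:
          # Assumed option name; without it, the plugin looks for the
          # default secret GIT_SSH_CHECKOUT_PLUGIN_SSH_KEY.
          ssh-secret-key-name: "DEPLOY_REPO_SSH_KEY"
```

Naming secrets per repository this way makes it practical to keep several deploy keys in one cluster.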
At the completion of the pipeline creation process, a build can now be triggered that will use the SSH key from the secret to clone the Git repository.

---

### Pipeline migration

URL: https://buildkite.com/docs/agent/buildkite-hosted/pipeline-migration

#### Hosted agents pipeline migration

To migrate an existing pipeline to use a hosted agents queue, you must first ensure:

- Your pipeline is in the same cluster as all the hosted agent queues you wish to target.
- Each step in the pipeline targets the required hosted agent queue.
- Source control settings have been updated to allow code access.

An additional process is required for private repositories; see below for the relevant instructions.

##### Private repository

To set your pipeline to use the **GitHub** service:

1. Ensure you have followed the instructions in [GitHub private repositories](/docs/agent/buildkite-hosted/code-access#github-private-repositories) (on the [Hosted agents code access](/docs/agent/buildkite-hosted/code-access) page) for your pipeline's GitHub repository.
1. Navigate to your pipeline settings.
1. Select **GitHub** from the left menu.
1. Remove the existing repository, or select the **choose another repository or URL** link.
1. Select the GitHub account.
1. Select the repository.
1. Select **Save Repository**.

##### All repositories

When accessing any repository (public or private) from a Buildkite hosted agent, you must also ensure the repository is checked out using HTTPS.

1. Navigate to your pipeline settings.
1. Select **GitHub** from the left menu.
1. Change the **Checkout using** setting to **HTTPS**.

---

### Terminal access

URL: https://buildkite.com/docs/agent/buildkite-hosted/terminal-access

#### Hosted agents terminal access

The Buildkite hosted agents feature provides you with _terminal/console access_ to jobs running on hosted agents. This feature is useful in allowing you to:

- Understand what components are installed, as you set up your pipeline.
- Test the behavior of different scripts (because they may not be well-documented).
- Debug issues that are not reproducible in your local environment.

This can be useful when migrating your pipelines across to [queues](/docs/agent/queues/managing) on Buildkite hosted agents.

##### Use terminal access on hosted agents

Assuming that [terminal access is active across your Buildkite organization](#deactivate-and-reactivate-terminal-access-on-hosted-agents), you can access this terminal access feature from a currently building pipeline, when the job of the relevant step is being built.

The terminal access feature is available to users who have any of the following:

- Build permissions on the pipeline that created the job.
- [Maintainer permissions on the cluster](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) containing this pipeline.
- Buildkite organization administrator permissions.

As a pipeline is being built, expand the relevant step and, as its job is being built, select its **Open Terminal** button. A new browser window will open with a terminal you can use to execute commands to investigate your hosted agent's environment, test script behavior, and debug other issues.

To extend the terminal session time, it is recommended that you include a `sleep` [command](/docs/pipelines/configure/step-types/command-step) within your job steps. This can help maintain an active terminal connection and prevent the session from timing out too quickly, allowing you to debug your job or investigate the environment the job is running in. In the example below, the job will pause for 10 minutes before continuing. Adjust the sleep duration according to your specific needs.

```yml
steps:
  - label: "Extend Terminal Session"
    command: |
      echo "Starting job..."
      sleep 600  # Sleep for 10 minutes
      echo "Job complete."
```

##### Deactivate and reactivate terminal access on hosted agents

By default, the terminal access feature for Buildkite hosted agents is active.
If this feature is not active, you can reactivate it for all hosted agents across all clusters within your Buildkite organization. Reactivating or deactivating the terminal access feature requires Buildkite organization administrator permissions.

To deactivate or reactivate the hosted agent terminal access feature:

1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page.
1. Select **Pipelines** > **Settings** to access your organization's [**Pipeline Settings**](https://buildkite.com/organizations/~/pipeline-settings) page.
1. Scroll down to the **Hosted Agents Terminal Access** section, and then:
    * To _deactivate_ this feature, select the **Disable Terminal Access** button, followed by **Disable Hosted Agents Terminal Access** in the confirmation message.
    * To _reactivate_ this feature, select the **Enable Terminal Access** button, followed by **Enable Hosted Agents Terminal Access** in the confirmation message.

Terminal access will now be either removed or made available to all Buildkite hosted agents across all clusters within your Buildkite organization.

When this feature is active, be aware that users require either:

- Build permissions on relevant pipelines to use this feature on these pipelines' jobs.
- Cluster maintainer permissions on the cluster the pipeline belongs to, or Buildkite organization administrator permissions.

---

### Cache volumes

URL: https://buildkite.com/docs/agent/buildkite-hosted/cache-volumes

#### Cache volumes

_Cache volumes_ (also known as _volumes_) are external volumes attached to Buildkite hosted agent instances, and are scoped to specific [Buildkite clusters](/docs/pipelines/security/clusters). These volumes are attached on a best-effort basis depending on their locality, expiration, and current usage, and therefore should not be relied upon as durable data storage.
Volumes are useful if your pipeline builds on Buildkite hosted agents have jobs that make use of build dependencies, Docker images (which can be stored in [container cache volumes](#container-cache-volumes)), or Git mirrors (which can be stored in [Git mirror volumes](#git-mirror-volumes)). Managing build dependencies, Docker images, and Git mirrors in volumes can greatly reduce the duration of your overall pipeline builds.

> 📘 Pro and Enterprise plan feature
> The cache volumes feature is only available to Buildkite customers on [Pro or Enterprise](https://buildkite.com/pricing) plans. If you don't have access to this feature, please contact support@buildkite.com to get it activated.

By default, volumes:

- Are disabled, although you can enable them by providing a list of paths containing files and data to temporarily store in these volumes at the pipeline or step level.
- Are scoped to a pipeline and are shared between all steps in the pipeline.

Volumes act as regular disks, and have the following properties on Linux:

- They use NVMe storage, delivering high performance.
- They are formatted as a regular Linux filesystem (for example, ext4), and therefore support any Linux use case.

Volumes on macOS are a little different, with [sparse bundle disk images](https://en.wikipedia.org/wiki/Sparse_image#Sparse_bundle_disk_images) being used instead of the bind mount volumes used on Linux. However, macOS volumes are managed in the same way as Linux volumes.

> 📘 Volume retention
> Volumes are retained for up to 14 days maximum from their last use. Note that 14 days is not a guaranteed retention duration, and volumes may be removed before this period ends.
> Design your workflows to handle volume misses, as volumes are designed for temporary data storage.
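Because a volume hit is never guaranteed, build steps should still work when a cached path arrives empty. The following is an illustrative sketch of such a check (the `cache_state` helper and `node_modules` path are examples, not part of the Buildkite agent):

```shell
#!/bin/sh
# Illustrative helper (not part of the Buildkite agent): report whether a
# cached path arrived warm (populated) or cold (missing or empty).
cache_state() {
  dir="$1"
  if [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "warm"
  else
    echo "cold"
  fi
}

echo "node_modules cache is $(cache_state node_modules)"
# The install step (for example, `yarn install`) should then run regardless;
# with a warm volume it completes quickly, and a miss simply pays the full
# install cost once, warming the volume for subsequent builds.
```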
##### Volume configuration

Volume paths can be [defined in your `pipeline.yml`](/docs/pipelines/configure/defining-steps) file using the `cache` key, either at the root level of your pipeline YAML or as an [attribute on a step](/docs/pipelines/configure/step-types). Defining paths for the `cache` key in your pipeline YAML, or as an attribute on a step, implicitly creates a volume for the pipeline.

When volume paths are defined, the volume is mounted under `/cache/bkcache` in the agent instance. The agent links sub-directories of the volume into the paths specified in the configuration. For example, defining `cache: "node_modules"` in your `pipeline.yml` file will link `./node_modules` to `/cache/bkcache/node_modules` in your agent instance.

Volumes can be created by specifying a name for the volume, which allows you to use multiple volumes in a single pipeline. Note that it is not possible to share a volume across multiple pipelines.

When requesting a volume, you can specify a size. The volume provided will have a minimum available storage equal to the specified size. In the case of a volume hit (most of the time), the actual volume size is the last used volume size plus the specified size.

Defining a top-level volume configuration (using the `cache` key at the root level of your pipeline YAML) sets the default volume for all steps in the pipeline. Any volume defined within a step is merged with the top-level volume configuration, with the step-level volume size taking precedence when the same volume name is specified at both levels. Paths from both levels will be available when using the same volume name.

###### Example

```yaml
cache:
  paths:
    - "node_modules"
  size: "100g"

steps:
  - command: "yarn run build"
    cache: ".build"

  - command: "yarn run test"
    cache:
      - ".build"

  - command: "rspec"
    cache:
      paths:
        - "vendor/bundle"
      size: 20g
      name: "bundle-volume"
```

###### Required attributes

| `paths` | A list of paths to store in the volume.
Paths are relative to the working directory of the step. Absolute paths can also be provided in the `cache` paths configuration; these are resolved from the root of the instance.

_Example:_ `- ".volume"` `- "/tmp/volume"`

Be aware that if you do not need to include other [optional attributes](#volume-configuration-optional-attributes) and you only need to define a single path for your volume, you can omit this `paths` attribute and simply add your path to the end of the `cache` attribute or key. _Example:_ `cache: ".volume"`

> 📘
> On [macOS hosted agents](/docs/agent/buildkite-hosted/macos), the instance is a full macOS snapshot, including the standard file system structure. Volume paths cannot be specified on reserved paths, such as `/tmp` and `/private`. However, sub-paths such as `/tmp/volume` are acceptable.

###### Optional attributes

| `name` | A name for the volume, which allows you to use multiple volumes in a single pipeline. If no `name` is specified, the value of this attribute defaults to the pipeline slug. _Example:_ `"node-modules-volume"`

| `size` | The size of the volume, specified as `Ng`, where `N` is the size in gigabytes. The default size is 20 gigabytes, which is also the minimum volume size that can be requested. _Example:_ `"20g"`

##### Lifecycle

At any point in time, multiple versions of a volume may be used by different jobs. The first request creates the first version of the volume, which is used as the parent of subsequent _forks_ until a new parent version is committed. A _fork_ in this context is a "moment", or a readable/writable "snapshot", of the volume at a point in time. When requesting a volume, a fork of the previous volume version is attached to the agent instance. This is the case for all volumes, except for the first request, which starts empty, with no volumes attached. Each job gets its own private copy of the volume, as it existed at the time of the last committed volume version.
Version commits follow a "last write" model: whenever a job terminates successfully (that is, exits with exit code `0`), the volumes attached to that job have a new parent committed, namely the final flushed volume of the exiting agent instance. Whenever a job fails, the volume versions attached to the agent instance are abandoned.

###### Non-deterministic nature

Volumes, by their very nature, only provide _non-deterministic_ access to their data. This means that when you issue a command in a Buildkite pipeline to retrieve data or an image from a volume (for example, a previously built Docker image in the [container cache volume](#container-cache-volumes) with a `docker pull` command), the command may instead retrieve the data or image from a different source, such as the [remote Docker builder's](/docs/agent/buildkite-hosted/linux/remote-docker-builders) [local storage/file system](/docs/agent/buildkite-hosted/linux/remote-docker-builders#benefits-of-using-remote-docker-builders-improved-cache-hit-rates-and-reproducibility), which could be very fast, or Docker Hub, which could be very slow by comparison due to bandwidth limitations.

This behavior results from a volume's data availability, which depends on the following factors:

- How often the volume is used.
- How often the data on the volume is changed.

If a volume is used frequently by pipelines, and the volume's data (for example, Docker images) remains relatively static, then the availability of the volume and its data (that is, its volume hit rate) to commands in your Buildkite pipeline, such as `docker pull`, is likely to be higher, resulting in a greater chance that the required data is sourced from the volume. If, however, the volume is used less frequently and its data is relatively dynamic, then the volume hit rate is likely to be lower, meaning that the data will be sourced from other sources, such as external repositories.
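In practice, this means commands that read from a volume should not assume a hit. A hedged sketch, where `my-org/app-base` is a hypothetical image name: the `docker pull` is fast on a volume hit, and the `||` fallback rebuilds the image when the pull has to fall through to a slower remote source or fails entirely:

```yaml
steps:
  - label: ":docker: Build with cache fallback"
    command: |
      # Fast when served from the container cache volume; otherwise the pull
      # falls through to the remote registry, or the image is rebuilt locally.
      docker pull my-org/app-base:latest || docker build -t my-org/app-base:latest .
```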
> 📘
> If you need _deterministic_ storage for [Open Container Initiative (OCI)](https://opencontainers.org/) images, such as Docker images, you can use your [internal container registry](/docs/agent/buildkite-hosted/internal-container-registry) instead of a cache volume.

##### Container cache volumes

Container cache volumes are a type of volume used to cache Docker images between builds.

> 📘
> This feature is only available to [Linux hosted agents](/docs/agent/buildkite-hosted/linux).

###### Enabling container cache volumes

To enable the container cache volumes feature for Buildkite hosted agents on your cluster:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the Buildkite cluster in which to enable the container cache volumes feature.
1. Select **Cache Storage**, then select the **Settings** tab.
1. Select **Enable container caching**, then select **Save cache settings** to enable container caching for the selected hosted cluster.

Once enabled, container cache volumes will be used for all Buildkite hosted agent jobs in that cluster. A separate volume is created for each pipeline, when that pipeline is built for the first time. A container cache volume's name is based on your pipeline's slug followed by a slash, then "container-cache". For example, **pipeline-slug/container-cache**. You can view all of your current cluster's volumes through its **Cache Storage** > **Volumes** page.

##### Git mirror volumes

Git mirror volumes are a specialized type of volume designed to accelerate Git operations by caching the Git repository between builds. This is useful for large repositories that are slow to clone.

###### Enabling Git mirror volumes

To enable the Git mirror volumes feature for Buildkite hosted agents on your cluster:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the Buildkite cluster in which to enable the Git mirror volumes feature.
1.
Select **Cache Storage**, then select the **Settings** tab.
1. Select **Enable Git mirror**, then select **Save cache settings** to enable Git mirrors for the selected hosted cluster.

Once enabled, Git mirror volumes will be used for all Buildkite hosted agent jobs using Git repositories in that cluster. A separate volume is created for each repository, when the first pipeline whose source is that repository is built for the first time. A Git mirror volume's name is based on your cloud-based Git service's account and repository name, and begins with "buildkite-git-mirror-". For example, **buildkite-git-mirror-my-account-my-repository**. You can view all of your current cluster's volumes through its **Cache Storage** > **Volumes** page.

##### Configuring cache operation concurrency

When saving or restoring multiple cache volumes, the agent processes them concurrently. Control the number of concurrent operations using the `BUILDKITE_CACHE_CONCURRENCY` environment variable. The default is `2`. Increase this value to reduce overall cache operation time for pipelines that use many small cache volumes:

```yaml
steps:
  - command: "your-build-command"
    env:
      BUILDKITE_CACHE_CONCURRENCY: 4
    cache:
      paths:
        - "node_modules"
        - ".build"
        - "vendor/bundle"
```

Setting `BUILDKITE_CACHE_CONCURRENCY` to `0` or a negative value causes the agent to use the number of available CPU cores as the concurrency limit.

##### Viewing and deleting volumes

Deleting a [container cache](#container-cache-volumes) or [Git mirror](#git-mirror-volumes) volume, or any additional [local builder volume](/docs/agent/buildkite-hosted/linux/remote-docker-builders#additional-volumes) (also listed on the **Cache Storage** > **Volumes** page) may affect the build time for the associated pipelines until the new volume is established.

To view a list of volumes and delete one:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1.
Select the Buildkite cluster whose volume is to be deleted.
1. Select **Cache Storage**, then select the **Volumes** tab to view a list of all existing container cache and Git mirror volumes.
1. Select **Delete** for the volume you wish to remove.
1. Confirm the deletion by selecting **Delete Cache Volume**.

---

### Internal container registry

URL: https://buildkite.com/docs/agent/buildkite-hosted/internal-container-registry

#### Internal container registry

The _internal container registry_ is a feature of [Buildkite hosted agents](/docs/agent/buildkite-hosted), which allows you to house Docker images built by your pipelines.

> 📘 Enterprise plan feature
> The internal container registry feature is only available to Buildkite customers on [Enterprise](https://buildkite.com/pricing) plans.

##### Internal container registry overview

Once a [Buildkite cluster has been set up](/docs/pipelines/security/clusters/manage#setting-up-clusters), and its first [hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue) has been created, an internal container registry is created for this cluster, which you can use to manage [Open Container Initiative (OCI)](https://opencontainers.org/) images built by your pipelines on Buildkite hosted agents.

To use the internal container registry, you'll need to reference the pre-defined environment variable `$BUILDKITE_HOSTED_REGISTRY_URL` for the registry in Docker commands you use in your pipelines. The value of this environment variable defines the location of your cluster's internal container registry.

The main advantage of using your internal container registry over [cache volumes](/docs/agent/buildkite-hosted/cache-volumes) is that unlike cache volumes, the internal container registry's storage is _deterministic_, which means that any commands you use in your pipelines to interact with this registry will interact directly with the relevant data stored in this registry.
This is in contrast to the [non-deterministic nature of cache volumes](/docs/agent/buildkite-hosted/cache-volumes#lifecycle-non-deterministic-nature), where commands to retrieve data from your cache volume may instead retrieve it from a different source.

You can use built-in tools in your Buildkite hosted agents, such as [Docker Engine](https://docs.docker.com/engine/), as well as tools you can include in an [agent image](/docs/agent/buildkite-hosted/linux#agent-images) through a Dockerfile for Linux hosted agents, such as [Crane](https://michaelsauter.github.io/crane/index.html) or [skopeo](https://github.com/containers/skopeo), to interact with your internal container registry.

##### Using your internal container registry

The following example pipeline demonstrates how to build and push a custom Docker image (customized using a `.buildkite/Dockerfile.build` file) to your internal container registry. Once the built image has been pushed up to this registry, the pipeline then uses this image as the base image for its next step, [parallelized](/docs/pipelines/best-practices/parallel-builds#parallel-jobs) into three jobs.

```yaml
agents:
  # Must run on a hosted queue
  queue: "linux-small"

steps:
  - key: create_custom_base_image
    label: "\:docker\: Create custom base image"
    # Optionally only build on main branch
    # if: build.branch == "main"
    if_changed:
      - ".buildkite/Dockerfile.build"
      - ".buildkite/pipeline.yml"
    # Use the agent image specified in the queue settings for this step
    # Build and push a new image to the internal registry
    # Optionally add --no-cache to rebuild from scratch
    # without using cached layers
    command: |
      docker buildx build \
        --file .buildkite/Dockerfile.build \
        --build-arg BUILDKITE_BUILD_NUMBER="$$BUILDKITE_BUILD_NUMBER" \
        --platform linux/amd64 \
        --tag "${BUILDKITE_HOSTED_REGISTRY_URL}/base:latest" \
        --progress plain \
        --push .
  - key: use_custom_base_image
    label: ":package: Use custom base image"
    # Use the latest custom built image from the internal registry
    image: "${BUILDKITE_HOSTED_REGISTRY_URL}/base:latest"
    parallelism: 3
    depends_on: create_custom_base_image
    command: |
      echo "Using ${BUILDKITE_HOSTED_REGISTRY_URL}/base:latest built from Build #$(cat /build-number-marker)"
```

---

### Network security

URL: https://buildkite.com/docs/agent/buildkite-hosted/network-security

#### Network security

This page provides guidelines on how to secure the network in which your Buildkite hosted agents operate, which includes network communications between the Buildkite hosted agents platform, the Buildkite platform itself, and other services external to these platforms.

The primary recommendation is to secure these communications with [OIDC](/docs/pipelines/security/oidc), since OIDC tokens issued by Buildkite hosted agents, obtained through the [`buildkite-agent oidc` command](/docs/agent/cli/reference/oidc), can be used to verify that network communications originate from those agents, which in turn are associated with a specific Buildkite organization, pipeline, or metadata associated with a pipeline's job. Using OIDC tokens to secure these communications means that communication can be done securely over the public internet with HTTPS, without the need for VPNs.

For companies with VPN requirements, IP allowlists are typically used to control network access, and can serve as an alternative to OIDC tokens for securing these communications. The rest of this page provides details on how to obtain the relevant IP addresses, which you can use to configure IP allowlists for your firewalls and VPNs to secure your Buildkite hosted agents environment, as well as other network security [considerations](#considerations) and [best practices for build infrastructure segmentation](#considerations-build-infrastructure-segmentation-best-practices).
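For example, a pipeline step can present a Buildkite OIDC token to an internal service over HTTPS. In this sketch, the audience and service URL are hypothetical placeholders, and the service is assumed to be configured to validate Buildkite's OIDC tokens:

```yaml
steps:
  - label: ":lock: Deploy via internal service"
    command: |
      # Request a short-lived OIDC token identifying this job, then present
      # it to the internal service instead of a long-lived static credential.
      token=$(buildkite-agent oidc request-token --audience "https://internal.example.com")
      curl --fail -H "Authorization: Bearer ${token}" "https://internal.example.com/deploy"
```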
##### Buildkite hosted agent IP address ranges

While [Buildkite hosted agents are ephemeral by nature](/docs/agent/buildkite-hosted#how-buildkite-hosted-agents-work), they connect to the Buildkite platform through an IP address range, which you can use to configure allowlist settings in your network configurations.

###### Viewing your hosted agents' IP addresses

You can access your hosted agent IP addresses from the Buildkite interface:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster whose Buildkite hosted queues have the hosted agents whose IP addresses you wish to view.
1. Select **Networking** to open the **Network Ranges** page.

The IP address ranges of each Buildkite hosted queue's agents are listed on separate lines, which you can copy for your own networking configurations.

> 📘
> Be aware that these IP address ranges are not strictly static, and on rare occasions, these address ranges could change. On such occasions, however, Buildkite will aim to inform you of such events ahead of time, so that you can be prepared to update your network configurations accordingly.
> If you do require dedicated static IP addresses for your hosted agents, contact Support at support@buildkite.com.

##### Buildkite platform IP addresses

The Buildkite platform itself has a number of public egress IP addresses, which you may need to configure on your firewall's IP allowlist. Be aware that these egress IP addresses are different from the [IP address ranges of your Buildkite hosted agents](#buildkite-hosted-agent-ip-address-ranges), which originate from a different platform. To obtain these public egress IP addresses, query the [Meta API endpoint](/docs/apis/rest-api/meta).
##### Considerations

When using Buildkite hosted agents, be aware of the following network security considerations:

- Since the infrastructure of Buildkite hosted agents is shared across Buildkite customers, the IP address ranges for Buildkite hosted agents originate from a common source, and could be shared between different customers' configured hosted agents.
- Buildkite agents (regardless of whether they are part of a [Buildkite hosted or self-hosted environment and architecture](/docs/pipelines/architecture)) connect to the Buildkite platform over regular public internet connections using HTTPS.
- All communications use TLS encryption for data in transit.
- If you've configured webhooks and allowlists for [source control management (SCM) systems](/docs/pipelines/source-control), such as [GitHub Enterprise Server](/docs/pipelines/source-control/github-enterprise) or similar, set the [Buildkite platform's IP addresses](#buildkite-platform-ip-addresses) in these allowlists for status updates, and allow your SCM to post webhooks to `webhook.buildkite.com`. Alternatively, restrict the Buildkite platform to only accept webhooks from your outbound NAT IP addresses.
- When configuring Buildkite hosted agents to connect to internal services, many customers allowlist the [Buildkite hosted agent IP address ranges](#buildkite-hosted-agent-ip-address-ranges) to reach internal Git systems, artifact stores, and scanners (for example, static code analysis tools).

###### Build infrastructure segmentation best practices

Buildkite hosted agents are capable of providing a secure build environment suitable for building most customers' products, as hosted agents can be more convenient, less expensive to manage, and more secure than [self-hosted agents](/docs/pipelines/architecture#self-hosted-hybrid-architecture), especially for customers without dedicated security teams.
For organizations building products where a zero-trust build environment and infrastructure is required, the recommendation is to use self-hosted agents to build these products. Therefore, for the sake of convenience, cost, and security, your organization may require a blended build environment, where some products are built using Buildkite hosted agents, and other products (where zero-trust build infrastructure segmentation is required) are built using [Buildkite agents](/docs/agent) configured in your own self-hosted environments. Such a setup allows you to:

- Control network security rules directly.
- Implement dedicated VPN connections if required.
- Maintain network boundaries protected by your own security controls.

While Buildkite agents themselves do not require VPN software (because the agents communicate with the Buildkite platform over HTTPS), your internal systems can be protected behind VPN or firewall rules that only allow connections from allowlisted IP ranges. If you are running self-hosted agents inside your network, run these Buildkite agents on subnets behind your VPN or in your virtual private clouds (VPCs). Buildkite agents only make outbound requests, using HTTPS to the `agent.buildkite.com` address, and hence there is no need to configure inbound connections for such communication. This helps keep code, secrets, and internal traffic within your local environments.

---

### Overview

URL: https://buildkite.com/docs/agent/queues

#### Queues overview

Each [pipeline](/docs/pipelines/configure) has the ability to separate its jobs (defined by the pipeline's steps) using queues. This allows you to isolate a set of jobs or agents, or both, making sure that only specific agents will run the jobs intended for them.

Buildkite Pipelines allows you to configure two types of queues:

- [Self-hosted](/docs/agent/queues/managing#create-a-self-hosted-queue) queues, where you manage Buildkite agents in your own infrastructure.
- [Buildkite hosted](/docs/agent/queues/managing#create-a-buildkite-hosted-queue) queues, where Buildkite manages the agents for you as a fully-managed platform.

Learn more about how to create and manage queues in [Managing queues](/docs/agent/queues/managing). Common use cases for queues include deployment agents, and pools of agents for specific pipelines or teams.

##### Targeting a queue from a pipeline

Target specific queues (either [self-hosted](/docs/agent/queues/managing#create-a-self-hosted-queue) or [Buildkite hosted](/docs/agent/queues/managing#create-a-buildkite-hosted-queue) ones) using the `agents` attribute on your pipeline steps, or at the root level for the entire pipeline.

For example, the following pipeline would run on the `priority` queue, as determined by the root-level `agents` attribute (ignoring agents running on the `default` queue). The `tests.sh` build step targets only agents running on the `linux-medium-x86` queue.

```yaml
agents:
  queue: "priority"

steps:
  - command: echo "hello"
  - command: tests.sh
    agents:
      queue: "linux-medium-x86"
```

###### Alternative methods

[Branch patterns](/docs/pipelines/configure/workflows/branch-configuration) are another way to control what work is done. You can use branch patterns to determine which pipelines and steps run based on the branch name.

##### Assigning a self-hosted agent to a queue

A self-hosted agent can be assigned to a [self-hosted queue](/docs/agent/queues/managing#create-a-self-hosted-queue) using the [`tags` flag when starting the agent](/docs/agent/cli/reference/start#setting-tags), where the flag's value must contain a [`queue` tag](/docs/agent/cli/reference/start#the-queue-tag). The queue tag's value is the _key_ of the queue you're assigning this self-hosted agent to, where this key was defined when the [self-hosted queue was created](/docs/agent/queues/managing#create-a-self-hosted-queue).
For example, the following `--tags` flag of the `buildkite-agent start` command configures this agent to listen on the `linux-medium-x86` self-hosted queue, which is part of a _testing_ cluster:

```
buildkite-agent start --token "TESTING-CLUSTERS-AGENT-TOKEN-VALUE" --tags "queue=linux-medium-x86"
```

This configuration can be set at the [command line when starting the agent](/docs/agent/cli/reference/start), in the agent's [configuration file](/docs/agent/self-hosted/configure), or through an environment variable. Agents can only be assigned to a single self-hosted queue within a cluster.

> 📘 Ensure you have already configured your cluster's agent tokens and queues
> Your [clusters](/docs/pipelines/security/clusters/manage) and [queues](/docs/agent/queues/managing) should already be configured before starting your agents to target these queues.

By default, a pipeline's jobs run on the first available agent within a queue, ordered by how recently an agent within that queue successfully completed a job. You can, however, alter this behavior by changing the _priority_ of some or all of your agents. Learn more about this in [Buildkite agent prioritization](/docs/agent/self-hosted/prioritization).

###### The default self-hosted queue

If you don't assign a self-hosted agent to a [self-hosted queue](/docs/agent/queues/managing#create-a-self-hosted-queue) by [setting](/docs/agent/cli/reference/start#setting-tags) the agent's [queue tag](/docs/agent/cli/reference/start#the-queue-tag) (for example, `queue=linux-medium-x86`) when it is started, the agent will automatically be assigned to your configured _default_ self-hosted queue (for example, `queue=default`) and accept jobs from that queue.
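The queue tag can equivalently be set outside the command line, as mentioned above. A minimal sketch, assuming the agent's configuration file is at its default location for your platform:

```
# In the agent's configuration file (for example, buildkite-agent.cfg):
tags="queue=linux-medium-x86"

# Or as an environment variable, set before starting the agent:
export BUILDKITE_AGENT_TAGS="queue=linux-medium-x86"
```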
> 📘 Clusters without a default self-hosted queue configured
> If you start a self-hosted agent without explicitly specifying an existing self-hosted queue in your cluster _and_ a default [self-hosted queue](/docs/agent/queues/managing#create-a-self-hosted-queue) is not configured in this cluster, or your default queue is set to a [Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue), then your agent will fail to connect to the Buildkite platform.
> You must either explicitly specify an existing self-hosted queue within your [cluster](/docs/pipelines/security/clusters/manage) when starting the agent, or have a default self-hosted queue already configured in this cluster for the agent to connect successfully.

##### Setting up queues for unclustered agents

> 🚧 This section documents a deprecated Buildkite feature
> Learn more about unclustered agents and their tokens in [Working with unclustered agent tokens](/docs/agent/self-hosted/tokens#working-with-unclustered-agent-tokens).

For unclustered agents, queues are configured when starting a Buildkite agent. An unclustered agent can listen on a single queue or on multiple queues. For multiple queues, add as many extra `queue` tags as are required.

In the following example, the `--tags` flag of the `buildkite-agent start` command is used to configure this unclustered agent to listen on both the `development` and `testing` queues:

```
buildkite-agent start --token "UNCLUSTERED-AGENT-TOKEN-VALUE" --tags "queue=development,queue=testing"
```

---

### Managing

URL: https://buildkite.com/docs/agent/queues/managing

#### Managing queues

This page provides details on how to manage queues within a [cluster](/docs/pipelines/security/clusters/manage) of your Buildkite organization.

##### Setting up queues

A [_queue_](/docs/pipelines/glossary#queue) defines and manages [Buildkite agents](/docs/agent) within a cluster.
When a new Buildkite organization is created, along with the automatically created [default cluster](/docs/pipelines/security/clusters/manage#setting-up-clusters) (named **Default cluster**), a default queue (named **default-queue**) within this cluster is also created.

A cluster can be configured with multiple queues. Each queue can be used to route workloads to specific combinations of your [build/agent infrastructure](#agent-infrastructure), based on:

- Architecture (x86-64, arm64, Apple silicon, and so on)
- Size of agents (small, medium, large, extra large)
- Type of machine (macOS, Linux, Windows, and so on)

For example, you can set up dedicated queues such as `linux_medium_x86` and `mac_large_silicon`. Breaking down your infrastructure into individual queues like this makes it easier to scale groups of similar agents and get meaningful metrics from Buildkite.

##### Agent infrastructure

Buildkite provides support for managing [Buildkite agents](/docs/agent) either in your own self-hosted infrastructure, or in [Buildkite's own hosted infrastructure](/docs/agent/buildkite-hosted). When setting up a queue, you can choose between configuring it with Buildkite agents running in either of these types of infrastructure. Learn more about how to set up and create a queue using either self-hosted agents (known as a [self-hosted queue](#create-a-self-hosted-queue)) or Buildkite hosted agents (known as a [Buildkite hosted queue](#create-a-buildkite-hosted-queue)).

Be aware that it is not possible to create a queue that uses a mix of self-hosted and Buildkite hosted agents. If you do need to use a combination of these different agent types for your pipeline builds, create separate self-hosted and Buildkite hosted queues for these agents and use [agent or queue tags](/docs/agent/queues#assigning-a-self-hosted-agent-to-a-queue), or a combination of both, to target the appropriate queues.
Furthermore, once a queue has been created, it is not possible to change its type from a self-hosted to a Buildkite hosted queue, or vice versa. If you do need to change your type of agent infrastructure, use an existing queue with the appropriate hosted queue type, or create a new queue to suit your new agent infrastructure.

##### Create a self-hosted queue

Self-hosted queues use [Buildkite agents installed in your own infrastructure](/docs/agent/self-hosted/install) to run your pipeline builds.

New self-hosted queues can be created by a [cluster maintainer](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) or Buildkite organization administrator using the [Buildkite interface](#create-a-self-hosted-queue-using-the-buildkite-interface), as well as Buildkite's [REST API](#create-a-self-hosted-queue-using-the-rest-api) or [GraphQL API](#create-a-self-hosted-queue-using-the-graphql-api). For these API requests, the _cluster ID_ value submitted in the request identifies the target cluster the queue will be created in.

When you [create a new cluster](/docs/pipelines/security/clusters/manage#create-a-cluster) through the [Buildkite interface](/docs/pipelines/security/clusters/manage#create-a-cluster-using-the-buildkite-interface), this cluster automatically has an initial **default** queue.

Multiple self-hosted agents can connect to your self-hosted queue by ensuring that each agent is configured to use both of the following:

- The [cluster's agent token](/docs/agent/self-hosted/tokens#using-and-storing-tokens)
- The [agent tag](/docs/agent/queues#assigning-a-self-hosted-agent-to-a-queue) targeting your self-hosted queue

###### Using the Buildkite interface

To create a new self-hosted agent queue using the Buildkite interface:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster in which to create the new queue.
1. On the **Queues** page, select **New Queue** to open the **Create a new Queue** page.
1.
In the **Create a key** field, enter a unique _key_ for the queue, which can only contain letters, numbers, hyphens, and underscores, as valid characters.
1. Select the **Add description** checkbox to enter an optional longer description for the queue. This description appears under the queue's key, which is listed on the **Queues** page, as well as when viewing the queue's details.
1. In the **Select your agent infrastructure** section, select **Self hosted** for your agent infrastructure. **Note:** In the **Retry Agent Affinity** section, leave the default **Prefer Warmest Agent** setting unchanged. To learn more about this setting, see [Retry agent affinity](/docs/agent/self-hosted/prioritization#retry-agent-affinity). You can always change this setting later through your self-hosted queue's **Settings** tab.
1. Select **Create Queue**.

The new queue's details are displayed, indicating the queue's key and its description (if configured) underneath this key. Select **Queues** on the interface again to list all configured queues in your cluster.

> 📘
> A `key` can have a maximum length of 100 characters.

###### Using the REST API

To [create a new self-hosted agent queue](/docs/apis/rest-api/clusters/queues#create-a-self-hosted-queue) using the [REST API](/docs/apis/rest-api), run the following example `curl` command:

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/queues" \
  -H "Content-Type: application/json" \
  -d '{
    "key": "linux_small_amd",
    "description": "A small self-hosted AMD64 Linux agent."
  }'
```

where:

- `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite.
- `{org.slug}` can be obtained:
  * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite.
  * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example:

    ```bash
    curl -H "Authorization: Bearer $TOKEN" \
      -X GET "https://api.buildkite.com/v2/organizations"
    ```

- `{cluster.id}` can be obtained:
  * From the **Cluster Settings** page of your target cluster. To do this:
    1. Select **Agents** (in the global navigation) > the specific cluster > **Settings**.
    1. Once on the **Cluster Settings** page, copy the `id` parameter value from the **GraphQL API Integration** section, which is the `{cluster.id}` value.
  * By running the [List clusters](/docs/apis/rest-api/clusters#clusters-list-clusters) REST API query and obtaining this value from the `id` in the response associated with the name of your target cluster (specified by the `name` value in the response). For example:

    ```bash
    curl -H "Authorization: Bearer $TOKEN" \
      -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters"
    ```

- `key` (required) is displayed on the cluster's **Queues** pages, and this value can only contain letters, numbers, hyphens, and underscores, as valid characters.
- `description` (optional) is a longer description for the queue, which appears under the queue's key, when listed on the **Queues** page, as well as when viewing the queue's details.

###### Using the GraphQL API

To [create a new self-hosted agent queue](/docs/apis/graphql/cookbooks/clusters#create-a-self-hosted-queue) using the [GraphQL API](/docs/apis/graphql-api), run the following example mutation:

```graphql
mutation {
  clusterQueueCreate(
    input: {
      organizationId: "organization-id"
      clusterId: "cluster-id"
      key: "linux_small_amd"
      description: "A small self-hosted AMD64 Linux agent."
    }
  ) {
    clusterQueue {
      id
      uuid
      key
      description
      dispatchPaused
      hosted
      createdBy {
        id
        uuid
        name
        email
        avatar {
          url
        }
      }
    }
  }
}
```

where:

- `organizationId` (required) can be obtained:
  * From the **GraphQL API Integration** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite.
  * By running a `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, followed by a [getOrgId](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's `id` using the organization's slug. For example:

    Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations:

    ```graphql
    query getCurrentUsersOrgs {
      viewer {
        organizations {
          edges {
            node {
              name
              slug
            }
          }
        }
      }
    }
    ```

    Step 2. Run `getOrgId` with the appropriate slug value above to obtain this organization's `id` in the response:

    ```graphql
    query getOrgId {
      organization(slug: "organization-slug") {
        id
        uuid
        slug
      }
    }
    ```

    **Note:** The `organization-slug` value can also be obtained from the end of your Buildkite URL, by selecting **Pipelines** in the global navigation of your organization in Buildkite.

- `clusterId` (required) can be obtained:
  * From the **Cluster Settings** page of your target cluster. To do this:
    1. Select **Agents** (in the global navigation) > the specific cluster > **Settings**.
    1. Once on the **Cluster Settings** page, copy the `cluster` parameter value from the **GraphQL API Integration** section, which is the `cluster.id` value.
  * By running the [List clusters](/docs/apis/graphql/cookbooks/clusters#list-clusters) GraphQL API query and obtaining this value from the `id` in the response associated with the name of your target cluster (specified by the `name` value in the response).
For example: ```graphql query getClusters { organization(slug: "organization-slug") { clusters(first: 10) { edges { node { id name uuid color description } } } } } ``` - `key` (required) is displayed on the cluster's **Queues** pages, and this value can only contain letters, numbers, hyphens, and underscores, as valid characters. - `description` (optional) is a longer description for the queue, which appears under the queue's key, when listed on the **Queues** page, as well as when viewing the queue's details. ##### Create a Buildkite hosted queue Buildkite hosted queues use [Buildkite's hosted agent infrastructure](/docs/agent/buildkite-hosted) to run your pipeline builds. New Buildkite hosted queues can be created by a [cluster maintainer](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) or Buildkite organization administrator using the [Buildkite interface](#create-a-buildkite-hosted-queue-using-the-buildkite-interface), as well as Buildkite's [REST API](#create-a-buildkite-hosted-queue-using-the-rest-api) or [GraphQL API](#create-a-buildkite-hosted-queue-using-the-graphql-api). When you create a Buildkite hosted queue, you can choose the machine type (Linux or macOS) and the capacity (small, medium, large, or extra large), known as the _instance shape_, of the Buildkite hosted agents that will run your builds. Only one instance shape can be configured on a Buildkite hosted queue. However, depending on your pipeline's requirements, multiple Buildkite hosted agents of the queue's configured instance shape can be spawned automatically by Buildkite. ###### Using the Buildkite interface To create a new Buildkite hosted queue using the Buildkite interface: 1. Select **Agents** in the global navigation to access the **Clusters** page. 1. Select the cluster in which to create the new queue. 1. On the **Queues** page, select **New Queue** to open the **Create a new Queue** page. 1. 
In the **Create a key** field, enter a unique _key_ for the queue, which can only contain letters, numbers, hyphens, and underscores, as valid characters. 1. Select the **Add description** checkbox to enter an optional longer description for the queue. This description appears under the queue's key, which is listed on the **Queues** page, as well as when viewing the queue's details. 1. In the **Select your agent infrastructure** section, select **Hosted** for your agent infrastructure. 1. In the new **Configure your hosted agent infrastructure** section, select your **Machine type** ([**Linux**](/docs/agent/buildkite-hosted/linux) or [**macOS**](/docs/agent/buildkite-hosted/macos)). 1. If you selected **Linux**, within **Architecture**, you can choose between **AMD64** (the default and recommended) or **ARM64** architectures for the Linux machines running as hosted agents. To switch to **ARM64**, select **Change**, followed by **ARM64 (AArch64)**. 1. Select the appropriate **Capacity** for your hosted agent machine type (**Small**, **Medium** or **Large**). Take note of the additional information provided in the new **Hosted agents trial** section, which changes based on your selected **Capacity**. 1. Select **Create Queue**. The new queue's details are displayed, indicating the queue's key and its description (if configured) underneath this key. Select **Queues** on the interface again to list all configured queues in your cluster. 
###### Using the REST API

To [create a new Buildkite hosted queue](/docs/apis/rest-api/clusters/queues#create-a-buildkite-hosted-queue) using the [REST API](/docs/apis/rest-api), run the following example `curl` command:

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/queues" \
  -H "Content-Type: application/json" \
  -d '{
    "key": "mac_silicon",
    "description": "macOS agents running on Apple silicon architecture.",
    "hostedAgents": {
      "instanceShape": "MACOS_ARM64_M4_6X28"
    }
  }'
```

where:

- `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite.
- `{org.slug}` can be obtained:
  * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite.
  * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example:

    ```bash
    curl -H "Authorization: Bearer $TOKEN" \
      -X GET "https://api.buildkite.com/v2/organizations"
    ```

- `{cluster.id}` can be obtained:
  * From the **Cluster Settings** page of your target cluster. To do this:
    1. Select **Agents** (in the global navigation) > the specific cluster > **Settings**.
    1. Once on the **Cluster Settings** page, copy the `id` parameter value from the **GraphQL API Integration** section, which is the `{cluster.id}` value.
  * By running the [List clusters](/docs/apis/rest-api/clusters#clusters-list-clusters) REST API query and obtaining this value from the `id` in the response associated with the name of your target cluster (specified by the `name` value in the response).
    For example:

    ```bash
    curl -H "Authorization: Bearer $TOKEN" \
      -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters"
    ```

- `key` (required) is displayed on the cluster's **Queues** pages, and this value can only contain letters, numbers, hyphens, and underscores, as valid characters.
- `description` (optional) is a longer description for the queue, which appears under the queue's key, when listed on the **Queues** page, as well as when viewing the queue's details.
- `hostedAgents` (required) is an object that configures this queue to use [Buildkite hosted agents](/docs/agent/buildkite-hosted), which makes this a _Buildkite hosted queue_, and defines the instance shape (within its `instanceShape` parameter) for this queue's [Linux-](#create-a-buildkite-hosted-queue-instance-shape-values-for-linux) or [macOS-](#create-a-buildkite-hosted-queue-instance-shape-values-for-macos)based Buildkite hosted agent. For example:

    ```json
    "hostedAgents": {
      "instanceShape": "LINUX_AMD64_2X4"
    }
    ```

###### Using the GraphQL API

To [create a new Buildkite hosted queue](/docs/apis/graphql/cookbooks/hosted-agents#create-a-buildkite-hosted-queue) using the [GraphQL API](/docs/apis/graphql-api), run the following example mutation:

```graphql
mutation {
  clusterQueueCreate(
    input: {
      organizationId: "organization-id"
      clusterId: "cluster-id"
      key: "mac_silicon"
      description: "macOS agents running on Apple silicon architecture."
      hostedAgents: { instanceShape: MACOS_ARM64_M4_6X28 }
    }
  ) {
    clusterQueue {
      id
      uuid
      key
      description
      dispatchPaused
      hosted
      hostedAgents {
        instanceShape {
          name
          size
          vcpu
          memory
        }
      }
      createdBy {
        id
        uuid
        name
        email
        avatar {
          url
        }
      }
    }
  }
}
```

where:

- `organizationId` (required) can be obtained:
  * From the **GraphQL API Integration** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite.
  * By running a `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, followed by a [getOrgId](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's `id` using the organization's slug. For example:

    Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations:

    ```graphql
    query getCurrentUsersOrgs {
      viewer {
        organizations {
          edges {
            node {
              name
              slug
            }
          }
        }
      }
    }
    ```

    Step 2. Run `getOrgId` with the appropriate slug value above to obtain this organization's `id` in the response:

    ```graphql
    query getOrgId {
      organization(slug: "organization-slug") {
        id
        uuid
        slug
      }
    }
    ```

    **Note:** The `organization-slug` value can also be obtained from the end of your Buildkite URL, by selecting **Pipelines** in the global navigation of your organization in Buildkite.

- `clusterId` (required) can be obtained:
  * From the **Cluster Settings** page of your target cluster. To do this:
    1. Select **Agents** (in the global navigation) > the specific cluster > **Settings**.
    1. Once on the **Cluster Settings** page, copy the `cluster` parameter value from the **GraphQL API Integration** section, which is the `cluster.id` value.
  * By running the [List clusters](/docs/apis/graphql/cookbooks/clusters#list-clusters) GraphQL API query and obtaining this value from the `id` in the response associated with the name of your target cluster (specified by the `name` value in the response). For example:

    ```graphql
    query getClusters {
      organization(slug: "organization-slug") {
        clusters(first: 10) {
          edges {
            node {
              id
              name
              uuid
              color
              description
            }
          }
        }
      }
    }
    ```

- `key` (required) is displayed on the cluster's **Queues** pages, and this value can only contain letters, numbers, hyphens, and underscores, as valid characters.
- `description` (optional) is a longer description for the queue, which appears under the queue's key, when listed on the **Queues** page, as well as when viewing the queue's details.
- `hostedAgents` (required) is an object that configures this queue to use [Buildkite hosted agents](/docs/agent/buildkite-hosted), which makes this a _Buildkite hosted queue_, and defines the instance shape (within its `instanceShape` field) for this queue's [Linux-](#create-a-buildkite-hosted-queue-instance-shape-values-for-linux) or [macOS-](#create-a-buildkite-hosted-queue-instance-shape-values-for-macos)based Buildkite hosted agent. For example:

    ```graphql
    hostedAgents: { instanceShape: LINUX_AMD64_2X4 }
    ```

###### Instance shape values for Linux

Specify the appropriate **Instance shape** for the `instanceShape` value in your API call.

| Instance shape | Size | Architecture | vCPU | Memory | Disk space |
| --- | --- | --- | --- | --- | --- |
| `LINUX_AMD64_2X4` | Small | AMD64 | 2 | 4 GB | 47 GB |
| `LINUX_AMD64_4X16` | Medium | AMD64 | 4 | 16 GB | 95 GB |
| `LINUX_AMD64_8X32` | Large | AMD64 | 8 | 32 GB | 158 GB |
| `LINUX_AMD64_16X64` | Extra Large | AMD64 | 16 | 64 GB | 284 GB |
| `LINUX_ARM64_2X4` | Small | ARM64 | 2 | 4 GB | 47 GB |
| `LINUX_ARM64_4X16` | Medium | ARM64 | 4 | 16 GB | 95 GB |
| `LINUX_ARM64_8X32` | Large | ARM64 | 8 | 32 GB | 158 GB |
| `LINUX_ARM64_16X64` | Extra Large | ARM64 | 16 | 64 GB | 284 GB |

###### Instance shape values for macOS

Specify the appropriate **Instance shape** for the `instanceShape` value in your API call.

| Instance shape | Size | vCPU | Memory | Disk space |
| --- | --- | --- | --- | --- |
| `MACOS_ARM64_M4_6X28` | Medium | 6 | 28 GB | 182 GB |
| `MACOS_ARM64_M4_12X56` | Large | 12 | 56 GB | 294 GB |

**Note:** Shapes `MACOS_M2_4X7`, `MACOS_M2_6X14`, `MACOS_M2_12X28`, `MACOS_M4_12X56` were deprecated and removed on July 1, 2025.

##### Pause and resume a queue

You can pause a queue to prevent any jobs of the cluster's pipelines from being dispatched to agents associated with this queue. To pause a queue:

1.
Select **Agents** in the global navigation to access the **Clusters** page. 1. Select the cluster with the queue to pause. 1. On the **Queues** page, select the queue to pause. 1. On the queue's details page, select **Pause Queue**. 1. Enter an optional note in the confirmation dialog, and select **Pause Queue** to pause the queue. **Note:** Use this note to explain why you're pausing the queue. The note will be displayed on the queue's details page and on any affected builds. Jobs _already_ dispatched to agents in the queue before pausing will continue to run. New jobs that target the paused queue will wait until the queue is resumed. Since [trigger steps](/docs/pipelines/configure/step-types/trigger-step) do not rely on agents, these steps will run, unless they have dependencies waiting on the paused queue. The behavior of the triggered jobs depends on their configuration: - If a triggered job targets a paused queue, the job will wait until the queue is resumed. - If a triggered job does not target the paused queue, the job will run as usual. To resume a queue: 1. Select **Agents** in the global navigation to access the **Clusters** page. 1. Select the cluster with the queue to resume. 1. On the **Queues** page, select the queue to resume. 1. On the queue's details page, select **Resume Queue**. Jobs will resume being dispatched to the resumed queue as usual, including any jobs waiting to run. ###### Pause and resume an individual agent You can also pause an agent to prevent any jobs of the cluster's pipelines from being dispatched to that particular agent. Learn more in [Pausing and resuming an agent](/docs/agent/self-hosted/pausing-and-resuming). 
##### Queue connection status

Self-hosted queues served by a [stack](/docs/apis/agent-api/stacks) — an orchestration system such as the [Buildkite Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s) or the [Buildkite Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) — display a **Connected** or **Disconnected** status badge in the Buildkite Pipelines interface.

- **Connected**: The stack serving this queue is running and actively communicating with Buildkite.
- **Disconnected**: The stack has stopped reporting in. This can occur if the stack has been shut down, has lost connectivity to Buildkite, or has encountered an error.

If no badge is displayed, the queue has no stack registered against it. This is the case when agents are started manually rather than through a stack-based orchestration system.

##### Queue metrics

Clusters provides additional, easy-to-access queue metrics that are available only for queues within a cluster. Learn more in [Queue metrics in clusters](/docs/pipelines/insights/queue-metrics).

---

### Agent lifecycle

URL: https://buildkite.com/docs/agent/lifecycle

#### Agent lifecycle

The Buildkite agent goes through several stages during its operation: starting up, registering with Buildkite, receiving and running jobs, and shutting down. This page covers how the agent [receives jobs](#receiving-jobs), [handles signals](#signal-handling), the [exit codes](#exit-codes) it reports, and how to [troubleshoot](#troubleshooting) common lifecycle issues.

##### Receiving jobs

The methods by which agents receive jobs differ, depending on whether you are using [self-hosted](/docs/agent/self-hosted) or [Buildkite hosted](/docs/agent/buildkite-hosted) agents:

- For self-hosted agents, agents receive jobs either by polling Buildkite Pipelines (the Buildkite platform) for jobs, or by having jobs pushed to them from Buildkite Pipelines (through the _streaming job dispatch_ feature).
See [Job dispatch](/docs/agent/self-hosted/configure/job-dispatch).
- For Buildkite hosted agents, Buildkite handles the job dispatch processes internally.

##### Signal handling

When a build's job is canceled, the agent will send that job process a `SIGTERM` signal to allow it to exit gracefully. If the process does not exit within the 10-second grace period, it will be forcefully terminated with a `SIGKILL` signal. If you require a longer grace period, it can be customized on [self-hosted agents](/docs/agent/self-hosted) using the [cancel-grace-period](/docs/agent/self-hosted/configure#configuration-settings) agent configuration option.

The agent also accepts the following two signals directly:

- `SIGTERM` - Instructs the agent to gracefully disconnect, after completing any job that it may be running.
- `SIGQUIT` - Instructs the agent to forcefully disconnect, canceling any job that it may be running.

##### Exit codes

The agent reports its activity to Buildkite using exit codes. The most common exit codes and their descriptions can be found in the table below.

Exit code | Description
------------------- | -------------------------------------------------------------------
0 | The job exited with a status of 0 (success)
1 | The job exited with a status of 1 (most common error status)
94 | The checkout timed out waiting for a Git mirrors lock
128 + signal number | The job was terminated by a signal (see note below)
255 | The agent was gracefully terminated
-1 | Buildkite lost contact with the agent or it stopped reporting to us

> 📘 Jobs terminated by signals
> When a job is terminated by a signal, the exit code will be set to 128 + the signal number. For more information about how shells manage commands terminated by signals, see the Wikipedia page on [Exit Signals](https://en.wikipedia.org/wiki/Exit_status#Shell_and_scripts).
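The 128 + signal number rule can be sanity-checked with Python's standard `signal` module; this is an illustrative helper, not agent code:

```python
import signal


def job_exit_code(sig: signal.Signals) -> int:
    """Exit code reported for a job terminated by the given signal.

    Signal numbers are the usual POSIX values, so for example
    SIGTERM (15) maps to exit code 143.
    """
    return 128 + int(sig)
```

For instance, `job_exit_code(signal.SIGTERM)` yields the `143` that appears in the signals table below.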
Exit codes for common signals:

Exit code | Signal | Name | Description
--------- | ------ | ------- | --------------------------------------------
130 | 2 | SIGINT | Terminal interrupt signal
137 | 9 | SIGKILL | Kill (cannot be caught or ignored)
139 | 11 | SIGSEGV | Segmentation fault; invalid memory reference
141 | 13 | SIGPIPE | Write on a pipe with no one to read it
143 | 15 | SIGTERM | Termination signal (graceful)

###### Job exit codes and hooks

The final exit code reported for a job depends on which phase of the [job lifecycle](/docs/agent/hooks#job-lifecycle-hooks) failed. The agent tracks exit codes through two environment variables as the job progresses:

- `BUILDKITE_COMMAND_EXIT_STATUS`: Set after the command phase is completed. Contains the exit code from the command or `command`-related hook. This value is available to `post-command` and `pre-exit` hooks.
- `BUILDKITE_LAST_HOOK_EXIT_STATUS`: Set after each hook is completed. Contains the exit code of the most recently executed hook.

The final exit code reported to Buildkite Pipelines is determined as follows:

- If a `pre-command` hook or earlier hook fails, its exit code becomes the job exit code. The command does not run.
- If the command fails but all `post-command` and `pre-exit` hooks pass, the command's exit code (from `BUILDKITE_COMMAND_EXIT_STATUS`) becomes the job exit code.
- If a `post-command` or `pre-exit` hook fails with a non-zero exit code, the hook's exit code **overrides** the job exit code. This is true even if the command also failed with a different exit code.

For example, if a command exits with code `4` and then a `pre-exit` hook exits with code `6`, the final job exit code reported to Buildkite Pipelines is `6`, not `4`. The original command exit code is still available in the `BUILDKITE_COMMAND_EXIT_STATUS` environment variable.
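The precedence rules above can be condensed into a few lines. This is an illustrative model of the described behavior, not the agent's actual implementation:

```python
def final_exit_code(command_exit: int, later_hook_exits: list[int]) -> int:
    """Model the final job exit code: the command's status stands unless a
    later hook (post-command or pre-exit) fails, in which case the failing
    hook's exit code overrides it."""
    code = command_exit
    for hook_exit in later_hook_exits:
        if hook_exit != 0:
            code = hook_exit
    return code
```

Under this model, a command exiting `4` followed by a `pre-exit` hook exiting `6` yields a final job exit code of `6`, matching the worked example above.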
> 🚧 Pre-exit hooks can change the job exit code > If your `pre-exit` hook can fail, be aware that its exit code will replace the command's exit code as the final job result. This can affect automatic [retry](/docs/pipelines/configure/retry) rules that match on specific exit codes. To avoid this, ensure your `pre-exit` hook exits with code `0`, or handle errors within the hook itself. ##### Troubleshooting One issue you sometimes need to troubleshoot is when Buildkite loses contact with an agent, resulting in a `-1` exit code. After registering with the Buildkite API, an agent regularly sends heartbeat updates to indicate that it is operational. If the Buildkite API does not receive any heartbeat requests from an agent for three consecutive minutes, that agent is marked as lost within the next 60 seconds, and will not be assigned any further jobs. Various factors can cause an agent to fail to send heartbeat updates. Common reasons include networking issues and resource constraints, such as CPU, memory, or I/O limitations on the infrastructure hosting the agent. In such cases, check the agent logs and examine metrics related to networking, CPU, memory, and I/O to help identify the cause of the failed heartbeat updates. If the agents run on the [Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack) with spot instances, the abrupt termination of spot instances can also result in marking agents as lost. To investigate this issue, you can use the [log collector script](https://github.com/buildkite/elastic-ci-stack-for-aws?tab=readme-ov-file#collect-logs-via-script) to gather all relevant logs and metrics from the Elastic CI Stack for AWS. ###### Timeouts Occasionally, a job may time out if it exceeds the maximum allowed [command step timeout](/docs/pipelines/configure/build-timeouts). Depending on the `cancel-grace-period` set on the agent, the job may not complete gracefully, resulting in an unexpected exit code (`-1`). 
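As noted in the pre-exit admonition above, a `pre-exit` hook that performs fallible work is safest when its failures are contained so they never replace the command's exit code. A minimal sketch of such a hook (the cleanup path is hypothetical):

```shell
#!/usr/bin/env bash
# Hypothetical .buildkite/hooks/pre-exit sketch: best-effort cleanup.
# "|| true" prevents a failed rm from becoming the hook's (and therefore
# the job's) exit code.
rm -rf "${BUILDKITE_BUILD_CHECKOUT_PATH:-/tmp/example-checkout}/tmp-artifacts" || true
echo "pre-exit: cleanup finished"
```

Because the last command succeeds, the hook exits `0` and the command's own exit code is reported unchanged.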
---

### Hooks

URL: https://buildkite.com/docs/agent/hooks

#### Buildkite agent hooks

An agent goes through different phases in its [lifecycle](/docs/agent/lifecycle), including starting up, shutting down, and checking out code. Hooks let you extend or override the behavior of an agent at different stages of its lifecycle. You "hook into" the agent at a particular stage.

##### What's a hook?

A hook is a script executed or sourced by the Buildkite agent at a specific point in the job lifecycle. You can use hooks to extend or override the built-in behavior of an agent. Hooks are generally shell scripts, which the agent then executes or sources. The Buildkite agent v3.47.0 or later can run hooks written in any programming language that your development teams use. See the [polyglot hooks](#polyglot-hooks) section for more information.

> 📘
> Unless otherwise indicated, all sections and content covered on this page are applicable to both [self-hosted](/docs/agent/self-hosted) and [Buildkite hosted](/docs/agent/buildkite-hosted) agents.

##### Hook scopes

You can define hooks in the following locations:

- In the file system of the agent machine (called _agent hooks_, or more rarely _global hooks_).
- In your pipeline's repository (called _repository hooks_, or more rarely _local hooks_).
- In [plugins](/docs/pipelines/integrations/plugins) applied to steps.

For example, you could define an agent-wide `checkout` hook that spins up a fresh `git clone` on a new build machine, a repository `pre-command` hook that sets up repository-specific environment variables, or a plugin `environment` hook that fetches API keys from a secrets storage service.

There are two categories of hooks:

- Agent lifecycle (applicable to [self-hosted agents](/docs/agent/self-hosted) only)
- Job lifecycle

Agent lifecycle hooks are _executed_ by the Buildkite agent as part of the agent's lifecycle.
For example, the `pre-bootstrap` hook (self-hosted agents only) is executed before starting a job's bootstrap process, and the `agent-shutdown` hook is executed before the agent process terminates.

Job lifecycle hooks are _sourced_ (see "A note on sourcing" for specifics) by the Buildkite bootstrap in the different job phases. They run in a per-job shell environment, and any exported environment variables are carried to the job's subsequent phases and hooks. For example, the `environment` hook can modify or export new environment variables for the job's subsequent checkout and command phases. Shell options set by individual hooks, such as `set -e -o pipefail`, are not carried over to other phases or hooks.

**📝 A note on sourcing**

We use the word "sourcing" on this page, but it's not strictly correct. Instead, the agent uses a process called ["the scriptwrapper"](https://github.com/buildkite/agent/blob/1a5f05029cc363a984188c441f938dd316dedd16/hook/scriptwrapper.go) to run hooks. This process notes down the environment variables before a hook run, sources that hook, and compares the environment variables after the hook run to the environment variables before the hook run. Any environment variables added, changed, or removed are then exported to the subsequent phases and hooks. Functionally, this is very similar to how `source` would work, but it's not quite the same. If you're relying on some very specific pieces of shell scripting functionality, you might find that things don't work quite as you expect.

We do this because there's no shared bash environment between two different hooks on the same job. Functionally, each hook runs in its own shell, orchestrated through the agent's Go code. This means that if you set an environment variable in one hook, it wouldn't be available in the next hook without this scriptwrapper process.
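The scriptwrapper's comparison step can be modeled as a plain dictionary diff. This is an illustrative model of the mechanism (with hypothetical variable names), not the agent's actual Go code:

```python
def env_changes(before: dict, after: dict) -> dict:
    """Variables a hook added or changed; removed ones map to None."""
    changes = {key: value for key, value in after.items() if before.get(key) != value}
    changes.update({key: None for key in before if key not in after})
    return changes


# A hook run that changes one variable and exports a new one:
before = {"PATH": "/usr/bin", "DEPLOY_ENV": "staging"}
after = {"PATH": "/usr/bin", "DEPLOY_ENV": "production", "API_HOST": "example.test"}
```

Here `env_changes(before, after)` reports only `DEPLOY_ENV` and `API_HOST`, which is the set the agent would carry forward to subsequent phases and hooks.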
##### Hook locations

You can define hooks in the following locations:

- **Agent hooks:** These exist in a pre-configured directory on the agent file system. For [self-hosted agents](/docs/agent/self-hosted), this directory is created by your agent installer, and can be configured by the [`hooks-path`](/docs/agent/self-hosted/configure#hooks-path) setting. You can define both agent lifecycle hooks (self-hosted agents only) and job lifecycle hooks in the agent hooks location. Job lifecycle hooks defined here will run for every job the agent receives from any pipeline. **Note:** For [Buildkite hosted agents](/docs/agent/buildkite-hosted), agent hooks are supported on [Linux hosted agents](/docs/agent/buildkite-hosted/linux/custom-base-images#create-an-agent-image-using-agent-hooks) only. Agent hooks are not available on [macOS hosted agents](/docs/agent/buildkite-hosted/macos).
- **Repository hooks:** These exist in your pipeline repository's `.buildkite/hooks` directory and can define job lifecycle hooks. Job lifecycle hooks defined here will run for every pipeline that uses the repository. In scenarios where the current working directory is modified as part of the command or a post-command hook, this modification will cause these hooks to fail, as the `.buildkite/hooks` directory can no longer be found in its new directory path. Ensure that the working directory is not modified to avoid these issues.
- **Plugin hooks:** These are provided by [plugins](/docs/pipelines/integrations/plugins) you've included in your pipeline steps and can define job lifecycle hooks. Job lifecycle hooks defined by a plugin will only run for the step that includes them. Plugins can be *vendored* (if they are already present in the repository and included using a relative path) or *non-vendored* (when they are included from elsewhere), which affects the order they are run in.
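For instance, an agent-wide `environment` hook placed in the agent hooks directory could export variables for every job the agent runs. A minimal sketch (the variable name and value are hypothetical):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical agent "environment" hook: exported variables become part of
# the job's environment for its subsequent phases and hooks.
export DEPLOY_REGION="us-east-1"
echo "environment hook: DEPLOY_REGION=${DEPLOY_REGION}"
```

Because job lifecycle hooks are sourced, the exported `DEPLOY_REGION` is then visible to the checkout and command phases.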
###### Agent hooks

When an agent is set up, it creates a hooks directory:

- For [self-hosted agents](/docs/agent/self-hosted), you can find the location of your agent hooks directory in its relevant [installation](/docs/agent/self-hosted/install/) documentation. Self-hosted agents are provided with a number of sample hooks within this directory. To get started with one of these agent hooks, copy the relevant example script and remove the `.sample` file extension.
- For [Linux hosted agents](/docs/agent/buildkite-hosted/linux/custom-base-images#create-an-agent-image-using-agent-hooks), the agent hooks directory is `/buildkite/agent/hooks`. Currently, [Buildkite hosted agents for macOS](/docs/agent/buildkite-hosted/macos) do not support agent hooks. Instead, use either [repository](#hook-locations-repository-hooks)- or [plugin](#hook-locations-plugin-hooks)-based hooks with these types of agents.

See [agent lifecycle hooks](#agent-lifecycle-hooks) (self-hosted agents only) and [job lifecycle hooks](#job-lifecycle-hooks) for the hook types that you can define in the agent hooks directory.

###### Repository hooks

Repository hooks allow you to execute repository-specific scripts. Repository hooks live alongside your repository's source code under the `.buildkite/hooks` directory.

To get started, create a shell script in `.buildkite/hooks` named `post-checkout`. It will be sourced and run after your repository has been checked out as part of every job for any pipeline that uses this repository. You can define any of the [job lifecycle hooks](#job-lifecycle-hooks) whose `Order` includes *Repository*.

###### Plugin hooks

Plugin hooks allow plugins you've defined in your pipeline steps to override the default behavior. See the [plugin documentation](/docs/pipelines/integrations/plugins) for how to implement plugin hooks and [job lifecycle hooks](#job-lifecycle-hooks) for the list of hook types that a plugin can define.
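A repository hook can be as small as a single script. A sketch of the `.buildkite/hooks/post-checkout` described above, which simply reports the checked-out commit (the echoed message is illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Runs after checkout, for every job of every pipeline using this repository.
# BUILDKITE_COMMIT is set by the agent during a real job; a fallback keeps
# the sketch runnable outside one.
echo "post-checkout: at commit ${BUILDKITE_COMMIT:-unknown}"
```

Remember to make the script executable (`chmod +x .buildkite/hooks/post-checkout`) before committing it.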
##### Polyglot hooks Buildkite agent versions prior to v3.85.0 require hooks to be shell scripts. With Buildkite agent v3.85.0 or later, hooks are significantly more flexible and can be written in the programming language of your choice. In addition to the regular shell script hooks, polyglot hooks enable you to run two additional types of hooks: - **Interpreted hooks:** Hooks that are run by an interpreter, such as Python, Ruby, or Node.js. These hooks are run in the same way as shell script hooks, but are executed by the appropriate interpreter instead of by the shell. These hooks _must_ have a valid [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)) as the first line of the hook. For example, `#!/usr/bin/python3` or `#!/usr/bin/env ruby`. - **Binary hooks:** Binary executables produced by compiled languages such as Go, Rust, or C++. These hooks are run in the same way as shell script hooks, but are executed directly by the operating system. These hooks must be compiled for the correct operating system and architecture, and be executable by the agent user. > 🚧 Windows support > Interpreted hooks are not supported on Windows agents. Polyglot hooks are run transparently by the agent, and are not distinguished from shell script hooks in the logs or the Buildkite dashboard. The agent will automatically detect the type of hook (shell script, interpreted hook, or binary) and run it appropriately. All you need to do is place your hook in the correct location and ensure it's executable. ###### Extra environment variables When polyglot hooks are called, the following extra environment variables are set: - `BUILDKITE_HOOK_PHASE`: The lifecycle phase of the hook being run. For example, `environment` or `post-checkout`. See [job lifecycle hooks](#job-lifecycle-hooks) for the full list of phases. This enables the hook to determine the phase it's running in, allowing you to use the same hook for multiple phases.
- `BUILDKITE_HOOK_PATH`: The path to the hook being run. For example, `/path/to/my-hook`. - `BUILDKITE_HOOK_SCOPE`: The scope of the hook being run. For example, `global`, `local`, or `plugin`. > 📘 Modifying environment variable values > Be aware that when an agent is running a job, you can modify the values of these, as well as other environment variables within the agent, using its [internal job API](/docs/apis/agent-api/internal-job). ###### Caveats Polyglot hook usage comes with the following caveats: - Interpreted hooks are not supported on Windows. - Hooks must not have a file extension, except on Windows, where binary hooks must have the `.exe` extension. - For interpreted hooks, the specified interpreter must already be installed on the agent machine. The agent won't install the interpreter or any package dependencies for you. - Unlike shell hooks, environment variable changes are not automatically captured from polyglot hooks. If you want to modify the job's environment, you'll have to use the [Job API](/docs/agent/self-hosted/configure/experiments#promoted-experiments-job-api). ##### Agent lifecycle hooks Agent lifecycle hooks are only available to [self-hosted agents](/docs/agent/self-hosted), and not [Buildkite hosted agents](/docs/agent/buildkite-hosted). | Hook | Location Order | Description | | ---------------- | -------------- | ----------- | | `agent-startup` | Agent | Executed at agent startup, immediately prior to the agent being registered with Buildkite. Useful for initialising resources that will be used by all jobs that an agent runs, outside of the job lifecycle. Supported from agent version 3.42.0 and above. | | `agent-shutdown` | Agent | Executed when the agent shuts down. Useful for performing cleanup tasks for the entire agent, outside of the job lifecycle. | ###### Creating agent lifecycle hooks For [self-hosted agents](/docs/agent/self-hosted), the Buildkite agent executes agent lifecycle hooks.
These hooks can only be defined in the [agent `hooks-path`](#hook-locations-agent-hooks) directory. Agent lifecycle hooks can be executables written in any programming language. On Unix-like systems (such as Linux and macOS), hooks must be files that are executable by the user the agent is running as. Use agent lifecycle hooks to prepare for or clean up after all jobs that may run. For example, use `pre-bootstrap` to block unwanted jobs from running or use `agent-shutdown` to tear down a service after all jobs are finished. If your hook needs details about an individual job, use [job lifecycle hooks](#job-lifecycle-hooks) for those tasks instead. The agent exports only a few environment variables to agent lifecycle hooks. Read the [agent lifecycle hooks table](#agent-lifecycle-hooks) for details on the interface between the agent and each hook type. ##### Job lifecycle hooks Job lifecycle hooks are available to both [self-hosted](/docs/agent/self-hosted) and [Buildkite hosted](/docs/agent/buildkite-hosted) agents. The following is a complete list of available job hooks, and the order in which they are run as part of each job: | Hook | Location Order | Description | | --------------- | -------------- | ----------- | | `pre-bootstrap` (Self-hosted agents only) | Agent | Executed before any job is started. Useful for [adding strict checks](/docs/agent/self-hosted/security#restrict-access-by-the-buildkite-agent-controller-strict-checks-using-a-pre-bootstrap-hook) before jobs are permitted to run. This specific hook is only applicable to self-hosted agents. The proposed job command and environment are written to a file, and the path to this file is provided in the `BUILDKITE_ENV_FILE` environment variable. Use the contents of this file to determine whether to permit the job to run on this agent. If the `pre-bootstrap` hook terminates with an exit code of `0`, the job is permitted to run.
Any other exit code results in the job being rejected, and job failure being reported to the Buildkite API. | | `environment` | Agent Plugin (non-vendored) | Runs before all other hooks. Useful for [exporting secret keys](/docs/pipelines/security/secrets/managing#without-a-secrets-storage-service-exporting-secrets-with-environment-hooks). | | `pre-checkout` | Agent Plugin (non-vendored) | Runs before checkout. | | `checkout` | Plugin (non-vendored) Agent | Overrides the default git checkout behavior. (See [Hook exceptions](#job-lifecycle-hooks-hook-exceptions).) | | `post-checkout` | Agent Repository Plugin (non-vendored) | Runs after checkout. | | `environment` | Plugin (vendored) | Unlike other plugins, environment hooks for vendored plugins run after checkout. | | `pre-command` | Agent Repository Plugin (non-vendored) Plugin (vendored) | Runs before the build command. | | `command` | Plugin (non-vendored) Plugin (vendored) Repository Agent | Overrides the default command running behavior. (See [Hook exceptions](#job-lifecycle-hooks-hook-exceptions).) | | `post-command` | Agent Repository Plugin (non-vendored) Plugin (vendored) | Runs after the command. | | `pre-artifact` | Agent Repository Plugin (non-vendored) Plugin (vendored) | Runs before artifacts are uploaded, if an artifact upload pattern was defined for the job. | | `post-artifact` | Agent Repository Plugin (non-vendored) Plugin (vendored) | Runs after artifacts have been uploaded, if an artifact upload pattern was defined for the job. | | `pre-exit` | Agent Repository Plugin (non-vendored) Plugin (vendored) | Runs before the job finishes. Useful for performing cleanup tasks. | Each `command` job defined in a pipeline's `pipeline.yml` file runs independently of the others. Therefore, each defined hook will run for every one of these `command` jobs.
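For example, a `pre-exit` hook from the table above might clean up job state. Because it runs after the command phase, each cleanup step is guarded with `|| true` so a cleanup failure cannot replace an otherwise successful result. The scratch path and variable name here are hypothetical:

```shell
#!/usr/bin/env bash
# .buildkite/hooks/pre-exit — runs before the job finishes.

# Hypothetical scratch directory created earlier in the job.
scratch="${JOB_SCRATCH_DIR:-/tmp/example-job-scratch}"
mkdir -p "$scratch"   # stand-in for state the job left behind

# Guard each cleanup step: a non-zero exit from this hook would otherwise
# become the job's final exit code.
rm -rf "$scratch" || true

echo "pre-exit cleanup done"
```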
When defining multiple command items in a step using the `commands` attribute, such as the `pipeline.yml` example in [Command step attributes](/docs/pipelines/configure/step-types/command-step#command-step-attributes), each item in the `commands` list is concatenated and run as a single command. Therefore, a given hook will only run once for a given `commands` job consisting of multiple command items. ###### Hook failure behavior When a pipeline's job runs, the first point of failure causes the entire job to fail and terminate. In the table above, if any of the hooks above `command` (from `pre-bootstrap` to `pre-command`, inclusive) fails with a non-zero exit code, then the `command` phase of the pipeline job will not run. Since all the hooks below `command` (from `post-command` to `pre-exit`, inclusive) run _after_ the `command` phase of the pipeline job, any non-zero exit code from these hooks will still fail the entire job. Be aware, however, that any actions in the `command` phase of the pipeline job would have already run successfully. > 🚧 Pre-exit hooks can change the job exit code > If your `pre-exit` hook can fail, be aware that its exit code will replace the command's exit code as the final job result. This can affect automatic [retry](/docs/pipelines/configure/retry) rules that match on specific exit codes. To avoid this, ensure your `pre-exit` hook exits with code `0`, or handle errors within the hook itself. ###### Hook exceptions Typically, if there are multiple hooks of the same type, all of them will be run (in the order shown in the table). As of Buildkite agent v3.15.0, if multiple `checkout` or `command` hooks are found, only the first (of each type) will be run. This does not apply to other hook types. However, for legacy compatibility, there is an exception with *plugins*.
All `checkout` or `command` hooks provided by plugins will run in the order the plugins are specified, meaning multiple `checkout` and `command` hooks can run. Note that `checkout` hooks and `command` hooks provided by plugins will prevent any repository or agent hooks of the same type from running. ###### Creating job lifecycle hooks Job lifecycle hooks are sourced for every job an agent accepts. Use job lifecycle hooks to prepare for jobs, override the default behavior, or clean up after jobs that have finished. For example, use the `environment` hook to set a job's environment variables or the `pre-exit` hook to delete temporary files and remove containers. If your hook is related to the startup or shutdown of the agent, consider [agent lifecycle hooks](#agent-lifecycle-hooks) for those tasks instead. Job lifecycle hooks have access to all the standard [Buildkite environment variables](/docs/pipelines/configure/environment-variables). Job lifecycle hooks are copied to the `$TMPDIR` directory and *sourced* by the agent's default shell. This has a few implications: - `$BASH_SOURCE`: contains the location the hook is sourced from. - `$0`: contains the location of the copy of the script that is running from `$TMPDIR`. > 🚧 "Permission denied" error when trying to execute hooks > If your hooks don't execute, and throw a `Permission denied` error, it might mean that they were copied to a temporary directory on the agent that isn't executable. Configure the directory that hooks are copied to before execution using the `$TMPDIR` environment variable on the Buildkite agent, or make sure the existing directory is marked as executable. To write job lifecycle hooks in another programming language, you need to execute them from within the shell script, and explicitly pass any Buildkite environment variables you need to the script when you call it.
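That pattern can be sketched as a shell hook that delegates to a script in another language, passing the Buildkite variables it needs explicitly. The inline Python script and the variable passed are illustrative, and `BUILDKITE_BUILD_NUMBER` is only set during a real job, so it is defaulted here:

```shell
#!/usr/bin/env bash
# .buildkite/hooks/pre-command — wrapper that delegates to another language.
set -eu

build_number="${BUILDKITE_BUILD_NUMBER:-0}"

# Pass the variable explicitly as an argument to the delegated script.
out="$(python3 - "$build_number" <<'PY'
import sys
print(f"preparing build {sys.argv[1]}")
PY
)"
echo "$out"
```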
The following is an example of an `environment` hook which exports a GitHub API key for the pipeline's release build step: ```bash set -eu echo '--- \:house_with_garden\: Setting up the environment' export GITHUB_RELEASE_ACCESS_KEY='xxx' ``` ##### Job hooks on Windows Job hooks on Windows are only available to [self-hosted agents](/docs/agent/self-hosted), and not [Buildkite hosted agents](/docs/agent/buildkite-hosted). Buildkite defaults to using the Batch shell on Windows. Buildkite agents running on Windows require that either: - The hook files have a `.bat` extension and are written in [Windows Batch](https://en.wikipedia.org/wiki/Batch_file), or - The agent `shell` option points to the PowerShell or PowerShell Core executable, and the hook files are written in PowerShell. PowerShell hooks are supported in Buildkite agent version 3.32.3 and above. An example of a Windows `environment.bat` hook: ```batch @ECHO OFF ECHO "--- \:house_with_garden\: Setting up the environment" SET GITHUB_RELEASE_ACCESS_KEY='xxx' ``` ##### Hooks on Buildkite Agent Stack for Kubernetes The Buildkite Agent Stack for Kubernetes is a [self-hosted agents](/docs/agent/self-hosted) feature, and therefore, is not applicable to [Buildkite hosted agents](/docs/agent/buildkite-hosted). The hook execution flow for jobs created by the Buildkite Agent Stack for Kubernetes controller is operationally different, because hooks are executed from within separate containers for the checkout and command phases of the job's lifecycle. This means that any environment variables exported during the execution of hooks within the `checkout` container will _not_ be available to the command container(s). The main differences arise with the `checkout` container and user-defined `command` containers: - The `environment` hook is executed multiple times, once within the `checkout` container, and once within each of the user-defined `command` containers.
- Checkout-related hooks (`pre-checkout`, `checkout`, `post-checkout`) are only executed within the `checkout` container. - Command-related hooks (`pre-command`, `command`, `post-command`) are only executed within the `command` container(s). See the dedicated [Using agent hooks and plugins](/docs/agent/self-hosted/agent-stack-k8s/agent-hooks-and-plugins) page for detailed information on how agent hooks function when using the Buildkite Agent Stack for Kubernetes controller. --- ### Overview URL: https://buildkite.com/docs/agent/cli/reference #### Command-line reference overview The agent has a command-line interface (CLI) that lets you interact with and control the agent. The comprehensive command set lets you interact with Buildkite Pipelines, manage agent configuration, control job execution, and manipulate artifacts. These commands are essential for managing your build infrastructure, automating tasks, and troubleshooting issues. The agent CLI has the following commands and built-in help. Select a linked command to see more detailed help about it. `$ buildkite-agent --help Usage: buildkite-agent [options...] Available commands are: [start](/docs/agent/cli/reference/start) Starts a Buildkite agent acknowledgements Prints the licenses and notices of open source software incorporated into this software.
[tool](/docs/agent/cli/reference/tool) Utilities for working with the Buildkite Agent help, h Shows a list of commands or help for one command Commands that can be run within a Buildkite job: [annotate](/docs/agent/cli/reference/annotate) Annotate the build page in the Buildkite UI with information from within a Buildkite job [annotation](/docs/agent/cli/reference/annotation) Make changes to annotations on the currently running build [artifact](/docs/agent/cli/reference/artifact) Upload/download artifacts from Buildkite jobs [build](/docs/agent/cli/reference/build) Interact with a Buildkite build [job](/docs/agent/cli/reference/job) Interact with a Buildkite job [env](/docs/agent/cli/reference/env) Interact with the environment of the currently running build [lock](/docs/agent/cli/reference/lock) Lock or unlock resources for the currently running build [redactor](/docs/agent/cli/reference/redactor) Redact sensitive information from logs [meta-data](/docs/agent/cli/reference/meta-data) Get/set metadata from Buildkite jobs [oidc](/docs/agent/cli/reference/oidc) Interact with Buildkite OpenID Connect (OIDC) [pause](/docs/agent/cli/reference/pause) Pause the agent [pipeline](/docs/agent/cli/reference/pipeline) Make changes to the pipeline of the currently running build [resume](/docs/agent/cli/reference/resume) Resume the agent [secret](/docs/agent/cli/reference/secret) Interact with Pipelines Secrets [step](/docs/agent/cli/reference/step) Get or update an attribute of a build step, or cancel unfinished jobs for a step [stop](/docs/agent/cli/reference/stop) Stop the agent Internal commands, not intended to be run by users: [bootstrap](/docs/agent/cli/reference/bootstrap) Harness used internally by the agent to run jobs as subprocesses kubernetes-bootstrap Harness used internally by the agent to run jobs on Kubernetes git-credentials-helper Internal process used by hosted compute jobs to authenticate with GitHub Use "buildkite-agent --help" for more information about a
command. ` --- ### start URL: https://buildkite.com/docs/agent/cli/reference/start #### buildkite-agent start The Buildkite agent's `start` command is used to manually start an agent and register it with Buildkite. ##### Starting an agent ###### Usage `buildkite-agent start [options...]` ###### Description When a job is ready to run, it will call the "bootstrap-script" and pass it all the environment variables required for the job to run. This script is responsible for checking out the code, and running the actual build script defined in the pipeline. The agent will run any jobs within a PTY (pseudo terminal) if available. ###### Example ```shell $ buildkite-agent start --token xxx ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice.
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--config value` [#](#config) | Path to a configuration file **Environment variable**: `$BUILDKITE_AGENT_CONFIG` | `--name value` [#](#name) | The name of the agent **Environment variable**: `$BUILDKITE_AGENT_NAME` | `--priority value` [#](#priority) | The priority of the agent (higher priorities are assigned work first) **Environment variable**: `$BUILDKITE_AGENT_PRIORITY` | `--acquire-job value` [#](#acquire-job) | Start this agent and only run the specified job, disconnecting after it's finished **Environment variable**: `$BUILDKITE_AGENT_ACQUIRE_JOB` | `--reflect-exit-status ` [#](#reflect-exit-status) | When used with --acquire-job, causes the agent to exit with the same exit status as the job (default: false) **Environment variable**: `$BUILDKITE_AGENT_REFLECT_EXIT_STATUS` | `--disconnect-after-job ` [#](#disconnect-after-job) | Disconnect the agent after running exactly one job. When used in conjunction with the `--spawn` flag, each worker booted will run exactly one job (default: false) **Environment variable**: `$BUILDKITE_AGENT_DISCONNECT_AFTER_JOB` | `--disconnect-after-idle-timeout value` [#](#disconnect-after-idle-timeout) | The maximum idle time in seconds to wait for a job before disconnecting. The default of 0 means no timeout (default: 0) **Environment variable**: `$BUILDKITE_AGENT_DISCONNECT_AFTER_IDLE_TIMEOUT` | `--disconnect-after-uptime value` [#](#disconnect-after-uptime) | The maximum uptime in seconds before the agent stops accepting new jobs and shuts down after any running jobs complete. 
The default of 0 means no timeout (default: 0) **Environment variable**: `$BUILDKITE_AGENT_DISCONNECT_AFTER_UPTIME` | `--cancel-grace-period value` [#](#cancel-grace-period) | The number of seconds a canceled or timed out job is given to gracefully terminate and upload its artifacts (default: 10) **Environment variable**: `$BUILDKITE_CANCEL_GRACE_PERIOD` | `--enable-job-log-tmpfile ` [#](#enable-job-log-tmpfile) | Store the job logs in a temporary file `BUILDKITE_JOB_LOG_TMPFILE` that is accessible during the job and removed at the end of the job (default: false) **Environment variable**: `$BUILDKITE_ENABLE_JOB_LOG_TMPFILE` | `--job-log-path value` [#](#job-log-path) | Location to store job logs created by configuring `enable-job-log-tmpfile`, by default job log will be stored in TempDir **Environment variable**: `$BUILDKITE_JOB_LOG_PATH` | `--write-job-logs-to-stdout ` [#](#write-job-logs-to-stdout) | Writes job logs to the agent process' stdout. This simplifies log collection if running agents in Docker (default: false) **Environment variable**: `$BUILDKITE_WRITE_JOB_LOGS_TO_STDOUT` | `--shell value` [#](#shell) | The shell command used to interpret build commands, e.g /bin/bash -e -c (default: "/bin/bash -e -c") **Environment variable**: `$BUILDKITE_SHELL` | `--hooks-shell value` [#](#hooks-shell) | The shell command used to interpret hooks commands, e.g pwsh -Command **Environment variable**: `$BUILDKITE_HOOKS_SHELL` | `--queue value` [#](#queue) | The queue the agent will listen to for jobs. If not set, the agent will use the default queue. 
Overwrites the queue tag in the agent's tags **Environment variable**: `$BUILDKITE_AGENT_QUEUE` | `--tags value` [#](#tags) | A comma-separated list of tags for the agent (for example, "linux" or "mac,xcode=8") **Environment variable**: `$BUILDKITE_AGENT_TAGS` | `--tags-from-host ` [#](#tags-from-host) | Include tags from the host (hostname, machine-id, os) (default: false) **Environment variable**: `$BUILDKITE_AGENT_TAGS_FROM_HOST` | `--tags-from-ec2-meta-data value` [#](#tags-from-ec2-meta-data) | Include the default set of host EC2 meta-data as tags (instance-id, instance-type, ami-id, and instance-life-cycle) **Environment variable**: `$BUILDKITE_AGENT_TAGS_FROM_EC2_META_DATA` | `--tags-from-ec2-meta-data-paths value` [#](#tags-from-ec2-meta-data-paths) | Include additional tags fetched from EC2 meta-data using tag & path suffix pairs, e.g "tag_name=path/to/value" **Environment variable**: `$BUILDKITE_AGENT_TAGS_FROM_EC2_META_DATA_PATHS` | `--tags-from-ec2-tags ` [#](#tags-from-ec2-tags) | Include the host's EC2 tags as tags (default: false) **Environment variable**: `$BUILDKITE_AGENT_TAGS_FROM_EC2_TAGS` | `--tags-from-ecs-meta-data ` [#](#tags-from-ecs-meta-data) | Include the host's ECS meta-data as tags (container-name, image, and task-arn) (default: false) **Environment variable**: `$BUILDKITE_AGENT_TAGS_FROM_ECS_META_DATA` | `--tags-from-gcp-meta-data value` [#](#tags-from-gcp-meta-data) | Include the default set of host Google Cloud instance meta-data as tags (instance-id, machine-type, preemptible, project-id, region, and zone) **Environment variable**: `$BUILDKITE_AGENT_TAGS_FROM_GCP_META_DATA` | `--tags-from-gcp-meta-data-paths value` [#](#tags-from-gcp-meta-data-paths) | Include additional tags fetched from Google Cloud instance meta-data using tag & path suffix pairs, e.g "tag_name=path/to/value" **Environment variable**: `$BUILDKITE_AGENT_TAGS_FROM_GCP_META_DATA_PATHS` | `--tags-from-gcp-labels ` [#](#tags-from-gcp-labels) | Include the host's 
Google Cloud instance labels as tags (default: false) **Environment variable**: `$BUILDKITE_AGENT_TAGS_FROM_GCP_LABELS` | `--wait-for-ec2-tags-timeout value` [#](#wait-for-ec2-tags-timeout) | The amount of time to wait for tags from EC2 before proceeding (default: 10s) **Environment variable**: `$BUILDKITE_AGENT_WAIT_FOR_EC2_TAGS_TIMEOUT` | `--wait-for-ec2-meta-data-timeout value` [#](#wait-for-ec2-meta-data-timeout) | The amount of time to wait for meta-data from EC2 before proceeding (default: 10s) **Environment variable**: `$BUILDKITE_AGENT_WAIT_FOR_EC2_META_DATA_TIMEOUT` | `--wait-for-ecs-meta-data-timeout value` [#](#wait-for-ecs-meta-data-timeout) | The amount of time to wait for meta-data from ECS before proceeding (default: 10s) **Environment variable**: `$BUILDKITE_AGENT_WAIT_FOR_ECS_META_DATA_TIMEOUT` | `--wait-for-gcp-labels-timeout value` [#](#wait-for-gcp-labels-timeout) | The amount of time to wait for labels from GCP before proceeding (default: 10s) **Environment variable**: `$BUILDKITE_AGENT_WAIT_FOR_GCP_LABELS_TIMEOUT` | `--skip-checkout ` [#](#skip-checkout) | Skip the git checkout phase entirely **Environment variable**: `$BUILDKITE_SKIP_CHECKOUT` | `--git-checkout-flags value` [#](#git-checkout-flags) | Flags to pass to "git checkout" command (default: "-f") **Environment variable**: `$BUILDKITE_GIT_CHECKOUT_FLAGS` | `--git-clone-flags value` [#](#git-clone-flags) | Flags to pass to "git clone" command (default: "-v") **Environment variable**: `$BUILDKITE_GIT_CLONE_FLAGS` | `--git-clean-flags value` [#](#git-clean-flags) | Flags to pass to "git clean" command (default: "-ffxdq") **Environment variable**: `$BUILDKITE_GIT_CLEAN_FLAGS` | `--git-fetch-flags value` [#](#git-fetch-flags) | Flags to pass to "git fetch" command (default: "-v --prune") **Environment variable**: `$BUILDKITE_GIT_FETCH_FLAGS` | `--git-clone-mirror-flags value` [#](#git-clone-mirror-flags) | Flags to pass to "git clone" command when mirroring (default: "-v") **Environment 
variable**: `$BUILDKITE_GIT_CLONE_MIRROR_FLAGS` | `--git-mirrors-path value` [#](#git-mirrors-path) | Path to where mirrors of git repositories are stored **Environment variable**: `$BUILDKITE_GIT_MIRRORS_PATH` | `--git-mirror-checkout-mode value` [#](#git-mirror-checkout-mode) | Changes how clones of a mirror are made; available modes are [dissociate reference]. In `dissociate` mode, clones from a mirror uses the git clone `--dissociate` flag, which copies underlying objects from the mirror, making the clone robust to changes in the mirror such as garbage collection, at the expense of additional disk usage and setup time. `reference` mode does not pass `--dissociate`, which causes the clone to directly use objects from the mirror, which is more fragile and can cause the clone to break under entirely normal operation of the mirror, but is slightly faster to clone and uses less disk space. (default: "reference") **Environment variable**: `$BUILDKITE_GIT_MIRROR_CHECKOUT_MODE` | `--git-mirrors-lock-timeout value` [#](#git-mirrors-lock-timeout) | Seconds to lock a git mirror during clone, should exceed your longest checkout (default: 300) **Environment variable**: `$BUILDKITE_GIT_MIRRORS_LOCK_TIMEOUT` | `--git-mirrors-skip-update ` [#](#git-mirrors-skip-update) | Skip updating the Git mirror (default: false) **Environment variable**: `$BUILDKITE_GIT_MIRRORS_SKIP_UPDATE` | `--git-submodule-clone-config value` [#](#git-submodule-clone-config) | Comma separated key=value git config pairs applied before git submodule clone commands such as `update --init`. 
If the config is needed to be applied to all git commands, supply it in a global git config file for the system that the agent runs in instead **Environment variable**: `$BUILDKITE_GIT_SUBMODULE_CLONE_CONFIG` | `--git-skip-fetch-existing-commits ` [#](#git-skip-fetch-existing-commits) | Skip git fetch if the commit already exists in the local git directory (default: false) **Environment variable**: `$BUILDKITE_GIT_SKIP_FETCH_EXISTING_COMMITS` | `--checkout-attempts value` [#](#checkout-attempts) | Number of checkout attempts (including the initial attempt). Failed attempts are retried with exponential backoff (factor of 2, starting at 1s: 1s, 2s, 4s, ...) (default: 6) **Environment variable**: `$BUILDKITE_CHECKOUT_ATTEMPTS` | `--bootstrap-script value` [#](#bootstrap-script) | The command that is executed for bootstrapping a job, defaults to the bootstrap sub-command of this binary **Environment variable**: `$BUILDKITE_BOOTSTRAP_SCRIPT_PATH` | `--build-path value` [#](#build-path) | Path to where the builds will run from **Environment variable**: `$BUILDKITE_BUILD_PATH` | `--hooks-path value` [#](#hooks-path) | Directory where the hook scripts are found **Environment variable**: `$BUILDKITE_HOOKS_PATH` | `--additional-hooks-paths value` [#](#additional-hooks-paths) | Additional directories to look for agent hooks **Environment variable**: `$BUILDKITE_ADDITIONAL_HOOKS_PATHS` | `--sockets-path value` [#](#sockets-path) | Directory where the agent will place sockets (default: "$HOME/.buildkite-agent/sockets") **Environment variable**: `$BUILDKITE_SOCKETS_PATH` | `--plugins-path value` [#](#plugins-path) | Directory where the plugins are saved to **Environment variable**: `$BUILDKITE_PLUGINS_PATH` | `--no-ansi-timestamps ` [#](#no-ansi-timestamps) | Do not insert ANSI timestamp codes at the start of each line of job output (default: false) **Environment variable**: `$BUILDKITE_NO_ANSI_TIMESTAMPS` | `--timestamp-lines ` [#](#timestamp-lines) | Prepend timestamps on each 
line of job output. Has no effect unless --no-ansi-timestamps is also used (default: false) **Environment variable**: `$BUILDKITE_TIMESTAMP_LINES` | `--health-check-addr value` [#](#health-check-addr) | Start an HTTP server on this addr:port that returns whether the agent is healthy, disabled by default **Environment variable**: `$BUILDKITE_AGENT_HEALTH_CHECK_ADDR` | `--no-pty ` [#](#no-pty) | Do not run jobs within a pseudo terminal (default: false) **Environment variable**: `$BUILDKITE_NO_PTY` | `--no-ssh-keyscan ` [#](#no-ssh-keyscan) | Don't automatically run ssh-keyscan before checkout (default: false) **Environment variable**: `$BUILDKITE_NO_SSH_KEYSCAN` | `--no-command-eval ` [#](#no-command-eval) | Don't allow this agent to run arbitrary console commands, including plugins (default: false) **Environment variable**: `$BUILDKITE_NO_COMMAND_EVAL` | `--no-plugins ` [#](#no-plugins) | Don't allow this agent to load plugins (default: false) **Environment variable**: `$BUILDKITE_NO_PLUGINS` | `--no-plugin-validation ` [#](#no-plugin-validation) | Don't validate plugin configuration and requirements (default: true) **Environment variable**: `$BUILDKITE_NO_PLUGIN_VALIDATION` | `--plugins-always-clone-fresh ` [#](#plugins-always-clone-fresh) | Always make a new clone of plugin source, even if already present (default: false) **Environment variable**: `$BUILDKITE_PLUGINS_ALWAYS_CLONE_FRESH` | `--no-local-hooks ` [#](#no-local-hooks) | Don't allow local hooks to be run from checked out repositories (default: false) **Environment variable**: `$BUILDKITE_NO_LOCAL_HOOKS` | `--no-git-submodules ` [#](#no-git-submodules) | Don't automatically checkout git submodules (default: false) **Environment variables**: `$BUILDKITE_NO_GIT_SUBMODULES`, `$BUILDKITE_DISABLE_GIT_SUBMODULES` | `--no-feature-reporting ` [#](#no-feature-reporting) | Disables sending a list of enabled features back to the Buildkite mothership.
We use this information to measure feature usage, but if you're not comfortable sharing that information then that's totally okay :) (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_FEATURE_REPORTING` | `--allowed-repositories value` [#](#allowed-repositories) | A comma-separated list of regular expressions representing repositories the agent is allowed to clone (for example, "^git@github.com:buildkite/.*" or "^https://github.com/buildkite/.*") **Environment variable**: `$BUILDKITE_ALLOWED_REPOSITORIES` | `--enable-environment-variable-allowlist ` [#](#enable-environment-variable-allowlist) | Only run jobs where all environment variables are allowed by the allowed-environment-variables option, or have been set by Buildkite (default: false) **Environment variable**: `$BUILDKITE_ENABLE_ENVIRONMENT_VARIABLE_ALLOWLIST` | `--allowed-environment-variables value` [#](#allowed-environment-variables) | A comma-separated list of regular expressions representing environment variables the agent will pass to jobs (for example, "^MYAPP_.*$"). Environment variables set by Buildkite will always be allowed. 
Requires --enable-environment-variable-allowlist to be set **Environment variable**: `$BUILDKITE_ALLOWED_ENVIRONMENT_VARIABLES` | `--allowed-plugins value` [#](#allowed-plugins) | A comma-separated list of regular expressions representing plugins the agent is allowed to use (for example, "^buildkite-plugins/.*$" or "^/var/lib/buildkite-plugins/.*") **Environment variable**: `$BUILDKITE_ALLOWED_PLUGINS` | `--metrics-datadog ` [#](#metrics-datadog) | Send metrics to DogStatsD for Datadog (default: false) **Environment variable**: `$BUILDKITE_METRICS_DATADOG` | `--metrics-datadog-host value` [#](#metrics-datadog-host) | The dogstatsd instance to send metrics to using udp (default: "127.0.0.1:8125") **Environment variable**: `$BUILDKITE_METRICS_DATADOG_HOST` | `--metrics-datadog-distributions ` [#](#metrics-datadog-distributions) | Use Datadog Distributions for Timing metrics (default: false) **Environment variable**: `$BUILDKITE_METRICS_DATADOG_DISTRIBUTIONS` | `--log-format value` [#](#log-format) | The format to use for the logger output (default: "text") **Environment variable**: `$BUILDKITE_LOG_FORMAT` | `--spawn value` [#](#spawn) | The number of agents to spawn in parallel (mutually exclusive with --spawn-per-cpu) (default: 1) **Environment variable**: `$BUILDKITE_AGENT_SPAWN` | `--spawn-per-cpu value` [#](#spawn-per-cpu) | The number of agents to spawn per cpu in parallel (mutually exclusive with --spawn) (default: 0) **Environment variable**: `$BUILDKITE_AGENT_SPAWN_PER_CPU` | `--spawn-with-priority ` [#](#spawn-with-priority) | Assign priorities to every spawned agent (when using --spawn or --spawn-per-cpu) equal to the agent's index (default: false) **Environment variable**: `$BUILDKITE_AGENT_SPAWN_WITH_PRIORITY` | `--cancel-signal value` [#](#cancel-signal) | The signal to use for cancellation (default: "SIGTERM") **Environment variable**: `$BUILDKITE_CANCEL_SIGNAL` | `--signal-grace-period-seconds value` [#](#signal-grace-period-seconds) | The number of 
seconds given to a subprocess to handle being sent `cancel-signal`. After this period has elapsed, SIGKILL will be sent. Negative values are taken relative to `cancel-grace-period`. The default value (-1) means that the effective signal grace period is equal to `cancel-grace-period` minus 1. (default: -1) **Environment variable**: `$BUILDKITE_SIGNAL_GRACE_PERIOD_SECONDS` | `--tracing-backend value` [#](#tracing-backend) | Enable tracing for build jobs by specifying a backend, "datadog" or "opentelemetry" **Environment variable**: `$BUILDKITE_TRACING_BACKEND` | `--tracing-propagate-traceparent ` [#](#tracing-propagate-traceparent) | Enable accepting traceparent context from Buildkite control plane (only supported for OpenTelemetry backend) (default: false) **Environment variable**: `$BUILDKITE_TRACING_PROPAGATE_TRACEPARENT` | `--tracing-service-name value` [#](#tracing-service-name) | Service name to use when reporting traces. (default: "buildkite-agent") **Environment variable**: `$BUILDKITE_TRACING_SERVICE_NAME` | `--verification-jwks-file value` [#](#verification-jwks-file) | Path to a file containing a JSON Web Key Set (JWKS), used to verify job signatures. **Environment variable**: `$BUILDKITE_AGENT_VERIFICATION_JWKS_FILE` | `--signing-jwks-file value` [#](#signing-jwks-file) | Path to a file containing a signing key. Passing this flag enables pipeline signing for all pipelines uploaded by this agent. For hmac-sha256, the raw file content is used as the shared key. When using Docker containers to upload pipeline steps dynamically, use environment variable propagation (for example, "docker run -e BUILDKITE_AGENT_JWKS_FILE") to allow all steps within the pipeline to be signed. **Environment variable**: `$BUILDKITE_AGENT_SIGNING_JWKS_FILE` | `--signing-jwks-key-id value` [#](#signing-jwks-key-id) | The JWKS key ID to use when signing the pipeline. If omitted, and the signing JWKS contains only one key, that key will be used. 
**Environment variable**: `$BUILDKITE_AGENT_SIGNING_JWKS_KEY_ID` | `--signing-aws-kms-key value` [#](#signing-aws-kms-key) | The AWS KMS key ID, or key alias, used when signing and verifying the pipeline. **Environment variable**: `$BUILDKITE_AGENT_SIGNING_AWS_KMS_KEY` | `--signing-gcp-kms-key value` [#](#signing-gcp-kms-key) | The GCP KMS key resource name used when signing and verifying the pipeline. Format: projects/*/locations/*/keyRings/*/cryptoKeys/*/cryptoKeyVersions/* **Environment variable**: `$BUILDKITE_AGENT_SIGNING_GCP_KMS_KEY` | `--debug-signing ` [#](#debug-signing) | Enable debug logging for pipeline signing. This can potentially leak secrets to the logs as it prints each step in full before signing. Requires debug logging to be enabled (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_SIGNING` | `--verification-failure-behavior value` [#](#verification-failure-behavior) | The behavior when a job is received without a valid verifiable signature (without a signature, with an invalid signature, or with a signature that fails verification). One of: [block warn]. Defaults to block (default: "block") **Environment variable**: `$BUILDKITE_AGENT_JOB_VERIFICATION_NO_SIGNATURE_BEHAVIOR` | `--disable-warnings-for value` [#](#disable-warnings-for) | A list of warning IDs to disable **Environment variable**: `$BUILDKITE_AGENT_DISABLE_WARNINGS_FOR` | `--ping-mode value` [#](#ping-mode) | Selects available protocols for dispatching work to this agent. One of auto (default, prefer streaming, but fall back to polling when necessary), poll-only, or stream-only.
(default: "auto") **Environment variable**: `$BUILDKITE_AGENT_PING_MODE` | `--token value` [#](#token) | Your account agent token **Environment variable**: `$BUILDKITE_AGENT_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--kubernetes-exec ` [#](#kubernetes-exec) | This is intended to be used only by the Buildkite k8s stack (github.com/buildkite/agent-stack-k8s); it enables a Unix socket for transporting logs and exit statuses between containers in a pod (default: false) **Environment variable**: `$BUILDKITE_KUBERNETES_EXEC` | `--kubernetes-container-start-timeout value` [#](#kubernetes-container-start-timeout) | Timeout for waiting for all containers to start in a Kubernetes pod (default: 5m) (default: 5m0s) **Environment variable**: `$BUILDKITE_KUBERNETES_CONTAINER_START_TIMEOUT` | `--redacted-vars value` [#](#redacted-vars) | Pattern of environment variable names containing sensitive values (default: "*_PASSWORD", "*_SECRET", "*_TOKEN", "*_PRIVATE_KEY", "*_ACCESS_KEY", "*_SECRET_KEY", "*_CONNECTION_STRING", "*_API_KEY") **Environment variable**: `$BUILDKITE_REDACTED_VARS` | `--strict-single-hooks ` [#](#strict-single-hooks) | Enforces that only one checkout hook, and only one command hook, can be run 
(default: false) **Environment variable**: `$BUILDKITE_STRICT_SINGLE_HOOKS` | `--trace-context-encoding value` [#](#trace-context-encoding) | Sets the inner encoding for BUILDKITE_TRACE_CONTEXT. Must be either json or gob (default: "gob") **Environment variable**: `$BUILDKITE_TRACE_CONTEXT_ENCODING` | `--no-multipart-artifact-upload ` [#](#no-multipart-artifact-upload) | For Buildkite-hosted artifacts, disables the use of multipart uploads. Has no effect on uploads to other destinations such as custom cloud buckets (default: false) **Environment variable**: `$BUILDKITE_NO_MULTIPART_ARTIFACT_UPLOAD` | `--kubernetes-log-collection-grace-period value` [#](#kubernetes-log-collection-grace-period) | Deprecated, do not use (default: 50s) **Environment variable**: `$BUILDKITE_KUBERNETES_LOG_COLLECTION_GRACE_PERIOD` | `--tags-from-ec2 ` [#](#tags-from-ec2) | Include the host's EC2 meta-data as tags (instance-id, instance-type, and ami-id) **Environment variable**: `$BUILDKITE_AGENT_TAGS_FROM_EC2` | `--tags-from-gcp ` [#](#tags-from-gcp) | Include the host's Google Cloud instance meta-data as tags (instance-id, machine-type, preemptible, project-id, region, and zone) **Environment variable**: `$BUILDKITE_AGENT_TAGS_FROM_GCP` ##### Setting tags Each agent has tags (in 2.x we called this metadata) which can be used to group and target the agents in your build pipelines. This way you're free to dynamically scale your agents and target them based on their capabilities rather than maintaining a static list. 
To set an agent's tags, you can set them in the configuration file (`buildkite-agent.cfg`): ``` tags="docker=true,ruby2=true" ``` or with the `--tags` command line flag: ``` buildkite-agent start --tags "docker=true" --tags "ruby2=true" ``` or with the `BUILDKITE_AGENT_TAGS` environment variable: ``` env BUILDKITE_AGENT_TAGS="docker=true,ruby2=true" buildkite-agent start ``` ##### Agent targeting Once you've started agents with [tags](#setting-tags) you can target them in the build pipeline using agent query rules. Here's an example of targeting agents that are running with the tag `postgres` and value of `1.9.4`: ```yaml steps: - command: "script.sh" agents: postgres: "1.9.4" ``` You can also match any agent with a `postgres` tag by omitting the value after the `=` sign, or by using `*`, for example: ```yaml steps: - command: "script.sh" agents: postgres: '*' ``` Partial wildcard matching (for example, `postgres=1.9*` or `postgres=*1.9`) is not yet supported. > 📘 Setting agent defaults > Use a top-level `agents` block to [set defaults](/docs/pipelines/configure/defining-steps#step-defaults) for all steps in a pipeline. If you specify multiple tags, your build will only run on agents that have **all** the specified tags. ##### The queue tag The `queue` tag works differently from other tags, and can be used for isolating jobs and agents. See the [Queues overview](/docs/agent/queues) page for more information about using queues. If you specify a `queue` and [agent `tags`](#agent-targeting), your build will only run on agents that match **all** of the specified criteria. For example, if a job has the following agent targeting rules, an agent with both `queue=test` and `postgres=1.9.4` should be present. Otherwise, the job will not dispatch to an agent.
```yaml steps: - command: "script.sh" agents: postgres: '1.9.4' queue: test ``` ##### Sourcing tags from Amazon Web Services You can load an agent's tags from the underlying Amazon EC2 instance using `--tags-from-ec2-tags` for the instance tags and `--tags-from-ec2` to load the EC2 metadata (for example, instance name and machine type). ##### Sourcing tags from Google Cloud You can load an agent's tags from the underlying Google Cloud metadata using `--tags-from-gcp`. ##### Run a job on the agent that uploaded it (also known as node affinity) You can configure your agent and your pipeline steps so that the steps run on the same agent that performed `pipeline upload`. This is sometimes referred to as "node affinity", but note that what we describe here does not involve Kubernetes (where the term is more widely used). > 📘 Normally, we recommend against doing this. The usual practice is to allow jobs to run on whichever agent is available, or to target according to specific criteria (for example, you might want certain jobs to run on a particular operating system). Targeting a specific agent can cause reliability issues (the job can't run if the agent is offline), and can result in work being unevenly distributed between agents (which is inefficient). First, set the agent hostname tag. You can do this when starting the agent. This uses the system hostname: ```sh buildkite-agent start --tags "hostname=`hostname`" ``` Or you can add it to the agent config file, along with any other tags: ```txt tags="hostname=`hostname`" ``` Then, make sure you are using `pipeline upload` to upload a `pipeline.yml`. In Buildkite's YAML steps editor: ```yaml steps: - command: "buildkite-agent pipeline upload" ``` Finally, in your `pipeline.yml`, set `hostname: "$BUILDKITE_AGENT_META_DATA_HOSTNAME"` on any commands that you want to stick to the agent that uploaded the `pipeline.yml`. For example: ```yaml steps: - command: echo "I will stick!" 
agents: hostname: "$BUILDKITE_AGENT_META_DATA_HOSTNAME" - command: echo "I might not" ``` When Buildkite uploads the pipeline, `$BUILDKITE_AGENT_META_DATA_HOSTNAME` is replaced with the agent's hostname tag value. In effect, the previous example becomes: ```yaml steps: - command: echo "I will stick!" agents: hostname: "agents-computer-hostname" - command: echo "I might not" ``` This means the first step in the example can only run on an agent with the hostname "agents-computer-hostname". This is the hostname of the agent that uploaded the pipeline. The second step may run on the same agent, or a different one. ##### Run a single job `--acquire-job value` allows you to start an agent and only run the specified job, stopping the agent after it's finished. This means that when you start the agent, instead of it waiting for work, it sends a request to Buildkite to check if it can acquire (self-assign and accept) the job. Once the agent acquires the job, it runs it, and then stops when the job is complete. Jobs acquired via this method will ignore agent tags configured on a job. ###### Getting the job ID for a single job `value` is the job ID. There are several ways to find it: * Using the Build API's [Get a build](/docs/apis/rest-api/builds#get-a-build) endpoint. This returns build information, including all jobs in the build. * Through the [GraphQL API](/docs/apis/graphql-api). * The `BUILDKITE_JOB_ID` build environment variable. * In outbound [job event webhooks](/docs/apis/webhooks/pipelines/job_events). * Using the GUI: select a job, and the job ID is the final value in the URL. ###### When to use Normally, you don't set up an agent to run a specific job. Instead, you'll have a pool of agents running, waiting for Buildkite to send jobs to them. `--acquire-job` is useful if you want to create your own scheduler to run a specific job. 
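To make this concrete, here's a minimal sketch of the one-shot pattern a custom scheduler might use. The job UUID is a placeholder (in practice you'd obtain the real one through one of the methods listed above), and the agent token is assumed to be available in the `BUILDKITE_AGENT_TOKEN` environment variable:

```shell
# Placeholder job UUID — fetch the real one from the REST API, GraphQL API,
# or a job event webhook.
JOB_ID="00000000-0000-0000-0000-000000000000"

# Start a one-shot agent: it attempts to acquire only this job, runs it,
# and then stops once the job completes.
buildkite-agent start \
  --token "$BUILDKITE_AGENT_TOKEN" \
  --acquire-job "$JOB_ID"
```

If the job has already been acquired by another agent, or doesn't exist, the agent exits with an error instead of waiting for other work.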
--- ### annotate URL: https://buildkite.com/docs/agent/cli/reference/annotate #### buildkite-agent annotate The Buildkite agent's `annotate` command allows you to add additional information to Buildkite build pages using CommonMark Markdown. Learn more about how to use this command in [Annotations](/docs/pipelines/configure/annotations). ##### Creating an annotation The `buildkite-agent annotate` command creates an annotation associated with the current build. Options for the `annotate` command can be found in the `buildkite-agent` CLI help: ###### Usage `buildkite-agent annotate [body] [options...]` ###### Description Build annotations allow you to customize the Buildkite build interface to show information that may surface from your builds. Some examples include: - Links to artifacts generated by your jobs - Test result summaries - Graphs that include analysis about your codebase - Helpful information for team members about what happened during a build Annotations are written in CommonMark-compliant Markdown, with "GitHub Flavored Markdown" extensions. The annotation body can be supplied as a command line argument, or by piping content into the command. The maximum size of each annotation body is 1MiB. You can update an existing annotation's body by running the annotate command again and providing the same context as the one you want to update. If you leave the context blank, the default context is used. You can also update only the style of an existing annotation by omitting the body entirely and providing a new style value. ###### Example ```shell $ buildkite-agent annotate "All tests passed! 
:your-emoji: like :rocket:" $ cat annotation.md | buildkite-agent annotate --style "warning" $ buildkite-agent annotate --style "success" --context "junit" $ ./script/dynamic_annotation_generator | buildkite-agent annotate --style "success" ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. 
Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--context value` [#](#context) | The context of the annotation used to differentiate this annotation from others. This value has a limit of 100 characters. **Environment variable**: `$BUILDKITE_ANNOTATION_CONTEXT` | `--style value` [#](#style) | The style of the annotation (`success`, `info`, `warning` or `error`) **Environment variable**: `$BUILDKITE_ANNOTATION_STYLE` | `--append ` [#](#append) | Append to the body of an existing annotation (default: false) **Environment variable**: `$BUILDKITE_ANNOTATION_APPEND` | `--priority value` [#](#priority) | The priority of the annotation (`1` to `10`). Annotations with a priority of `10` are shown first, while annotations with a priority of `1` are shown last. (default: 3) **Environment variable**: `$BUILDKITE_ANNOTATION_PRIORITY` | `--job value` [#](#job) | Which job should the annotation come from **Environment variable**: `$BUILDKITE_JOB_ID` | `--scope value` [#](#scope) | The scope of the annotation, which will control where the annotation is displayed in the Buildkite UI. One of 'build', 'job' (default: "build") **Environment variable**: `$BUILDKITE_ANNOTATION_SCOPE` | `--redacted-vars value` [#](#redacted-vars) | Pattern of environment variable names containing sensitive values (default: "*_PASSWORD", "*_SECRET", "*_TOKEN", "*_PRIVATE_KEY", "*_ACCESS_KEY", "*_SECRET_KEY", "*_CONNECTION_STRING", "*_API_KEY") **Environment variable**: `$BUILDKITE_REDACTED_VARS` ##### Removing an annotation Annotations can be removed using [the `buildkite-agent annotation remove` command](/docs/agent/cli/reference/annotation). --- ### annotation URL: https://buildkite.com/docs/agent/cli/reference/annotation #### buildkite-agent annotation The Buildkite agent's `annotation` command allows manipulating existing build annotations. 
Learn more about how to use this command in [Annotations](/docs/pipelines/configure/annotations). Annotations are added using [the `buildkite-agent annotate` command](/docs/agent/cli/reference/annotate). ##### Removing an annotation The `buildkite-agent annotation remove` command removes an existing annotation associated with the current build. Options for the `annotation remove` command can be found in the `buildkite-agent` CLI help: ###### Usage `buildkite-agent annotation remove [arguments...]` ###### Description Remove an existing annotation which was previously published using the buildkite-agent annotate command. If you leave the context blank, the default context is used. ###### Example ```shell $ buildkite-agent annotation remove $ buildkite-agent annotation remove --context "remove-me" ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. 
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--context value` [#](#context) | The context of the annotation used to differentiate this annotation from others (default: "default") **Environment variable**: `$BUILDKITE_ANNOTATION_CONTEXT` | `--scope value` [#](#scope) | The scope of the annotation to remove. 
One of either 'build' or 'job' (default: "build") **Environment variable**: `$BUILDKITE_ANNOTATION_SCOPE` | `--job value` [#](#job) | Which job is removing the annotation **Environment variable**: `$BUILDKITE_JOB_ID` --- ### artifact URL: https://buildkite.com/docs/agent/cli/reference/artifact #### buildkite-agent artifact The Buildkite agent's `artifact` command provides support for uploading and downloading build artifacts, allowing you to share binary data between build steps no matter the machine or network. See the [Using build artifacts](/docs/pipelines/configure/artifacts) guide for a step-by-step example. ##### Uploading artifacts You can use this command in your build scripts to store artifacts. Artifacts are accessible using the web interface and can be downloaded by future build steps. Artifacts can be stored in the Buildkite-managed artifact store, or your own storage location, depending on how you have configured your Buildkite agent. Be aware that the Buildkite-managed artifact store has an upload size limit of 5GB per file/artifact. For documentation on configuring a custom storage location, see: - [Using your private AWS S3 bucket](#using-your-private-aws-s3-bucket) - [Using your private Google Cloud bucket](#using-your-private-google-cloud-bucket) - [Using your private Azure Blob container](#using-your-private-azure-blob-container) - [Using your Artifactory instance](#using-your-artifactory-instance) You can also configure the agent to automatically upload artifacts after your step's command has completed based on a file pattern (see the [Using build artifacts guide](/docs/pipelines/configure/artifacts) for details). ###### Usage `buildkite-agent artifact upload [options] <pattern> [destination]` ###### Description Uploads files to a job as artifacts. You need to ensure that the paths are surrounded by quotes; otherwise the built-in shell path globbing will provide the files, which is currently not supported. 
You can specify an alternate destination on Amazon S3, Google Cloud Storage or Artifactory as per the examples below. This may be specified in the 'destination' argument, or in the 'BUILDKITE_ARTIFACT_UPLOAD_DESTINATION' environment variable. Otherwise, artifacts are uploaded to a Buildkite-managed Amazon S3 bucket, where they’re retained for six months. ###### Example ```shell $ buildkite-agent artifact upload "log/**/*.log" ``` You can also upload directly to Amazon S3 if you'd like to host your own artifacts: ```shell $ export BUILDKITE_S3_ACCESS_KEY_ID=xxx $ export BUILDKITE_S3_SECRET_ACCESS_KEY=yyy $ export BUILDKITE_S3_DEFAULT_REGION=eu-central-1 # default is us-east-1 $ export BUILDKITE_S3_ACL=private # default is public-read $ buildkite-agent artifact upload "log/**/*.log" s3://name-of-your-s3-bucket/$BUILDKITE_JOB_ID ``` You can use Amazon IAM assumed roles by specifying the session token: ```shell $ export BUILDKITE_S3_SESSION_TOKEN=zzz ``` Or upload directly to Google Cloud Storage: ```shell $ export BUILDKITE_GS_ACL=private $ buildkite-agent artifact upload "log/**/*.log" gs://name-of-your-gs-bucket/$BUILDKITE_JOB_ID ``` Or upload directly to Artifactory: ```shell $ export BUILDKITE_ARTIFACTORY_URL=http://my-artifactory-instance.com/artifactory $ export BUILDKITE_ARTIFACTORY_USER=carol-danvers $ export BUILDKITE_ARTIFACTORY_PASSWORD=xxx $ buildkite-agent artifact upload "log/**/*.log" rt://name-of-your-artifactory-repo/$BUILDKITE_JOB_ID ``` By default, symlinks to directories will not be explored when resolving the glob, but symlinks to files will be uploaded as the linked files. To ignore symlinks to files, use: ```shell $ buildkite-agent artifact upload --upload-skip-symlinks "log/**/*.log" ``` Note: uploading symlinks to files without following them is not supported. 
If you need to preserve them in a directory, we recommend creating a tar archive: ```shell $ tar -cvf log.tar log/**/* $ buildkite-agent artifact upload log.tar ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. 
Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--job value` [#](#job) | Which job should the artifacts be uploaded to **Environment variable**: `$BUILDKITE_JOB_ID` | `--content-type value` [#](#content-type) | A specific Content-Type to set for the artifacts (otherwise detected) **Environment variable**: `$BUILDKITE_ARTIFACT_CONTENT_TYPE` | `--literal ` [#](#literal) | Disables parsing of the upload paths as glob patterns; each path will be treated as a single literal file path (default: false) **Environment variable**: `$BUILDKITE_AGENT_ARTIFACT_LITERAL` | `--delimiter value` [#](#delimiter) | Changes the delimiter used to split the upload paths into multiple paths; it can be more than 1 character. When set to the empty string, no splitting occurs (default: ";") **Environment variable**: `$BUILDKITE_AGENT_ARTIFACT_DELIMITER` | `--glob-resolve-follow-symlinks ` [#](#glob-resolve-follow-symlinks) | Follow symbolic links to directories while resolving globs. Note: this will not prevent symlinks to files from being uploaded. Use --upload-skip-symlinks to do that (default: false) **Environment variable**: `$BUILDKITE_AGENT_ARTIFACT_GLOB_RESOLVE_FOLLOW_SYMLINKS` | `--upload-skip-symlinks ` [#](#upload-skip-symlinks) | After the glob has been resolved to a list of files to upload, skip uploading those that are symlinks to files (default: false) **Environment variable**: `$BUILDKITE_ARTIFACT_UPLOAD_SKIP_SYMLINKS` | `--follow-symlinks --glob-resolve-follow-symlinks` [#](#follow-symlinks) | Follow symbolic links while resolving globs. Note this argument is deprecated. 
Use --glob-resolve-follow-symlinks instead (default: false) **Environment variable**: `$BUILDKITE_AGENT_ARTIFACT_SYMLINKS` | `--no-multipart-artifact-upload ` [#](#no-multipart-artifact-upload) | For Buildkite-hosted artifacts, disables the use of multipart uploads. Has no effect on uploads to other destinations such as custom cloud buckets (default: false) **Environment variable**: `$BUILDKITE_NO_MULTIPART_ARTIFACT_UPLOAD` ###### Artifact upload examples Uploading a specific file: ```bash buildkite-agent artifact upload log/test.log ``` Uploading all the jpegs and pngs, in all folders and subfolders: ```bash buildkite-agent artifact upload "*/**/*.jpg;*/**/*.jpeg;*/**/*.png" ``` Uploading all the log files in the log folder: ```bash buildkite-agent artifact upload "log/*.log" ``` Uploading all the files and folders inside the `coverage` directory: ```bash buildkite-agent artifact upload "coverage/**/*" ``` Uploading a file name with special characters, for example, `hello??.html`: ```bash buildkite-agent artifact upload "hello\?\?.html" ``` ###### Artifact upload glob syntax Glob path patterns are used throughout Buildkite for specifying artifact uploads. The source path you supply to the upload command will be replicated exactly at the destination. If you run: ```bash buildkite-agent artifact upload log/test.log ``` Buildkite will store the file at `log/test.log`. If you want it to be stored as `test.log` without the full path, then you'll need to change into the file's directory before running your upload command: ```bash cd log buildkite-agent artifact upload test.log ``` Learn more about Buildkite's glob syntax from the [Glob pattern syntax](/docs/pipelines/configure/glob-pattern-syntax) page. ##### Downloading artifacts Use this command in your build scripts to download artifacts. ###### Usage `buildkite-agent artifact download [options] <query> <destination>` ###### Description Downloads artifacts matching `<query>` from Buildkite to the `<destination>` directory on the local machine. 
Note: You need to ensure that your search query is surrounded by quotes if using a wild card, as the built-in shell path globbing will expand the wild card and break the query. If the last path component of `<destination>` matches the first path component of your `<query>`, the last component of `<destination>` is dropped from the final path. For example, a query of 'app/logs/*' with a destination of 'foo/app' will write any matched artifact files to 'foo/app/logs/', relative to the current working directory. You can also change working directory to the intended destination and use a `<destination>` of '.' to always create a directory hierarchy matching the artifact paths. ###### Example ```shell $ buildkite-agent artifact download "pkg/*.tar.gz" . --build xxx ``` This will search across all the artifacts for the build for files that match that query. The first argument is the search query, and the second argument is the download destination. If you're trying to download a specific file, and there are multiple artifacts from different jobs, you can target the particular job you want to download the artifact from: ```shell $ buildkite-agent artifact download "pkg/*.tar.gz" . --step "tests" --build xxx ``` You can also use the step's job ID (provided by the environment variable $BUILDKITE_JOB_ID). ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice.
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--step value` [#](#step) | Scope the search to a particular step. Can be the step's key or label, or a Job ID | `--build value` [#](#build) | The build that the artifacts were uploaded to **Environment variable**: `$BUILDKITE_BUILD_ID` | `--include-retried-jobs ` [#](#include-retried-jobs) | Include artifacts from retried jobs in the search (default: false) **Environment variable**: `$BUILDKITE_AGENT_INCLUDE_RETRIED_JOBS` ###### Artifact download examples Downloading a specific file into the current directory: ```bash buildkite-agent artifact download build.zip . 
``` Downloading a specific file into a specific directory (note the trailing slash): ```bash buildkite-agent artifact download build.zip tmp/ ``` Downloading all the files uploaded to `log` (including all subdirectories) into a local `log` directory (note that local directories will be created to match the uploaded file paths): ```bash buildkite-agent artifact download "log/*" . ``` Downloading all the files uploaded to `coverage` (including all subdirectories) into a local `tmp/coverage` directory (note that local directories are created to match the uploaded file path): ```bash buildkite-agent artifact download "coverage/*" tmp/ ``` Downloading all images (from any directory) into a local `images/` directory (note that local directories are created to match the uploaded file path, and that you can run multiple download commands into the same directory): ```bash buildkite-agent artifact download "*.jpg" images/ buildkite-agent artifact download "*.jpeg" images/ buildkite-agent artifact download "*.png" images/ ``` ###### Artifact download pattern syntax Artifact downloads support pattern-matching using the `*` character. Unlike artifact upload glob patterns, these operate over the entire path and not just between separator characters. For example, a download path pattern of `log/*` matches all files under the log directory and all subdirectories. There is no need to escape characters such as `?`, `[` and `]`. ##### Downloading artifacts outside a running build The `buildkite-agent artifact download` command relies on environment variables that are set by the agent while a build is running. For example, executing the `buildkite-agent artifact download` command on your local machine would return an error about missing environment variables. However, when this command is executed as part of a build, the agent has set the required variables, and the command will be able to run. 
If you want to download an artifact from outside a build, you can use the [Artifact Download API](/docs/apis/rest-api/artifacts#download-an-artifact). ##### Searching artifacts Return a list of artifacts that match a query. ###### Usage `buildkite-agent artifact search [options] <query>` ###### Description Searches for build artifacts specified by `<query>` on Buildkite. Note: You need to ensure that your search query is surrounded by quotes if using a wild card, as the built-in shell path globbing will expand the wild card and break the search. ###### Example ```shell $ buildkite-agent artifact search "pkg/*.tar.gz" --build xxx ``` This will search across all uploaded artifacts in a build for files that match that query. The first argument is the search query. If you're trying to find a specific file, and there are multiple artifacts from different jobs, you can target the particular job you want to search the artifacts from using --step: ```shell $ buildkite-agent artifact search "pkg/*.tar.gz" --step "tests" --build xxx ``` You can also use the step's job ID (provided by the environment variable $BUILDKITE_JOB_ID). Output formatting can be altered with the --format flag as follows: ```shell $ buildkite-agent artifact search "*" --format "%p\n" ``` The above will return a list of filenames separated by newlines. ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice.
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--step value` [#](#step) | Scope the search to a particular step by using either its name or job ID | `--build value` [#](#build) | The build that the artifacts were uploaded to **Environment variable**: `$BUILDKITE_BUILD_ID` | `--include-retried-jobs ` [#](#include-retried-jobs) | Include artifacts from retried jobs in the search (default: false) **Environment variable**: `$BUILDKITE_AGENT_INCLUDE_RETRIED_JOBS` | `--allow-empty-results ` [#](#allow-empty-results) | By default, searches exit 1 if there are no results. 
If this flag is set, searches will exit 0 with an empty set (default: false) | `--format value` [#](#format) | Output formatting of results. See below for listing of available format specifiers. (default: "%j %p %c\n") Format specifiers: %i UUID of the artifact %p Artifact path %c Artifact creation time (an ISO 8601 / RFC-3339 formatted UTC timestamp) %j UUID of the job that uploaded the artifact, helpful for subsequent artifact downloads %s File size of the artifact in bytes %S SHA1 checksum of the artifact %T SHA256 checksum of the artifact %u Download URL for the artifact, though consider using 'buildkite-agent artifact download' instead ##### Parallelized steps Currently, Buildkite does not support collating artifacts from parallelized steps under a single key. Thus, using the `--step` option with a parallelized step key will return only artifacts from the last completed step. If you are trying to collate artifacts from parallelized steps, it is best to upload these files with a unique path or name and omit the `--step` flag. ```bash buildkite-agent artifact download "artifacts/path/*" . --build $BUILDKITE_BUILD_ID ``` ##### Fetching the SHA of an artifact Use this command in your build scripts to verify downloaded artifacts against the original SHA-1 of the file. ###### Usage `buildkite-agent artifact shasum [options...]` ###### Description Prints the SHA-1 or SHA-256 hash for the single artifact specified by a search query. The hash is fetched from Buildkite's API, having been generated client-side by the agent during artifact upload. A search query that does not match exactly one artifact results in an error. Note: You need to ensure that your search query is surrounded by quotes if using a wild card, as the built-in shell path globbing will expand the wild card and break the download.
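The effect of omitting the quotes can be reproduced with only local files. A minimal demonstration in a POSIX shell (the `release-*.tar.gz` filenames and `/tmp/bk-glob-demo` directory are made up for the example; `buildkite-agent` itself is not involved):

```shell
# Without quotes, the shell expands the glob against the local working
# directory before the command ever sees it; with quotes, the literal
# pattern is passed through unchanged.
mkdir -p /tmp/bk-glob-demo && cd /tmp/bk-glob-demo
touch release-1.tar.gz release-2.tar.gz
printf '%s\n' *.tar.gz     # the shell expands the pattern to the local filenames
printf '%s\n' "*.tar.gz"   # the literal pattern is passed through
```

The same substitution happens to an unquoted query argument, which is why the agent would receive a list of local filenames rather than the pattern you intended.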
###### Example ```shell $ buildkite-agent artifact shasum "pkg/release.tar.gz" --build xxx ``` This will search for all files in the build with path "pkg/release.tar.gz", and if exactly one match is found, the SHA-1 hash generated during upload is printed. If you would like to target artifacts from a specific build step, you can do so by using the --step argument. ```shell $ buildkite-agent artifact shasum "pkg/release.tar.gz" --step "release" --build xxx ``` You can also use the step's job ID (provided by the environment variable $BUILDKITE_JOB_ID) The `--sha256` argument requests SHA-256 instead of SHA-1; this is only available for artifacts uploaded since SHA-256 support was added to the agent. ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. 
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. 
Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--sha256 ` [#](#sha256) | Request SHA-256 instead of SHA-1, errors if SHA-256 not available (default: false) | `--step value` [#](#step) | Scope the search to a particular step by its name or job ID | `--build value` [#](#build) | The build that the artifact was uploaded to **Environment variable**: `$BUILDKITE_BUILD_ID` | `--include-retried-jobs ` [#](#include-retried-jobs) | Include artifacts from retried jobs in the search (default: false) **Environment variable**: `$BUILDKITE_AGENT_INCLUDE_RETRIED_JOBS` ##### Using your private AWS S3 bucket You can configure the `buildkite-agent artifact` command to store artifacts in your private Amazon S3 bucket. To do so, you'll need to export some artifact environment variables. Environment Variable | Required | Default Value | Description --- | --- | --- | --- `BUILDKITE_ARTIFACT_UPLOAD_DESTINATION` | Yes | N/A | An S3 scheme URL for the bucket and path prefix, for example, s3://your-bucket/path/prefix/ `BUILDKITE_S3_DEFAULT_REGION` | No | N/A | Which AWS Region to use to locate your S3 bucket. If absent or blank, `buildkite-agent` will also consult `AWS_REGION`, `AWS_DEFAULT_REGION`, and finally the EC2 instance metadata service. `BUILDKITE_S3_ACL` | No | `public-read` | The S3 Object ACL to apply to uploads, one of `private`, `public-read`, `public-read-write`, `authenticated-read`, `bucket-owner-read`, `bucket-owner-full-control`. `BUILDKITE_S3_SSE_ENABLED` | No | `false` | If `true`, bucket uploads request AES256 server-side encryption. `BUILDKITE_S3_ACCESS_URL` | No | `https://$bucket.s3.amazonaws.com` | If set, overrides the base URL used for the artifact's location stored with the Buildkite API.
`BUILDKITE_S3_ENDPOINT` | No | N/A | URL of the self-hosted S3 compatible endpoint, for example, `https://instance_public_ip:port`. Note that you must still set the `BUILDKITE_ARTIFACT_UPLOAD_DESTINATION` environment variable when using this endpoint; however, its value is ignored during the artifact upload process, and artifacts are uploaded to the S3 compatible endpoint instead. You can set these environment variables from a variety of places. Exporting them from an [environment hook](/docs/agent/hooks#job-lifecycle-hooks) defined in your [agent `hooks-path` directory](/docs/agent/hooks#hook-locations-agent-hooks) ensures they are applied to all jobs: ```bash export BUILDKITE_ARTIFACT_UPLOAD_DESTINATION="s3://name-of-your-s3-bucket/$BUILDKITE_PIPELINE_ID/$BUILDKITE_BUILD_ID/$BUILDKITE_JOB_ID" export BUILDKITE_S3_DEFAULT_REGION="eu-central-1" # default: us-east-1 ``` ###### Uploading artifacts to multiple AWS S3 buckets in different regions To upload artifacts to multiple AWS S3 buckets in different regions within a single pipeline, configure the `BUILDKITE_ARTIFACT_UPLOAD_DESTINATION` and `BUILDKITE_S3_DEFAULT_REGION` environment variables at the step level. Defining these variables per step ensures that each upload uses the correct bucket and region.
For example, one step can target a bucket in `us-east-1`, while another targets a bucket in `eu-central-1`: ```yaml steps: - label: "Upload to us-east-1 bucket" command: - echo "hello world" > test1.txt - buildkite-agent artifact upload test1.txt env: BUILDKITE_S3_DEFAULT_REGION: "us-east-1" BUILDKITE_ARTIFACT_UPLOAD_DESTINATION: "s3://my-bucket-east/" - label: "Upload to eu-central-1 bucket" command: - echo "hello world" > test2.txt - buildkite-agent artifact upload test2.txt env: BUILDKITE_S3_DEFAULT_REGION: "eu-central-1" BUILDKITE_ARTIFACT_UPLOAD_DESTINATION: "s3://my-bucket-central/" ``` ###### IAM permissions Make sure your agent instances have the following IAM policy to read and write objects in the bucket, for example: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectVersion", "s3:GetObjectVersionAcl", "s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl", "s3:PutObjectVersionAcl" ], "Resource": [ "arn:aws:s3:::my-s3-bucket", "arn:aws:s3:::my-s3-bucket/*" ] } ] } ``` If you are using the Elastic CI Stack for AWS, provide your bucket name in the `ArtifactsBucket` template parameter for an appropriate IAM policy to be included in the instance's IAM role.
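Before attaching a policy like the one above, it can help to catch JSON typos locally. A minimal sketch, assuming `python3` is available for its standard-library JSON parser and using a hypothetical `policy.json` filename (the policy body here is a trimmed-down variant, not the full action list from above):

```shell
# Write the policy to a file, then fail fast on malformed JSON before
# attaching it with the AWS CLI or your infrastructure-as-code tool.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::my-s3-bucket", "arn:aws:s3:::my-s3-bucket/*"]
    }
  ]
}
EOF
python3 -m json.tool policy.json > /dev/null && echo "policy.json parses"
```

This only validates syntax; whether the actions and resources are sufficient still depends on the bucket setup described above.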
###### Credentials `buildkite-agent artifact upload` will use the first available AWS credentials from the following locations: - Buildkite environment variables, `BUILDKITE_S3_ACCESS_KEY_ID`, `BUILDKITE_S3_SECRET_ACCESS_KEY`, `BUILDKITE_S3_SESSION_TOKEN` - AWS environment variables, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN` - Web Identity environment variables, `AWS_ROLE_ARN`, `AWS_ROLE_SESSION_NAME`, `AWS_WEB_IDENTITY_TOKEN_FILE` - EC2 or ECS role, your EC2 instance or ECS task's IAM Role If your agents are running on an AWS EC2 instance, adding the policy above to the instance's [IAM Role](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) and using the instance profile credentials is the most secure option, as there are no long-lived credentials to manage. If your agents are running outside of AWS, or you're unable to use an instance profile, you can export [long-lived credentials](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) belonging to an IAM user using one of the environment variable groups listed above. See the [Managing pipeline secrets](/docs/pipelines/security/secrets/managing) documentation for how to securely set up these environment variables. ###### Access control By default, the agent will create objects with the [`public-read` ACL](https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl). This allows the artifact links in the Buildkite web interface to show the S3 object directly in the browser. You can set this to `private` instead by exporting a value for the `BUILDKITE_S3_ACL` environment variable: ```bash export BUILDKITE_S3_ACL="private" ``` If you set your S3 ACL to `private` you won't be able to click through to the artifacts in the Buildkite web interface.
You can use an authenticating S3 proxy such as [aws-s3-proxy](https://github.com/pottava/aws-s3-proxy) to provide web access protected by HTTP Basic authentication, which will allow you to view embedded assets such as HTML pages with images. To set the access URL for your artifacts, export a value for the `BUILDKITE_S3_ACCESS_URL` environment variable: ```bash export BUILDKITE_S3_ACCESS_URL="https://buildkite-artifacts.example.com/" ``` ##### Using your private Google Cloud bucket You can configure the `buildkite-agent artifact` command to store artifacts in your private Google Cloud Storage bucket. For instructions on how to set this up, see the [Google Cloud installation guide](/docs/agent/self-hosted/gcp#uploading-artifacts-to-google-cloud-storage). ##### Using your Artifactory instance You can configure the `buildkite-agent artifact` command to store artifacts in Artifactory. For instructions on how to set this up, see our [Artifactory guide](/docs/pipelines/integrations/artifacts-and-packages/artifactory). ##### Using your private Azure Blob container You can configure the `buildkite-agent artifact` command to store artifacts in your private [Azure Blob Storage container](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction). Support for uploading artifacts to Azure Blob Storage was added in [Agent v3.53.0](https://github.com/buildkite/agent/releases/tag/v3.53.0). ###### Preparation First, make sure that each agent has access to Azure credentials. [By default](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#readme-defaultazurecredential), these can be provided by: - Azure environment variables such as `AZURE_CLIENT_ID`. - A Kubernetes workload identity hook. - A host with Azure Managed Identity enabled. - A user logged in with the Azure CLI.
You can also use an account key or connection string by setting one of these environment variables: ```shell # To use an account key: export BUILDKITE_AZURE_BLOB_ACCESS_KEY='...' # To use a connection string: export BUILDKITE_AZURE_BLOB_CONNECTION_STRING='...' ``` Since these can contain access credentials, they are [redacted from job logs by default](/docs/pipelines/configure/managing-log-output#redacted-environment-variables). Make sure you have a valid storage account name and container. These can be created with the Azure web console or Azure CLI. Make sure the Azure principal for the Azure credential has a role assignment that permits reading and writing to the container, for example, [Storage Blob Data Contributor](https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor). ###### Configuration Configure the agent to target your container by exporting the `BUILDKITE_ARTIFACT_UPLOAD_DESTINATION` environment variable using an [environment agent hook](/docs/agent/hooks) (this cannot be set using the Buildkite web interface, API, or during pipeline upload). For example: ```shell export BUILDKITE_ARTIFACT_UPLOAD_DESTINATION="https://mystorageaccountname.blob.core.windows.net/my-container/$BUILDKITE_PIPELINE_ID/$BUILDKITE_BUILD_ID/$BUILDKITE_JOB_ID" ``` Alternatively, when running `buildkite-agent artifact upload` or `buildkite-agent artifact download`, you can specify the full upload destination in the form: ``` https://[storageaccountname].blob.core.windows.net/[container]/[path] ``` ###### Usage If you have not [explicitly enabled anonymous public access](https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-configure?tabs=portal) to data in your container, you won't have automatic access to your artifacts through the links in the Buildkite web interface.
To generate SAS (shared access signatures) as part of each artifact URL, which allow temporary access to your artifacts, you will need to set a token duration as well as use a shared key for the credential: ```shell # Provide a token duration; SAS URLs will expire after this length of time. export BUILDKITE_AZURE_BLOB_SAS_TOKEN_DURATION=1h # Generating SAS tokens requires an account key. export BUILDKITE_AZURE_BLOB_ACCOUNT_KEY='...' ``` --- ### bootstrap URL: https://buildkite.com/docs/agent/cli/reference/bootstrap #### buildkite-agent bootstrap `bootstrap` is the command the agent executes to run a job on your machine. ##### Running the bootstrap ###### Usage `buildkite-agent bootstrap [options...]` ###### Description The bootstrap command executes a Buildkite job locally. Generally, the bootstrap command is run as a sub-process of the buildkite-agent to execute a given job sent from buildkite.com, but you can also invoke the bootstrap manually. Execution is broken down into phases. By default, the bootstrap runs a plugin phase, which sets up any specified plugins, then a checkout phase, which pulls down your code, and then a command phase, which executes the specified command in the created environment. You can run only specific phases with the --phases flag. The bootstrap is also responsible for executing hooks around the phases. See https://buildkite.com/docs/agent/v3/hooks for more details.
###### Example ```shell $ eval $(curl -s -H "Authorization: Bearer xxx" \ "https://api.buildkite.com/v2/organizations/[org]/pipelines/[proj]/builds/[build]/jobs/[job]/env.txt" | \ sed 's/^/export /' \ ) $ buildkite-agent bootstrap --build-path builds ``` ###### Options | `--command value` [#](#command) | The command to run **Environment variable**: `$BUILDKITE_COMMAND` | `--job value` [#](#job) | The ID of the job being run **Environment variable**: `$BUILDKITE_JOB_ID` | `--repository value` [#](#repository) | The repository to clone and run the job from **Environment variable**: `$BUILDKITE_REPO` | `--commit value` [#](#commit) | The commit to checkout in the repository **Environment variable**: `$BUILDKITE_COMMIT` | `--branch value` [#](#branch) | The branch the commit is in **Environment variable**: `$BUILDKITE_BRANCH` | `--tag value` [#](#tag) | The tag associated with the commit **Environment variable**: `$BUILDKITE_TAG` | `--refspec value` [#](#refspec) | Optional refspec to override git fetch **Environment variable**: `$BUILDKITE_REFSPEC` | `--plugins value` [#](#plugins) | The plugins for the job **Environment variable**: `$BUILDKITE_PLUGINS` | `--secrets value` [#](#secrets) | Secrets to be loaded into the job environment **Environment variable**: `$BUILDKITE_SECRETS_CONFIG` | `--pullrequest value` [#](#pullrequest) | The number/ID of the pull request this commit belonged to **Environment variable**: `$BUILDKITE_PULL_REQUEST` | `--pull-request-using-merge-refspec ` [#](#pull-request-using-merge-refspec) | Whether the agent should attempt to checkout the pull request commit using the merge refspec.
This feature is in private preview and requires backend enablement—contact support to enable (default: false) **Environment variable**: `$BUILDKITE_PULL_REQUEST_USING_MERGE_REFSPEC` | `--agent value` [#](#agent) | The name of the agent running the job **Environment variable**: `$BUILDKITE_AGENT_NAME` | `--queue value` [#](#queue) | The name of the queue the agent belongs to, if tagged **Environment variable**: `$BUILDKITE_AGENT_META_DATA_QUEUE` | `--organization value` [#](#organization) | The slug of the organization that the job is a part of **Environment variable**: `$BUILDKITE_ORGANIZATION_SLUG` | `--pipeline value` [#](#pipeline) | The slug of the pipeline that the job is a part of **Environment variable**: `$BUILDKITE_PIPELINE_SLUG` | `--pipeline-provider value` [#](#pipeline-provider) | The id of the SCM provider that the repository is hosted on **Environment variable**: `$BUILDKITE_PIPELINE_PROVIDER` | `--artifact-upload-paths value` [#](#artifact-upload-paths) | Paths to files to automatically upload at the end of a job **Environment variable**: `$BUILDKITE_ARTIFACT_PATHS` | `--artifact-upload-destination value` [#](#artifact-upload-destination) | A custom location to upload artifact paths to (for example, s3://my-custom-bucket/and/prefix) **Environment variable**: `$BUILDKITE_ARTIFACT_UPLOAD_DESTINATION` | `--clean-checkout ` [#](#clean-checkout) | Whether or not the bootstrap should remove the existing repository before running the command (default: false) **Environment variable**: `$BUILDKITE_CLEAN_CHECKOUT` | `--skip-checkout ` [#](#skip-checkout) | Skip the git checkout phase entirely **Environment variable**: `$BUILDKITE_SKIP_CHECKOUT` | `--git-checkout-flags value` [#](#git-checkout-flags) | Flags to pass to "git checkout" command (default: "-f") **Environment variable**: `$BUILDKITE_GIT_CHECKOUT_FLAGS` | `--git-clone-flags value` [#](#git-clone-flags) | Flags to pass to "git clone" command (default: "-v") **Environment variable**: 
`$BUILDKITE_GIT_CLONE_FLAGS` | `--git-clone-mirror-flags value` [#](#git-clone-mirror-flags) | Flags to pass to "git clone" command when mirroring (default: "-v") **Environment variable**: `$BUILDKITE_GIT_CLONE_MIRROR_FLAGS` | `--git-clean-flags value` [#](#git-clean-flags) | Flags to pass to "git clean" command (default: "-ffxdq") **Environment variable**: `$BUILDKITE_GIT_CLEAN_FLAGS` | `--git-fetch-flags value` [#](#git-fetch-flags) | Flags to pass to "git fetch" command (default: "-v --prune") **Environment variable**: `$BUILDKITE_GIT_FETCH_FLAGS` | `--git-mirrors-path value` [#](#git-mirrors-path) | Path to where mirrors of git repositories are stored **Environment variable**: `$BUILDKITE_GIT_MIRRORS_PATH` | `--git-mirror-checkout-mode value` [#](#git-mirror-checkout-mode) | Changes how clones of a mirror are made; available modes are [dissociate reference]. In `dissociate` mode, clones from a mirror use the git clone `--dissociate` flag, which copies underlying objects from the mirror, making the clone robust to changes in the mirror such as garbage collection, at the expense of additional disk usage and setup time. `reference` mode does not pass `--dissociate`, which causes the clone to directly use objects from the mirror; this is more fragile and can cause the clone to break under entirely normal operation of the mirror, but is slightly faster to clone and uses less disk space.
(default: "reference") **Environment variable**: `$BUILDKITE_GIT_MIRROR_CHECKOUT_MODE` | `--git-mirrors-lock-timeout value` [#](#git-mirrors-lock-timeout) | Seconds to lock a git mirror during clone; should exceed your longest checkout (default: 300) **Environment variable**: `$BUILDKITE_GIT_MIRRORS_LOCK_TIMEOUT` | `--git-mirrors-skip-update ` [#](#git-mirrors-skip-update) | Skip updating the Git mirror (default: false) **Environment variable**: `$BUILDKITE_GIT_MIRRORS_SKIP_UPDATE` | `--git-submodule-clone-config value` [#](#git-submodule-clone-config) | Comma-separated key=value git config pairs applied before git submodule clone commands such as `update --init`. If the config needs to apply to all git commands, supply it instead in a global git config file on the system the agent runs on **Environment variable**: `$BUILDKITE_GIT_SUBMODULE_CLONE_CONFIG` | `--git-skip-fetch-existing-commits ` [#](#git-skip-fetch-existing-commits) | Skip git fetch if the commit already exists in the local git directory (default: false) **Environment variable**: `$BUILDKITE_GIT_SKIP_FETCH_EXISTING_COMMITS` | `--checkout-attempts value` [#](#checkout-attempts) | Number of checkout attempts (including the initial attempt). Failed attempts are retried with exponential backoff (factor of 2, starting at 1s: 1s, 2s, 4s, ...)
(default: 6) **Environment variable**: `$BUILDKITE_CHECKOUT_ATTEMPTS` | `--bin-path value` [#](#bin-path) | Directory where the buildkite-agent binary lives **Environment variable**: `$BUILDKITE_BIN_PATH` | `--build-path value` [#](#build-path) | Path to where the builds will run from **Environment variable**: `$BUILDKITE_BUILD_PATH` | `--hooks-path value` [#](#hooks-path) | Directory where the hook scripts are found **Environment variable**: `$BUILDKITE_HOOKS_PATH` | `--additional-hooks-paths value` [#](#additional-hooks-paths) | Additional directories to look for agent hooks **Environment variable**: `$BUILDKITE_ADDITIONAL_HOOKS_PATHS` | `--sockets-path value` [#](#sockets-path) | Directory where the agent will place sockets (default: "$HOME/.buildkite-agent/sockets") **Environment variable**: `$BUILDKITE_SOCKETS_PATH` | `--plugins-path value` [#](#plugins-path) | Directory where the plugins are saved to **Environment variable**: `$BUILDKITE_PLUGINS_PATH` | `--command-eval ` [#](#command-eval) | Allow running of arbitrary commands (default: true) **Environment variable**: `$BUILDKITE_COMMAND_EVAL` | `--plugins-enabled ` [#](#plugins-enabled) | Allow plugins to be run (default: true) **Environment variable**: `$BUILDKITE_PLUGINS_ENABLED` | `--plugin-validation ` [#](#plugin-validation) | Validate plugin configuration (default: false) **Environment variable**: `$BUILDKITE_PLUGIN_VALIDATION` | `--plugins-always-clone-fresh ` [#](#plugins-always-clone-fresh) | Always make a new clone of plugin source, even if already present (default: false) **Environment variable**: `$BUILDKITE_PLUGINS_ALWAYS_CLONE_FRESH` | `--local-hooks-enabled ` [#](#local-hooks-enabled) | Allow local hooks to be run (default: true) **Environment variable**: `$BUILDKITE_LOCAL_HOOKS_ENABLED` | `--ssh-keyscan ` [#](#ssh-keyscan) | Automatically run ssh-keyscan before checkout (default: true) **Environment variable**: `$BUILDKITE_SSH_KEYSCAN` | `--git-submodules ` [#](#git-submodules) | Enable git 
submodules (default: true) **Environment variable**: `$BUILDKITE_GIT_SUBMODULES` | `--pty ` [#](#pty) | Run jobs within a pseudo terminal (default: true) **Environment variable**: `$BUILDKITE_PTY` | `--shell value` [#](#shell) | The shell to use to interpret build commands (default: "/bin/bash -e -c") **Environment variable**: `$BUILDKITE_SHELL` | `--hooks-shell value` [#](#hooks-shell) | The shell to use to interpret hooks commands **Environment variable**: `$BUILDKITE_HOOKS_SHELL` | `--phases value` [#](#phases) | The specific phases to execute. The order they're defined is irrelevant. **Environment variable**: `$BUILDKITE_BOOTSTRAP_PHASES` | `--tracing-backend value` [#](#tracing-backend) | The name of the tracing backend to use. **Environment variable**: `$BUILDKITE_TRACING_BACKEND` | `--tracing-service-name value` [#](#tracing-service-name) | Service name to use when reporting traces. (default: "buildkite-agent") **Environment variable**: `$BUILDKITE_TRACING_SERVICE_NAME` | `--tracing-traceparent value` [#](#tracing-traceparent) | W3C Trace Parent for tracing **Environment variable**: `$BUILDKITE_TRACING_TRACEPARENT` | `--tracing-propagate-traceparent ` [#](#tracing-propagate-traceparent) | Accept traceparent from Buildkite control plane (default: false) **Environment variable**: `$BUILDKITE_TRACING_PROPAGATE_TRACEPARENT` | `--no-job-api ` [#](#no-job-api) | Disables the Job API, which gives commands in jobs some abilities to introspect and mutate the state of the job (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_JOB_API` | `--disable-warnings-for value` [#](#disable-warnings-for) | A list of warning IDs to disable **Environment variable**: `$BUILDKITE_AGENT_DISABLE_WARNINGS_FOR` | `--cancel-signal value` [#](#cancel-signal) | The signal to use for cancellation (default: "SIGTERM") **Environment variable**: `$BUILDKITE_CANCEL_SIGNAL` | `--cancel-grace-period value` [#](#cancel-grace-period) | The number of seconds a canceled or timed out job 
is given to gracefully terminate and upload its artifacts (default: 10) **Environment variable**: `$BUILDKITE_CANCEL_GRACE_PERIOD` | `--signal-grace-period-seconds value` [#](#signal-grace-period-seconds) | The number of seconds given to a subprocess to handle being sent `cancel-signal`. After this period has elapsed, SIGKILL will be sent. Negative values are taken relative to `cancel-grace-period`. The default value (-1) means that the effective signal grace period is equal to `cancel-grace-period` minus 1. (default: -1) **Environment variable**: `$BUILDKITE_SIGNAL_GRACE_PERIOD_SECONDS` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--redacted-vars value` [#](#redacted-vars) | Pattern of environment variable names containing sensitive values (default: "*_PASSWORD", "*_SECRET", "*_TOKEN", "*_PRIVATE_KEY", "*_ACCESS_KEY", "*_SECRET_KEY", "*_CONNECTION_STRING", "*_API_KEY") **Environment variable**: `$BUILDKITE_REDACTED_VARS` | `--strict-single-hooks ` [#](#strict-single-hooks) | Enforces that only one checkout hook, and only one command hook, can be run (default: false) **Environment variable**: `$BUILDKITE_STRICT_SINGLE_HOOKS` | `--trace-context-encoding value` [#](#trace-context-encoding) | Sets the inner encoding for BUILDKITE_TRACE_CONTEXT. 
Must be either json or gob (default: "gob") **Environment variable**: `$BUILDKITE_TRACE_CONTEXT_ENCODING` --- ### build URL: https://buildkite.com/docs/agent/cli/reference/build #### buildkite-agent build The Buildkite agent's `build` subcommands provide the ability to control builds. ##### Canceling a build ###### Usage `buildkite-agent build cancel [options...]` ###### Description Cancel a running build. ###### Example

```shell
# Cancels the current build
$ buildkite-agent build cancel
```

###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) 
**Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--build value` [#](#build) | The build UUID to cancel **Environment variable**: `$BUILDKITE_BUILD_ID` --- ### env URL: https://buildkite.com/docs/agent/cli/reference/env #### buildkite-agent env The Buildkite agent's `env` subcommands provide the ability to inspect environment variables. From version 3.115.2 of the Buildkite agent, jobs can inspect and modify their environment variables using the `get`, `set`, and `unset` sub-commands. These provide an alternative to using shell commands to inspect and modify environment variables. ##### Printing env This command is used internally by the agent and isn't recommended for use in your builds. ###### Usage `buildkite-agent env dump [options]` ###### Description Prints out the environment of the current process as a JSON object, easily parsable by other programs. Used when executing hooks to discover changes that hooks make to the environment. ###### Example ```shell $ buildkite-agent env dump --format json-pretty ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. 
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--format value` [#](#format) | Output format: json or json-pretty (default: "json") **Environment variable**: `$BUILDKITE_AGENT_ENV_DUMP_FORMAT` ##### Getting a job's environment variables ###### Usage `buildkite-agent env get [variables]` ###### Description Retrieves environment variables and their current values from the current job execution environment. Changes to the job environment only apply to the environments of subsequent phases of the job. However, `env get` can be used to inspect the changes made with `env set` and `env unset`. ###### Examples Getting all variables in key=value format:

```shell
$ buildkite-agent env get
ALPACA=Geronimo the Incredible
BUILDKITE=true
LLAMA=Kuzco
...
```

Getting the value of one variable:

```shell
$ buildkite-agent env get LLAMA
Kuzco
```

Getting multiple specific variables:

```shell
$ buildkite-agent env get LLAMA ALPACA
ALPACA=Geronimo the Incredible
LLAMA=Kuzco
```

Getting variables as a JSON object:

```shell
$ buildkite-agent env get --format=json-pretty
{
  "ALPACA": "Geronimo the Incredible",
  "BUILDKITE": "true",
  "LLAMA": "Kuzco",
  ...
}
```

###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. 
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--format value` [#](#format) | Output format: plain, json, or json-pretty (default: "plain") **Environment variable**: `$BUILDKITE_AGENT_ENV_GET_FORMAT` ##### Setting a job's environment variables ###### Usage `buildkite-agent env set [variable]` ###### Description Sets environment variables in the current job execution environment. Changes to the job environment variables only apply to subsequent phases of the job. This command cannot override Buildkite read-only variables. To read the new values of variables from within the current phase, use `env get`. ###### Examples Setting the variables `LLAMA` and `ALPACA`:

```shell
$ buildkite-agent env set LLAMA=Kuzco "ALPACA=Geronimo the Incredible"
Added:
+ LLAMA
Updated:
~ ALPACA
```

Setting the variables `LLAMA` and `ALPACA` using a JSON object supplied over standard input:

```shell
$ echo '{"ALPACA":"Geronimo the Incredible","LLAMA":"Kuzco"}' | \
    buildkite-agent env set --input-format=json --output-format=quiet -
```

###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. 
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--input-format value` [#](#input-format) | Input format: plain or json (default: "plain") **Environment variable**: `$BUILDKITE_AGENT_ENV_SET_INPUT_FORMAT` | `--output-format value` [#](#output-format) | Output format: quiet (no output), plain, json, or json-pretty (default: "plain") **Environment variable**: `$BUILDKITE_AGENT_ENV_SET_OUTPUT_FORMAT` ##### Removing environment variables from a job ###### Usage `buildkite-agent env unset [variables]` ###### Description Unsets environment variables in the current job execution environment. Changes to the job environment variables only apply to subsequent phases of the job. This command cannot unset Buildkite read-only variables. To read the new values of variables from within the current phase, use `env get`. ###### Examples Unsetting the variables `LLAMA` and `ALPACA`:

```shell
$ buildkite-agent env unset LLAMA ALPACA
Unset:
- ALPACA
- LLAMA
```

Unsetting the variables `LLAMA` and `ALPACA` with a JSON list supplied over standard input:

```shell
$ echo '["LLAMA","ALPACA"]' | \
    buildkite-agent env unset --input-format=json --output-format=quiet -
```

###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. 
Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--input-format value` [#](#input-format) | Input format: plain or json (default: "plain") **Environment variable**: `$BUILDKITE_AGENT_ENV_UNSET_INPUT_FORMAT` | `--output-format value` [#](#output-format) | Output format: quiet (no output), plain, json, or json-pretty (default: "plain") **Environment variable**: `$BUILDKITE_AGENT_ENV_UNSET_OUTPUT_FORMAT` --- ### job URL: https://buildkite.com/docs/agent/cli/reference/job #### buildkite-agent job The Buildkite agent's `job update` command provides the ability to update the attributes of a job. ##### Updating a job Use this command in your build scripts to update a job's attributes. Only command jobs that have not yet finished can be updated. Currently, only the `timeout_in_minutes` attribute can be updated. `timeout_in_minutes` (alias `timeout`): The maximum number of minutes this step is allowed to run, relative to the job's start time. If the job exceeds this time limit, the job is automatically canceled and the build fails. Jobs that time out with an exit status of `0` are marked as `passed`. See [Updating timeouts during a job](/docs/pipelines/configure/build-timeouts#command-timeouts-updating-timeouts-during-a-job) for more information. ###### Usage `buildkite-agent job update [attribute] [value] [options...]` ###### Description Update an attribute of a job. Only command jobs can be updated, and only before they are finished. 
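As a sketch of how a build script might use this (the step label and script names below are hypothetical), a step can extend its own timeout partway through a long job:

```yml
steps:
  - label: "Integration tests"
    timeout_in_minutes: 10
    commands:
      - "./fast-checks.sh"
      # Extend the limit to 30 minutes, measured from the job's start time.
      - "buildkite-agent job update timeout 30"
      - "./slow-suite.sh"
```

If the update does not run before the original 10-minute limit is reached, the job is canceled as usual.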
###### Example

```shell
$ buildkite-agent job update timeout 20
$ echo 20 | buildkite-agent job update timeout
```

###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. 
Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--job value` [#](#job) | The job to update. Defaults to the current job **Environment variable**: `$BUILDKITE_JOB_ID` | `--redacted-vars value` [#](#redacted-vars) | Pattern of environment variable names containing sensitive values (default: "*_PASSWORD", "*_SECRET", "*_TOKEN", "*_PRIVATE_KEY", "*_ACCESS_KEY", "*_SECRET_KEY", "*_CONNECTION_STRING", "*_API_KEY") **Environment variable**: `$BUILDKITE_REDACTED_VARS` --- ### lock URL: https://buildkite.com/docs/agent/cli/reference/lock #### buildkite-agent lock The Buildkite agent's `lock` subcommands provide the ability to coordinate multiple concurrent builds on the same host that access shared resources. With the `lock` command, processes can acquire and release a lock using the `acquire` and `release` subcommands. For the special case of performing setup once for the life of the agent (and waiting until it is complete), there are the `do` and `done` subcommands. These provide an alternative to using `flock` or OS-dependent locking mechanisms. Each type of `lock` subcommand makes use of a `[key]` value, which is an arbitrary name (for example, `my-key-value`) that you choose to identify your lock. A key does not reference any predefined value; it is recommended to use a descriptive name that clearly indicates which resource or operation is being protected. All builds using the same lock key will coordinate with each other on the same host. > 📘 Flock file locks > The Buildkite agent also has an internal `flock` file locking mechanism, which is an automatic feature that's unrelated to the locking feature provided by these agent `lock` commands. 
The `flock` mechanism is used for Git mirror and SSH `known_hosts` handling, and these locks are automatically released when the process is completed, including when the process terminates abnormally, for example, when an agent is not cleanly shut down. ##### Inspecting the state of a lock ###### Usage `buildkite-agent lock get [key]` ###### Description Retrieves the value of a lock key. Any key not in use returns an empty string. Note that this subcommand is only available when an agent has been started with the `agent-api` experiment enabled. `lock get` is generally only useful for inspecting lock state, as the value can change concurrently. To acquire or release a lock, use `lock acquire` and `lock release`. ###### Examples

```shell
$ buildkite-agent lock get llama
Kuzco
```

###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--lock-scope value` [#](#lock-scope) | The scope for locks used in this operation. 
Currently only 'machine' scope is supported (default: "machine") **Environment variable**: `$BUILDKITE_LOCK_SCOPE` | `--sockets-path value` [#](#sockets-path) | Directory where the agent will place sockets (default: "$HOME/.buildkite-agent/sockets") **Environment variable**: `$BUILDKITE_SOCKETS_PATH` ##### Acquiring a lock ###### Usage `buildkite-agent lock acquire [key]` ###### Description Acquires the lock for the given key. If the lock is already held by another process, `lock acquire` will wait (potentially forever) until it can acquire the lock. If multiple processes are waiting for the same lock, there is no ordering guarantee of which one will be given the lock next. To prevent separate processes unlocking each other, the output from `lock acquire` should be stored, and passed to `lock release`. Note that this subcommand is only available when an agent has been started with the `agent-api` experiment enabled. ###### Examples

```shell
#!/usr/bin/env bash
token=$(buildkite-agent lock acquire llama)
# your critical section here...
buildkite-agent lock release llama "${token}"
```

###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. 
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--lock-scope value` [#](#lock-scope) | The scope for locks used in this operation. Currently only 'machine' scope is supported (default: "machine") **Environment variable**: `$BUILDKITE_LOCK_SCOPE` | `--sockets-path value` [#](#sockets-path) | Directory where the agent will place sockets (default: "$HOME/.buildkite-agent/sockets") **Environment variable**: `$BUILDKITE_SOCKETS_PATH` | `--lock-wait-timeout value` [#](#lock-wait-timeout) | Sets a maximum duration to wait for a lock before giving up (default: 0s) **Environment variable**: `$BUILDKITE_LOCK_WAIT_TIMEOUT` ##### Releasing a previously-acquired lock ###### Usage `buildkite-agent lock release [key] [token]` ###### Description Releases the lock for the given key. This should only be called by the process that acquired the lock. To help prevent different processes unlocking each other unintentionally, the output from `lock acquire` is required as the second argument, namely, the `token` in the Usage section above. Note that this subcommand is only available when an agent has been started with the `agent-api` experiment enabled. ###### Examples

```shell
#!/usr/bin/env bash
token=$(buildkite-agent lock acquire llama)
# your critical section here...
buildkite-agent lock release llama "${token}"
```

###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. 
Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--lock-scope value` [#](#lock-scope) | The scope for locks used in this operation. Currently only 'machine' scope is supported (default: "machine") **Environment variable**: `$BUILDKITE_LOCK_SCOPE` | `--sockets-path value` [#](#sockets-path) | Directory where the agent will place sockets (default: "$HOME/.buildkite-agent/sockets") **Environment variable**: `$BUILDKITE_SOCKETS_PATH` ##### Starting a do-once section ###### Usage `buildkite-agent lock do [key]` ###### Description Begins a do-once lock. Do-once can be used by multiple processes to wait for completion of some shared work, where only one process should do the work. Note that this subcommand is only available when an agent has been started with the `agent-api` experiment enabled. `lock do` will do one of two things: - Print 'do'. The calling process should proceed to do the work and then call `lock done`. - Wait until the work is marked as done (with `lock done`) and print 'done'. If `lock do` prints 'done' immediately, the work was already done. ###### Examples

```shell
#!/usr/bin/env bash
if [[ $(buildkite-agent lock do llama) == 'do' ]]; then
  # your critical section here...
  buildkite-agent lock done llama
fi
```

###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--lock-scope value` [#](#lock-scope) | The scope for locks used in this operation. Currently only 'machine' scope is supported (default: "machine") **Environment variable**: `$BUILDKITE_LOCK_SCOPE` | `--sockets-path value` [#](#sockets-path) | Directory where the agent will place sockets (default: "$HOME/.buildkite-agent/sockets") **Environment variable**: `$BUILDKITE_SOCKETS_PATH` | `--lock-wait-timeout value` [#](#lock-wait-timeout) | Sets a maximum duration to wait for a lock before giving up (default: 0s) **Environment variable**: `$BUILDKITE_LOCK_WAIT_TIMEOUT` ##### Completing a do-once section ###### Usage `buildkite-agent lock done [key]` ###### Description Completes a do-once lock. This should only be used by the process performing the work. Note that this subcommand is only available when an agent has been started with the `agent-api` experiment enabled. ###### Examples

```shell
#!/usr/bin/env bash
if [[ $(buildkite-agent lock do llama) == 'do' ]]; then
  # your critical section here...
  buildkite-agent lock done llama
fi
```

###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--lock-scope value` [#](#lock-scope) | The scope for locks used in this operation. Currently only 'machine' scope is supported (default: "machine") **Environment variable**: `$BUILDKITE_LOCK_SCOPE` | `--sockets-path value` [#](#sockets-path) | Directory where the agent will place sockets (default: "$HOME/.buildkite-agent/sockets") **Environment variable**: `$BUILDKITE_SOCKETS_PATH` ##### Usage within a pipeline Locks help coordinate access to shared resources when multiple agents run concurrently on the same host, such as when `--spawn` is used to create multiple agents. ###### Coordinating sequential access Use [`acquire`](#acquiring-a-lock) and [`release`](#releasing-a-previously-acquired-lock) when multiple builds need to run the same operation sequentially to prevent conflicts. Each build will execute the task, but only one at a time. This coordination works across multiple pipelines when they use the same lock key and the jobs run on the same host. 
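For example, two separate pipelines can serialize deploys to the same host by sharing one lock key. The following sketch is illustrative only (the step contents, script names, and key are hypothetical, and, as noted above, the `lock` subcommands require an agent started with the `agent-api` experiment):

```yml
# pipeline-a.yml
steps:
  - label: "Deploy service A"
    commands:
      - "token=$(buildkite-agent lock acquire host-deploy-lock)"
      - "./deploy-a.sh"
      - "buildkite-agent lock release host-deploy-lock \"$${token}\""

# pipeline-b.yml (a different pipeline, same key, same host)
steps:
  - label: "Deploy service B"
    commands:
      - "token=$(buildkite-agent lock acquire host-deploy-lock)"
      - "./deploy-b.sh"
      - "buildkite-agent lock release host-deploy-lock \"$${token}\""
```

Whichever job acquires `host-deploy-lock` first runs its deploy script; the other blocks at `lock acquire` until the lock is released.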
Unlike [`do`](#starting-a-do-once-section) and [`done`](#completing-a-do-once-section), each build still performs the work; the lock just ensures they don't interfere with each other. ###### Sequential locks example In the following example, the key `db-migration-lock` ensures that database migrations run sequentially across multiple builds on the same host.

```yml
steps:
  - label: "Install Dependencies"
    commands:
      - "echo '+++ Installing dependencies'"
      - "bundle install"
      - "npm ci"
    key: "install"
  - label: "Migrate DB Schema"
    commands:
      - "echo '+++ Running DB migration with lock'"
      - "token=$(buildkite-agent lock acquire db-migration-lock)"
      - "bundle exec rake db:migrate"
      - "buildkite-agent lock release db-migration-lock \"$${token}\""
    plugins:
      - vault-secrets#v2.2.1:
          server: "https://my-vault-server"
          path: "data/buildkite/postgres"
          auth:
            method: "approle"
            role-id: "my-role-id"
            secret-env: "VAULT_SECRET_ID"
    env:
      RAILS_ENV: "development"
    depends_on: "install"
    key: "migrate-db"
```

This lock only controls access to the `bundle exec rake db:migrate` process itself, and does not lock access to the vault server defined by the plugin, or any subsequent commands following the `buildkite-agent lock release db-migration-lock "$${token}"` command. Only the commands that run between `lock acquire` and `lock release` are protected by the lock. Multiple builds can still retrieve secrets from the vault concurrently, but only one can execute the actual database migration at a time, as long as all builds use the same lock key. ###### One-time locks When running parallel jobs on the same host that need a shared setup, [`do`](#starting-a-do-once-section) and [`done`](#completing-a-do-once-section) ensure expensive operations happen only once. For instance, one agent performs the setup (for example, downloading datasets, generating certificates, starting services, etc.), while others wait and then proceed. 
This saves time and resources compared to each parallel job repeating the same work. Once marked as `done`, the lock remains completed for all subsequent jobs on the host unless the agent is restarted. ###### One-time locks example In the following example, the key `test-env-setup` ensures that the test environment setup happens only once across multiple parallel jobs on the same host.

```yml
steps:
  - label: "Install Dependencies"
    commands:
      - "echo '+++ Installing dependencies'"
      - "bundle install"
      - "npm ci"
    key: "install"
  - label: "Setup Test Environment"
    command: "setup_test.sh"
    depends_on: "install"
    key: "prep"
    parallelism: 5
  - label: "Run Tests"
    commands:
      - "echo '+++ Running tests'"
      - "bundle exec rspec"
    depends_on: "prep"
    parallelism: 10
```

```bash
#!/usr/bin/env bash
echo "+++ Setting up shared test environment"
if [[ $(buildkite-agent lock do test-env-setup) == 'do' ]]; then
  echo "Downloading assets..."
  curl -o /tmp/test-data.zip https://releases.example.com/data.zip
  unzip /tmp/test-data.zip -d /tmp/shared-test-files/
  buildkite-agent lock done test-env-setup
else
  echo "Assets have already been pulled and unarchived"
fi
```

The first job to reach the `buildkite-agent lock do test-env-setup` command receives a response of `do` and executes the setup work (downloading and extracting test data). All other parallel jobs will wait and then receive a response of `done`. These jobs take the `else` branch in this example bash script and output `Assets have already been pulled and unarchived`. Unlike the `acquire`/`release` pattern, this lock is performed only once and subsequent jobs benefit from the completed work without repeating it. --- ### meta-data URL: https://buildkite.com/docs/agent/cli/reference/meta-data #### buildkite-agent meta-data The Buildkite agent's `meta-data` command provides your build pipeline with a powerful key/value data-store that works across build steps and build agents, no matter the machine or network. 
See the [Using build meta-data](/docs/pipelines/configure/build-meta-data) guide for a step-by-step example. ##### Setting data Use this command in your build scripts to save string data in the Buildkite meta-data store. ###### Usage `buildkite-agent meta-data set <key> [value] [options...]` ###### Description Set arbitrary data on a build using a basic key/value store. You can supply the value as an argument to the command, or pipe in a file or script output. The value must be a non-empty string, and strings containing only whitespace characters are not allowed. ###### Example ```shell $ buildkite-agent meta-data set "foo" "bar" ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice.
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--job value` [#](#job) | Which job's build should the meta-data be set on **Environment variable**: `$BUILDKITE_JOB_ID` | `--redacted-vars value` [#](#redacted-vars) | Pattern of environment variable names containing sensitive values (default: "*_PASSWORD", "*_SECRET", "*_TOKEN", "*_PRIVATE_KEY", "*_ACCESS_KEY", "*_SECRET_KEY", "*_CONNECTION_STRING", "*_API_KEY") **Environment variable**: `$BUILDKITE_REDACTED_VARS` Meta-data values are restricted to a maximum of 100 kilobytes. Keys and values larger than 1 kilobyte are discouraged. 
Please use [artifacts](/docs/agent/cli/reference/artifact) for large data which needs to be uploaded and downloaded. ##### Getting data Use this command in your build scripts to get a previously saved value from the Buildkite meta-data store. ###### Usage `buildkite-agent meta-data get <key> [options...]` ###### Description Get data from a build's key/value store. ###### Example ```shell $ buildkite-agent meta-data get "foo" ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` 
[#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--default value` [#](#default) | If the meta-data value doesn't exist, return this instead | `--job value` [#](#job) | Which job's build should the meta-data be retrieved from **Environment variable**: `$BUILDKITE_JOB_ID` | `--build value` [#](#build) | Which build should the meta-data be retrieved from. --build will take precedence over --job **Environment variable**: `$BUILDKITE_METADATA_BUILD_ID` ##### Checking if data exists ###### Usage `buildkite-agent meta-data exists <key> [options...]` ###### Description The command exits with a status of 0 if the key has been set, or a status of 100 if the key doesn't exist. ###### Example ```shell $ buildkite-agent meta-data exists "foo" ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. 
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--job value` [#](#job) | Which job's build should the meta-data be checked for **Environment variable**: `$BUILDKITE_JOB_ID` | `--build value` [#](#build) | Which build should the meta-data be retrieved from. --build will take precedence over --job **Environment variable**: `$BUILDKITE_METADATA_BUILD_ID` ##### Listing keys ###### Usage `buildkite-agent meta-data keys [options...]` ###### Description Lists all meta-data keys that have been previously set, delimited by a newline and terminated with a trailing newline. 
###### Example ```shell $ buildkite-agent meta-data keys ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. 
Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--job value` [#](#job) | Which job's build should the meta-data be checked for **Environment variable**: `$BUILDKITE_JOB_ID` | `--build value` [#](#build) | Which build should the meta-data be retrieved from. --build will take precedence over --job **Environment variable**: `$BUILDKITE_METADATA_BUILD_ID` --- ### oidc URL: https://buildkite.com/docs/agent/cli/reference/oidc #### buildkite-agent oidc The Buildkite agent's `oidc` command allows you to request an OpenID Connect (OIDC) token from Buildkite, representing the current pipeline and its job. These tokens can be exchanged for specific roles on federated systems like [AWS](https://aws.amazon.com/), [GCP](https://cloud.google.com/), [Azure](https://azure.microsoft.com/) and many others. Refer to the following documentation for more information: - The [What is OpenID Connect](https://openid.net/developers/how-connect-works/) overview on the OpenID web site for more details about how OIDC works. - The [OpenID Connect Core documentation](https://openid.net/specs/openid-connect-core-1_0.html#IDToken) for more information about how OIDC tokens are constructed and how to extract and use claims. Learn more about how to restrict your Buildkite agents' access to deployment environments like AWS, from the OIDC in [Buildkite Pipelines](/docs/pipelines/security/oidc) and with [AWS](/docs/pipelines/security/oidc/aws) documentation pages, as well as the [Buildkite Package Registries](/docs/package-registries/security/oidc) documentation page. > 📘 > From version 3.104.0 of the Buildkite agent, OIDC tokens are automatically redacted from build logs by default, with an optional `skip-redaction` flag to disable this behavior when needed. 
This behavior is similar to the [buildkite-agent secret get](/docs/agent/cli/reference/secret) command for redacting the token. ##### Request OIDC token ##### OIDC URLs If using a plugin, such as the [AWS assume-role-with-web-identity](https://github.com/buildkite-plugins/aws-assume-role-with-web-identity-buildkite-plugin) plugin, you'll need to provide an OpenID provider URL. You should set the provider URL to: https://agent.buildkite.com. For specific endpoints for OpenID or JWKS, use: - **OpenID Connect Discovery URL:** https://agent.buildkite.com/.well-known/openid-configuration - **JWKS URI:** https://agent.buildkite.com/.well-known/jwks ##### Claims All of the following claims (with the exception of the [`aud` claim](#aud), which is usually overridden by the [`--audience` option](#audience)) are automatically generated by the Buildkite agent, and are based on metadata from the pipeline job it is currently building. | Claim | Description | `iss` | Issuer Identifies the entity that issued the JWT. _Example:_ `https://agent.buildkite.com` | `sub` | Subject Identifies the subject of the JWT, typically representing the user or entity being authenticated. By default, the subject contains a composite string in the following format: `organization:ORGANIZATION_SLUG:pipeline:PIPELINE_SLUG:ref:REF:commit:BUILD_COMMIT:step:STEP_KEY` If the build has a tag, `REF` is `refs/tags/TAG`. Otherwise, `REF` is `refs/heads/BRANCH`. _Example:_ `organization:acme-inc:pipeline:super-duper-app:ref:refs/heads/main:commit:9f3182061f1e2cca4702c368cbc039b7dc9d4485:step:build` If `--subject-claim` is specified, the subject contains only the value of the specified claim instead. See [Custom subject claims](#custom-subject-claims). | `aud` | Audience Identifies the intended audience for the JWT. 
Defaults to `https://buildkite.com/ORGANIZATION_SLUG` but can be overridden using the `--audience` flag | `exp` | Expiration time Specifies the expiration time of the JWT, after which the token is no longer valid. Defaults to 5 minutes in the future at generation, but can be overridden using the `--lifetime` flag. _Example:_ `1669015898` | `nbf` | Not before Specifies the time before which the JWT must not be accepted for processing. Set to the current timestamp at generation. _Example:_ `1669014898` | `iat` | Issued at Specifies the time at which the JWT was issued. Set to the current timestamp at generation. _Example:_ `1669014898` | `organization_slug` | The organization's slug. _Example:_ `acme-inc` | `pipeline_slug` | The pipeline's slug. _Example:_ `super-duper-app` | `build_number` | The build number. _Example:_ `1` | `build_branch` | The repository branch used in the build. _Example:_ `main` | `build_tag` | The tag of the build if enabled in Buildkite. This claim is only included if the tag is set. _Example:_ `v1.0.0` | `build_commit` | The SHA commit from the repository. _Example:_ `9f3182061f1e2cca4702c368cbc039b7dc9d4485` | `step_key` | The `key` attribute of the step from the pipeline. If the key is not set for the step, `nil` is returned. _Example:_ `build_step` | `job_id` | The job UUID. _Example:_ `0184990a-477b-4fa8-9968-496074483cee` | `agent_id` | The agent UUID. _Example:_ `0184990a-4782-42b5-afc1-16715b10b8ff` | `runner_environment` | Indicates whether the current job is being run on Buildkite hosted agents or the customer's own self-hosted agents. _Valid values:_ `buildkite-hosted`, `self-hosted` | `build_source` | The source of the event that created the build. _Valid values:_ `ui`, `api`, `webhook`, `trigger_job`, `schedule` ###### Optional claims You can generate additional optional claims by adding `--claim` to the `buildkite-agent oidc request-token` command. The `--claim` option can also take multiple values. 
For example, this command adds the Buildkite organization's UUID value as a claim to the OIDC token: ```sh $ buildkite-agent oidc request-token ... --claim "organization_id" ``` This command adds both the Buildkite organization's UUID and pipeline's UUID values in their own additional claims to the OIDC token: ```sh $ buildkite-agent oidc request-token ... --claim "organization_id,pipeline_id" ``` The following optional claims can be added, whose values are automatically generated by the Buildkite agent, and are based on the pipeline job it is currently building. | Claim | Description | `organization_id` | The organization UUID. _Example:_ `0184990a-477b-4fa8-9968-496074483k77` | `pipeline_id` | The pipeline UUID. _Example:_ `0184990a-4782-42b5-afc1-16715b10b1l0` | `build_id` | The build UUID. _Example:_ `019583d7-3737-4e38-af67-f7cc356bd580` | `cluster_id` | The cluster UUID if using clusters. _Example:_ `0191f956-042f-7ec4-aa62-8e5eeae396d0` | `cluster_name` | The cluster name if using clusters. _Example:_ `default` | `queue_id` | The cluster queue UUID if using clusters. _Example:_ `0191f956-62da-7515-b79b-bdecb519aa32` | `queue_key` | The cluster queue key if using clusters. 
_Example:_ `runners` | `agent_tag:NAME` | An [agent tag](/docs/agent/cli/reference/start#setting-tags) _Example:_ `agent_tag:queue` ###### Example token contents OIDC tokens are JSON Web Tokens — [JWTs](https://datatracker.ietf.org/doc/html/rfc7519) — and the following is a complete example: ```json { "iss": "https://agent.buildkite.com", "sub": "organization:acme-inc:pipeline:super-duper-app:ref:refs/heads/main:commit:9f3182061f1e2cca4702c368cbc039b7dc9d4485:step:build", "aud": "https://buildkite.com/acme-inc", "iat": 1669014898, "nbf": 1669014898, "exp": 1669015198, "organization_slug": "acme-inc", "pipeline_slug": "super-duper-app", "build_number": 1, "build_branch": "main", "build_tag": "v1.0.0", "build_commit": "9f3182061f1e2cca4702c368cbc039b7dc9d4485", "step_key": "build", "job_id": "0184990a-477b-4fa8-9968-496074483cee", "agent_id": "0184990a-4782-42b5-afc1-16715b10b8ff", "build_source": "ui", "runner_environment": "buildkite-hosted" } ``` ##### AWS session tags For Buildkite OIDC tokens used to integrate with Amazon Web Services (AWS), you can optionally include any of the supported claims in the [AWS session tag format required by the `AssumeRoleWithWebIdentity` operation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_adding-assume-role-idp). These OIDC tokens also typically have an audience of `sts.amazonaws.com`. For example, this command generates an AWS compatible OIDC token that includes the `organization_slug` and `organization_id`: ```sh $ buildkite-agent oidc request-token --audience sts.amazonaws.com --aws-session-tag "organization_slug,organization_id" ``` AWS requires that session tags are string values. Therefore: - Numeric claim values (for example, `build_number`) are presented as strings. - Nullable claim values (for example, `step_key`) are presented as `""` instead of the literal value `null`. 
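As a sketch of how these session tags can be consumed on the AWS side (the account ID, role setup, and tag values here are hypothetical and not part of the Buildkite docs): a role's trust policy can require the tagged claims before allowing `AssumeRoleWithWebIdentity`, and because tags arrive with the token, the trust policy must also allow `sts:TagSession`.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/agent.buildkite.com"
      },
      "Action": ["sts:AssumeRoleWithWebIdentity", "sts:TagSession"],
      "Condition": {
        "StringEquals": {
          "agent.buildkite.com:aud": "sts.amazonaws.com",
          "aws:RequestTag/organization_slug": "acme-inc"
        }
      }
    }
  ]
}
```

After the role is assumed, the tags become session principal tags, so permission policies can scope access with condition keys such as `aws:PrincipalTag/organization_slug`.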
Learn more about using Buildkite OIDC tokens with AWS in [OIDC with AWS](/docs/pipelines/security/oidc/aws). ###### Example token contents When the `--aws-session-tag` flag has been used to generate an OIDC token, the contents includes a nested `https://aws.amazon.com/tags` claim: ```json { "iss": "https://agent.buildkite.com", "sub": "organization:acme-inc:pipeline:super-duper-app:ref:refs/heads/main:commit:9f3182061f1e2cca4702c368cbc039b7dc9d4485:step:build", "aud": "https://buildkite.com/acme-inc", "iat": 1669014898, "nbf": 1669014898, "exp": 1669015198, "organization_slug": "acme-inc", "pipeline_slug": "super-duper-app", "build_number": 1, "build_branch": "main", "build_tag": "v1.0.0", "build_commit": "9f3182061f1e2cca4702c368cbc039b7dc9d4485", "step_key": "build", "job_id": "0184990a-477b-4fa8-9968-496074483cee", "agent_id": "0184990a-4782-42b5-afc1-16715b10b8ff", "build_source": "ui", "runner_environment": "buildkite-hosted", "https://aws.amazon.com/tags": { "principal_tags": { "organization_slug": [ "acme-inc" ], "organization_id": [ "f892efa9-103e-4d28-97a1-3b8616a0994d" ] } } } ``` ##### Custom subject claims By default, the `sub` claim in a Buildkite OIDC token contains a composite string with the organization slug, pipeline slug, ref, commit, and step key. The `--subject-claim` flag lets you replace this with a single immutable identifier, which is useful for federated identity providers like [Azure](/docs/pipelines/security/oidc/azure) that require an exact match on the subject. For example, to set the subject to the cluster UUID: ```sh $ buildkite-agent oidc request-token --audience "api://AzureADTokenExchange" --subject-claim cluster_id ``` Only immutable identifiers are allowed as subject claims. Mutable values like slugs and branch names are excluded because renaming them would silently break trust relationships. 
The following claims can be used with `--subject-claim`: | Claim | Description | | --- | --- | | `organization_id` | The organization UUID | | `pipeline_id` | The pipeline UUID (matches the default subject for providers that expect a pipeline UUID) | | `cluster_id` | The cluster UUID | | `queue_id` | The queue UUID | | `build_id` | The build UUID | | `job_id` | The job UUID | | `agent_id` | The agent UUID | When `--subject-claim` is used, the specified claim is automatically included in the token. You don't need to also pass it with `--claim`. ###### Example token contents When `--subject-claim cluster_id` is used, the `sub` claim contains the cluster UUID instead of the default composite string: ```json { "iss": "https://agent.buildkite.com", "sub": "0191f956-042f-7ec4-aa62-8e5eeae396d0", "aud": "api://AzureADTokenExchange", "iat": 1669014898, "nbf": 1669014898, "exp": 1669015198, "organization_slug": "acme-inc", "pipeline_slug": "super-duper-app", "build_number": 1, "build_branch": "main", "build_commit": "9f3182061f1e2cca4702c368cbc039b7dc9d4485", "step_key": "build", "job_id": "0184990a-477b-4fa8-9968-496074483cee", "agent_id": "0184990a-4782-42b5-afc1-16715b10b8ff", "build_source": "ui", "runner_environment": "buildkite-hosted", "cluster_id": "0191f956-042f-7ec4-aa62-8e5eeae396d0" } ``` --- ### pause URL: https://buildkite.com/docs/agent/cli/reference/pause #### buildkite-agent pause The Buildkite agent's `pause` command is used to manually pause a running Buildkite agent. ##### Pausing an agent ###### Usage `buildkite-agent pause [options...]` ###### Description Pauses the current agent. 
###### Example ```shell # Pauses the agent $ buildkite-agent pause ``` ```shell # Pauses the agent with an explanatory note and a custom timeout $ buildkite-agent pause --note 'too many llamas' --timeout-in-minutes 60 ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. 
Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--note value` [#](#note) | A descriptive note to record why the agent is paused **Environment variable**: `$BUILDKITE_AGENT_PAUSE_NOTE` | `--timeout-in-minutes value` [#](#timeout-in-minutes) | Timeout after which the agent is automatically resumed, in minutes (default: 0) **Environment variable**: `$BUILDKITE_AGENT_PAUSE_TIMEOUT_MINUTES` --- ### pipeline URL: https://buildkite.com/docs/agent/cli/reference/pipeline #### buildkite-agent pipeline The Buildkite agent's `pipeline` command allows you to add and replace build steps in the running build. The steps are defined using YAML or JSON and can be read from a file or streamed from the output of a script. See the [Defining your pipeline steps](/docs/pipelines/configure/defining-steps) guide for a step-by-step example and list of step types. ##### Uploading pipelines > 🚧 Processing of a single pipeline file > In versions of the Buildkite agent prior to 3.104.0, the `buildkite-agent pipeline upload` command only processes a single pipeline file. If multiple files are passed to the command (including using a wildcard `*` in the filename), only the first pipeline file will be processed, and any additional pipeline files provided as arguments are ignored. Later versions of the Buildkite agent do support multiple pipeline file uploads. See [Uploading multiple pipelines](#uploading-multiple-pipelines) for more information. ###### Usage `buildkite-agent pipeline upload [file] [options...]` ###### Description Allows you to change the pipeline of a running build by uploading either a YAML (recommended) or JSON configuration file. 
If no configuration file is provided, the command looks for the file in the following locations: - buildkite.yml - buildkite.yaml - buildkite.json - .buildkite/pipeline.yml - .buildkite/pipeline.yaml - .buildkite/pipeline.json - buildkite/pipeline.yml - buildkite/pipeline.yaml - buildkite/pipeline.json You can also pipe build pipelines to the command allowing you to create scripts that generate dynamic pipelines. The configuration file has a limit of 500 steps per file. Configuration files with over 500 steps must be split into multiple files and uploaded in separate steps. ###### Example ```shell $ buildkite-agent pipeline upload $ buildkite-agent pipeline upload my-custom-pipeline.yml $ ./script/dynamic_step_generator | buildkite-agent pipeline upload ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. 
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--replace ` [#](#replace) | Replace the rest of the existing pipeline with the steps uploaded. 
Jobs that are already running are not removed (default: false) **Environment variable**: `$BUILDKITE_PIPELINE_REPLACE` | `--job value` [#](#job) | The job that is making the changes to its build **Environment variable**: `$BUILDKITE_JOB_ID` | `--dry-run ` [#](#dry-run) | Rather than uploading the pipeline, it will be echoed to stdout (default: false) **Environment variable**: `$BUILDKITE_PIPELINE_UPLOAD_DRY_RUN` | `--format value` [#](#format) | In dry-run mode, specifies the form to output the pipeline in. Must be one of: json,yaml (default: "json") **Environment variable**: `$BUILDKITE_PIPELINE_UPLOAD_DRY_RUN_FORMAT` | `--no-interpolation ` [#](#no-interpolation) | Skip variable interpolation into the pipeline prior to upload (default: false) **Environment variable**: `$BUILDKITE_PIPELINE_NO_INTERPOLATION` | `--reject-secrets ` [#](#reject-secrets) | When true, fail the pipeline upload early if the pipeline contains secrets (default: false) **Environment variable**: `$BUILDKITE_AGENT_PIPELINE_UPLOAD_REJECT_SECRETS` | `--apply-if-changed ` [#](#apply-if-changed) | When enabled, steps containing an `if_changed` key are evaluated against the git diff. If the `if_changed` glob patterns match no files changed in the build, the step is skipped. Minimum Buildkite Agent version: v3.99 (with --apply-if-changed flag), v3.103.0 (enabled by default) (default: true) [$BUILDKITE_AGENT_APPLY_IF_CHANGED, $BUILDKITE_AGENT_APPLY_SKIP_IF_UNCHANGED] **Environment variable**: `$BUILDKITE_AGENT_APPLY_IF_CHANGED` | `--git-diff-base value` [#](#git-diff-base) | Provides the base from which to find the git diff when processing `if_changed`, e.g. origin/main. If not provided, it uses the first valid value of {origin/$BUILDKITE_PULL_REQUEST_BASE_BRANCH, origin/$BUILDKITE_PIPELINE_DEFAULT_BRANCH, origin/main}. 
**Environment variable**: `$BUILDKITE_PULL_REQUEST_BASE_BRANCH` | `--fetch-diff-base ` [#](#fetch-diff-base) | When enabled, the base for computing the git diff will be git-fetched prior to computing the diff (default: false) **Environment variable**: `$BUILDKITE_FETCH_DIFF_BASE` | `--changed-files-path value` [#](#changed-files-path) | Path to a file containing the list of changed files (newline-separated) to use for `if_changed` evaluation. When provided, the agent skips running git commands to determine changed files. **Environment variable**: `$BUILDKITE_CHANGED_FILES_PATH` | `--jwks-file value` [#](#jwks-file) | Path to a file containing a JWKS. Passing this flag enables pipeline signing **Environment variable**: `$BUILDKITE_AGENT_JWKS_FILE` | `--jwks-key-id value` [#](#jwks-key-id) | The JWKS key ID to use when signing the pipeline. Required when using a JWKS **Environment variable**: `$BUILDKITE_AGENT_JWKS_KEY_ID` | `--signing-aws-kms-key value` [#](#signing-aws-kms-key) | The AWS KMS key identifier which is used to sign pipelines. **Environment variable**: `$BUILDKITE_AGENT_AWS_KMS_KEY` | `--signing-gcp-kms-key value` [#](#signing-gcp-kms-key) | The GCP KMS key identifier which is used to sign pipelines. This should be in the format projects/*/locations/*/keyRings/*/cryptoKeys/*/cryptoKeyVersions/* **Environment variable**: `$BUILDKITE_AGENT_GCP_KMS_KEY` | `--debug-signing ` [#](#debug-signing) | Enable debug logging for pipeline signing. This can potentially leak secrets to the logs as it prints each step in full before signing. 
Requires debug logging to be enabled (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_SIGNING` | `--redacted-vars value` [#](#redacted-vars) | Pattern of environment variable names containing sensitive values (default: "*_PASSWORD", "*_SECRET", "*_TOKEN", "*_PRIVATE_KEY", "*_ACCESS_KEY", "*_SECRET_KEY", "*_CONNECTION_STRING", "*_API_KEY") **Environment variable**: `$BUILDKITE_REDACTED_VARS` ##### Pipeline format The pipeline can be written as YAML or JSON, but YAML is more common for its readability. There are three top-level properties you can specify: * The `agents` attribute - a map of agent characteristics such as `os` or `queue` that restrict what agents the command will run on. * The `env` attribute - a map of [environment variables](/docs/pipelines/configure/environment-variables) to apply to all steps. * The `steps` attribute - an array of [build pipeline steps](/docs/pipelines/configure/defining-steps). ##### Insertion order Steps are inserted immediately following the job performing the pipeline upload. Note that if you perform multiple uploads from a single step, they can appear to be in reverse order, because the later uploads are inserted earlier in the pipeline. ##### Environment variable substitution The `pipeline upload` command supports environment variable substitution using the syntax `$VAR` and `${VAR}`. For example, the following pipeline substitutes a number of [Buildkite's default environment variables](/docs/pipelines/configure/environment-variables) into a [trigger step](/docs/pipelines/configure/step-types/trigger-step): ```yaml - trigger: "app-deploy" label: "\:rocket\: Deploy" branches: "main" async: true build: message: "${BUILDKITE_MESSAGE}" commit: "${BUILDKITE_COMMIT}" branch: "${BUILDKITE_BRANCH}" ``` If you want an environment variable to be evaluated at runtime (for example, using the step's environment variables), ensure you escape the `$` character using `$$` or `\$`. 
For example: ```yaml - command: "deploy.sh $$SERVER" env: SERVER: "server-a" ``` ###### Escaping the $ character If you need to prevent substitution, you can escape the `$` character by using `$$` or `\$`. For example, using `$$USD` and `\$USD` will both result in the same value: `$USD`. ###### Disabling interpolation You can disable interpolation with the `--no-interpolation` flag, which was added in v3.1.1. ###### Requiring environment variables You can set required environment variables using the syntax `${VAR?}`. If `VAR` is not set, the `pipeline upload` command will print an error and exit with a status of 1. For example, the following step will cause the pipeline upload to error if the `SERVER` environment variable has not been set: ```yaml - command: "deploy.sh \"${SERVER?}\"" ``` You can set a custom error message after the `?` character. For example, the following prints the error message `SERVER: is not set. Please specify a server` if the environment variable has not been set: ```yaml - command: "deploy.sh \"${SERVER?is not set. Please specify a server}\"" ``` ###### Default, blank, and missing values If an environment variable has not been set it will evaluate to a blank string. You can set a fallback value using the syntax `${VAR:-default-value}`. For example, the following step will run the command `deploy.sh staging`: ```yaml - command: "deploy.sh \"${SERVER:-staging}\"" ``` | Environment Variables | Syntax | Result | `` | `"${SERVER:-staging}"` | `"staging"` | `SERVER=""` | `"${SERVER:-staging}"` | `"staging"` | `SERVER="staging-5"` | `"${SERVER:-staging}"` | `"staging-5"` If you need to substitute environment variables containing empty strings, you can use the syntax `${VAR-default-value}` (notice the missing `:`). 
| Environment Variables | Syntax | Result | `` | `"${SERVER-staging}"` | `"staging"` | `SERVER=""` | `"${SERVER-staging}"` | `""` | `SERVER="staging-5"` | `"${SERVER-staging}"` | `"staging-5"` ###### Extracting character ranges You can substitute a subset of characters from an environment variable by specifying a start and end range using the syntax `${VAR:start:end}`. For example, the following step will echo the first 7 characters of the `BUILDKITE_COMMIT` environment variable: ```yaml - command: "echo \"Short commit is: ${BUILDKITE_COMMIT:0:7}\"" ``` If the environment variable has not been set, the range will return a blank string. ##### Uploading multiple pipelines From version 3.104.0 of the Buildkite agent, multiple pipelines can be uploaded by passing them as arguments to a single command: ```bash buildkite-agent pipeline upload .buildkite/pipeline1.yml .buildkite/pipeline2.yml ``` Shell glob expansions, which are expanded by the shell into a list of files, can also be used: ```bash buildkite-agent pipeline upload .buildkite/pipeline*.yml ``` Older agent versions only process a single file. The following approaches can be used to upload multiple pipeline files on these versions.
###### Multiple sequential uploads You can call `buildkite-agent pipeline upload` multiple times within the same step to upload multiple pipeline files: ```bash buildkite-agent pipeline upload .buildkite/pipeline1.yml buildkite-agent pipeline upload .buildkite/pipeline2.yml ``` ###### Pass multiple files to command Using the `find` command, you can pipe multiple file paths into the `buildkite-agent pipeline upload` command via `xargs`: ```bash find .buildkite/ -type f -iname '*.yml' -print0 | xargs -0 -n1 buildkite-agent pipeline upload ``` ###### Combine multiple pipeline files Since the `buildkite-agent pipeline upload` command can also accept pipeline YAML on STDIN, you can emit the contents of multiple pipeline files and have the combined output processed as a single input stream. > 🚧 Processing of multiple pipeline files > When passing multiple pipeline files into the pipeline upload command, include a `---` on the first line of each pipeline file to indicate the beginning of each new pipeline YAML file. This is required to ensure the `buildkite-agent` is able to correctly process multiple files that have been combined into a single input stream.
Using the following three example pipeline files: ```yaml --- steps: - label: "Start of the build" command: ./scripts/build-start.sh ``` ```yaml --- steps: - label: "Middle of the build" command: ./scripts/build-middle.sh ``` ```yaml --- steps: - label: "End of the build" command: ./scripts/build-end.sh ``` Pass the contents of all the pipeline files that are matching the wildcard `*` file pattern into the pipeline upload command: ```bash cat .buildkite/pipeline-*.yml | buildkite-agent pipeline upload ``` Alternatively, you can explicitly list each pipeline file to be passed into the pipeline upload command: ```bash cat .buildkite/pipeline-start.yml .buildkite/pipeline-middle.yml .buildkite/pipeline-end.yml | buildkite-agent pipeline upload ``` ##### Troubleshooting Here are some common issues that can occur when uploading a pipeline. ###### Common errors Pipeline uploads can be rejected if certain criteria are not met. Here are explanations for why your pipeline upload might be rejected. | Error | Reason | `The key "duplicate-key-name" has already been used by another step in this build` | This error occurs when you try to upload a pipeline step with a `key` attribute that matches the `key` attribute of an existing step in the pipeline. `key` attributes must be unique for all steps in a build. To resolve this error, either remove the duplicate `key` or change it to a unique value. | `You can only change the pipeline of a running build` | This error occurs when you attempt to upload a pipeline to a build that has already finished. This typically happens when using the `--job` option with the upload command. To resolve this, ensure the build is still running before uploading, or start a new build. 
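The combine-and-pipe approach above can be sketched end-to-end in plain shell. This is an illustrative, self-contained sketch: the file names are hypothetical, and the final upload is shown as a comment because `buildkite-agent` is only available inside a running job.

```shell
# Build three illustrative pipeline fragments, each starting with the
# required `---` document separator, then combine them into one stream.
tmp="$(mktemp -d)"
for part in start middle end; do
  printf -- '---\nsteps:\n  - label: "%s"\n    command: "echo %s"\n' "$part" "$part" \
    > "$tmp/pipeline-$part.yml"
done

# Concatenate the fragments and count the document separators (one per file).
combined="$(cat "$tmp/pipeline-start.yml" "$tmp/pipeline-middle.yml" "$tmp/pipeline-end.yml")"
doc_count="$(printf '%s\n' "$combined" | grep -c -- '^---$')"
echo "$doc_count"

# Inside a job, the combined stream would then be uploaded with:
# printf '%s\n' "$combined" | buildkite-agent pipeline upload
rm -rf "$tmp"
```

Because each fragment begins with `---`, the agent can tell where one pipeline document ends and the next begins in the combined stream.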
--- ### redactor URL: https://buildkite.com/docs/agent/cli/reference/redactor #### buildkite-agent redactor The Buildkite agent automatically redacts some sensitive information from logs, such as secrets fetched with the [`secret get`](/docs/agent/cli/reference/secret) command, and any environment variables that match the value given in the [`--redacted-vars` flag](/docs/agent/cli/reference/start#redacted-vars). However, sometimes a job will source something sensitive through a side channel, such as a third-party secrets storage system like HashiCorp Vault or AWS Secrets Manager. In these cases, you can use the `redactor add` command to add the sensitive information to the redactor, ensuring it is redacted from subsequent logs. ##### Adding a value to the redactor ###### Usage `buildkite-agent redactor add [options...] [file-with-content-to-redact]` ###### Description This command may be used to parse a file for values to redact from a running job's log output. If you dynamically fetch secrets during a job, it is recommended that you use this command to ensure they will be redacted from subsequent logs. Secrets fetched with the built-in `secret get` command do not require the use of this command; they are redacted automatically. ###### Examples To redact the verbatim contents of the file 'id_ed25519' from future logs: ```shell $ buildkite-agent redactor add id_ed25519 ``` To redact the string 'llamasecret' from future logs: ```shell $ echo llamasecret | buildkite-agent redactor add ``` Pass a flat JSON object whose keys are unique and whose values are your secrets: ```shell $ echo '{"db_password":"secret1","api_token":"secret2","ssh_key":"secret3"}' | buildkite-agent redactor add --format json ``` Or pass a JSON file: ```shell $ buildkite-agent redactor add --format json my-secrets.json ``` JSON does not allow duplicate keys.
If you repeat the same key ("key"), the JSON parser keeps only the final entry, so only that single value is added to the redactor: ```shell $ echo '{"key":"value1","key":"value2","key":"value3"}' | buildkite-agent redactor add --format json ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. 
Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--format value` [#](#format) | The format for the input, whose value is either `json` or `none`. `none` adds the entire input's content to the redactor, with the exception of leading and trailing space. `json` parses the input's content as a JSON object, where each value of each key is added to the redactor. (default: "none") **Environment variable**: `$BUILDKITE_AGENT_REDACT_ADD_FORMAT` --- ### resume URL: https://buildkite.com/docs/agent/cli/reference/resume #### buildkite-agent resume The Buildkite agent's `resume` command is used to manually resume a paused Buildkite agent. ##### Resuming an agent ###### Usage `buildkite-agent resume [options...]` ###### Description Resumes the current agent if it is paused. ###### Example ```shell #### Resumes the agent $ buildkite-agent resume ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. 
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` --- ### secret URL: https://buildkite.com/docs/agent/cli/reference/secret #### buildkite-agent secret The `buildkite-agent secret get` command allows you to query and retrieve secrets from [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets). This command is useful for fetching secrets that are required by your build scripts, without having to configure third-party secret management systems. 
##### Getting a secret --- ### step URL: https://buildkite.com/docs/agent/cli/reference/step #### buildkite-agent step The Buildkite agent's `step` command provides the ability to retrieve and update the attributes of steps in your `pipeline.yml` files. ##### Updating a step Use this command in your build scripts to update the step attributes. The following attributes can be updated: * `label` * `notify` * `priority` ###### Usage `buildkite-agent step update [options...]` ###### Description Update an attribute of a step in the build. Note that step labels are used in commit status updates, so if you change the label of a running step, you may end up with an 'orphaned' status update under the old label, as well as new ones using the updated label. To avoid orphaned status updates, in your Pipeline Settings > GitHub: * Make sure Update commit statuses is not selected. Note that this prevents Buildkite from automatically creating and sending statuses for this pipeline, meaning you will have to handle all commit statuses through the pipeline.yml ###### Example ```shell $ buildkite-agent step update "label" "New Label" $ buildkite-agent step update "label" " (add to end of label)" --append $ buildkite-agent step update "label" ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice.
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--step value` [#](#step) | The step to update. Can be either its ID (BUILDKITE_STEP_ID) or key (BUILDKITE_STEP_KEY) **Environment variable**: `$BUILDKITE_STEP_ID` | `--build value` [#](#build) | The build to look for the step in. 
Only required when targeting a step using its key (BUILDKITE_STEP_KEY) **Environment variable**: `$BUILDKITE_BUILD_ID` | `--append ` [#](#append) | Append to current attribute instead of replacing it (default: false) **Environment variable**: `$BUILDKITE_STEP_UPDATE_APPEND` | `--redacted-vars value` [#](#redacted-vars) | Pattern of environment variable names containing sensitive values (default: "*_PASSWORD", "*_SECRET", "*_TOKEN", "*_PRIVATE_KEY", "*_ACCESS_KEY", "*_SECRET_KEY", "*_CONNECTION_STRING", "*_API_KEY") **Environment variable**: `$BUILDKITE_REDACTED_VARS` ##### Getting a step Use this command in your build scripts to get the value of a particular attribute from a step. The following attribute values can be retrieved: * `agents` * `command` * `concurrency_key` * `concurrency_limit` * `depends_on` * `env` * `if` * `key` * `label` * `notify` * `outcome` * `parallelism` * `state` * `timeout` * `type` ###### Usage `buildkite-agent step get [options...]` ###### Description Retrieve the value of an attribute in a step. If no attribute is passed, the entire step will be returned. In the event a complex object is returned (an object or an array), you'll need to supply the --format option to tell the agent how it should output the data (currently only JSON is supported). ###### Example ```shell $ buildkite-agent step get "label" --step "key" $ buildkite-agent step get --format json $ buildkite-agent step get "state" --step "my-other-step" ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice.
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--step value` [#](#step) | The step to get. Can be either its ID (BUILDKITE_STEP_ID) or key (BUILDKITE_STEP_KEY) **Environment variable**: `$BUILDKITE_STEP_ID` | `--build value` [#](#build) | The build to look for the step in. 
Only required when targeting a step using its key (BUILDKITE_STEP_KEY) **Environment variable**: `$BUILDKITE_BUILD_ID` | `--format value` [#](#format) | The format to output the attribute value in (currently only JSON is supported) **Environment variable**: `$BUILDKITE_STEP_GET_FORMAT` ##### Getting the outcome of a step If you're only interested in whether a step passed or failed, perhaps to use conditional logic inside your build script, you can use the same approach as above in [Getting a step](#getting-a-step). For example, the following pipeline has one step that fails (`one`), and another that passes (`two`). After the `wait`, the next two steps print the `outcome` attribute of steps `one` and `two`, and the last step [annotates the build](/docs/agent/cli/reference/annotate) if step `one` fails. Note that `step get` needs the `key` of the step to identify it, not the `label`. The `outcome` is `passed`, `hard_failed`, `soft_failed`, or `errored`. A "hard fail" is a non-zero exit status that fails the build. A ["soft fail"](/docs/pipelines/configure/soft-fail) is a non-zero exit status that does not fail the build. An "errored" step outcome is reserved for infrastructure issues, such as timeouts, cancellations or expired jobs. ```yaml steps: - label: 'Step 1' command: "false" key: 'one' - label: 'Step 2' command: "true" key: 'two' - wait: continue_on_failure: true - label: 'Step 3' command: 'echo `buildkite-agent step get "outcome" --step "one"`' - label: 'Step 4' command: 'echo `buildkite-agent step get "outcome" --step "two"`' - label: 'Step 5' command: | if [ "$(buildkite-agent step get "outcome" --step "one")" = "hard_failed" ]; then buildkite-agent annotate 'this build failed' --style 'error' fi ``` ##### Understanding step states vs job states The `buildkite-agent step get` command returns _step_ `state` and `outcome` values.
The [REST](/docs/apis/rest-api) and [GraphQL](/docs/apis/graphql-api) APIs return [_job_ states](/docs/pipelines/configure/defining-steps#job-states). For more information regarding the difference between these values, see the definitions of [step](/docs/pipelines/glossary#step) and [job](/docs/pipelines/glossary#job). ##### Canceling a step Use this command to programmatically cancel all jobs for a step. It is possible to issue graceful and forced cancel commands. Force canceling a step can be used to cancel lost or hung jobs before their agents would otherwise be marked as lost. ###### Usage `buildkite-agent step cancel [options...]` ###### Description Cancel all unfinished jobs for a step ###### Example ```shell $ buildkite-agent step cancel --step "key" $ buildkite-agent step cancel --step "key" --force $ buildkite-agent step cancel --step "key" --force --force-grace-period-seconds 30 ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. 
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--step value` [#](#step) | The step to cancel. Can be either its ID (BUILDKITE_STEP_ID) or key (BUILDKITE_STEP_KEY) **Environment variable**: `$BUILDKITE_STEP_ID` | `--force ` [#](#force) | Transition unfinished jobs to a canceled state instead of waiting for jobs to finish uploading artifacts (default: false) **Environment variable**: `$BUILDKITE_STEP_CANCEL_FORCE` | `--force-grace-period-seconds value` [#](#force-grace-period-seconds) | The number of seconds to wait for agents to finish uploading artifacts before transitioning unfinished jobs to a canceled state. 
`--force` must also be supplied for this to take effect (default: 10) [$BUILDKITE_STEP_CANCEL_FORCE_GRACE_PERIOD_SECONDS, $BUILDKITE_CANCEL_GRACE_PERIOD] **Environment variable**: `$BUILDKITE_STEP_CANCEL_FORCE_GRACE_PERIOD_SECONDS` --- ### stop URL: https://buildkite.com/docs/agent/cli/reference/stop #### buildkite-agent stop The Buildkite agent's `stop` command is used to manually stop a running Buildkite agent. ##### Stopping an agent ###### Usage `buildkite-agent stop [options...]` ###### Description Stop the current agent. ###### Example ```shell #### Stops the agent gracefully after any currently running job completes $ buildkite-agent stop ``` ```shell #### Stops the agent, cancelling any currently running job $ buildkite-agent stop --force ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice.
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--agent-access-token value` [#](#agent-access-token) | The access token used to identify the agent **Environment variable**: `$BUILDKITE_AGENT_ACCESS_TOKEN` | `--endpoint value` [#](#endpoint) | The Agent API endpoint (default: "https://agent-edge.buildkite.com/v3") **Environment variable**: `$BUILDKITE_AGENT_ENDPOINT` | `--no-http2 ` [#](#no-http2) | Disable HTTP2 when communicating with the Agent API (default: false) **Environment variable**: `$BUILDKITE_NO_HTTP2` | `--debug-http ` [#](#debug-http) | Enable HTTP debug mode, which dumps all request and response bodies to the log (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_HTTP` | `--trace-http ` [#](#trace-http) | Enable HTTP trace mode, which logs timings for each HTTP request. Timings are logged at the debug level unless a request fails at the network level in which case they are logged at the error level (default: false) **Environment variable**: `$BUILDKITE_AGENT_TRACE_HTTP` | `--force ` [#](#force) | Cancel any currently running job (default: false) --- ### tool URL: https://buildkite.com/docs/agent/cli/reference/tool #### buildkite-agent tool The Buildkite agent's `tool` subcommands are used for performing tasks that are expected to be called by a human as part of setting up a pipeline, rather than during the execution of a job. Any of these subcommands may be moved into a separate CLI tool in the future, so they should all be considered experimental.
> 🛠 Experimental feature > The `tool` subcommand may be removed from the Buildkite agent in the future. ##### Generate a JSON Web Key Set ###### Usage `buildkite-agent tool keygen [options...]` ###### Description This command generates a new JWS key pair, used for signing and verifying jobs in Buildkite. The pair is written as a JSON Web Key Set (JWKS) to two files, a private JWKS file and a public JWKS file. The private JWKS should be used for signing, and the public JWKS for verification. For more information about JWS, see https://tools.ietf.org/html/rfc7515 and for information about JWKS, see https://tools.ietf.org/html/rfc7517 ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--alg value` [#](#alg) | The JWS signing algorithm to use for the key pair. Defaults to 'EdDSA'. Valid algorithms are: [PS512 ES512 EdDSA] **Environment variable**: `$BUILDKITE_AGENT_KEYGEN_ALG` | `--key-id value` [#](#key-id) | The ID to use for the keys generated.
If none is provided, a random one will be generated **Environment variable**: `$BUILDKITE_AGENT_KEYGEN_KEY_ID` | `--private-jwks-file value` [#](#private-jwks-file) | The filename to write the private key to. Defaults to a name based on the key id in the current directory **Environment variable**: `$BUILDKITE_AGENT_KEYGEN_PRIVATE_JWKS_FILE` | `--public-jwks-file value` [#](#public-jwks-file) | The filename to write the public keyset to. Defaults to a name based on the key id in the current directory **Environment variable**: `$BUILDKITE_AGENT_KEYGEN_PUBLIC_JWKS_FILE` ##### Sign a pipeline ###### Usage `buildkite-agent tool sign [options...] [pipeline-file]` ###### Description This command takes a pipeline in YAML format as input, and annotates the appropriate parts of the pipeline with signatures. This can then be input into the YAML steps editor in the Buildkite UI so that the agents running these steps can verify the signatures. If a token is provided using the `graphql-token` flag, the tool will attempt to retrieve the pipeline definition and repo using the Buildkite GraphQL API. If `update` is also set, it will update the pipeline definition with the signed version using the GraphQL API too. ###### Examples Retrieving the pipeline from the GraphQL API and signing it: ```shell $ buildkite-agent tool sign \ --graphql-token \ --organization-slug \ --pipeline-slug #### or $ cat pipeline.yml | buildkite-agent tool sign \ --jwks-file /path/to/private/key.json \ --repo ``` ###### Options | `--no-color ` [#](#no-color) | Don't show colors in logging (default: false) **Environment variable**: `$BUILDKITE_AGENT_NO_COLOR` | `--debug ` [#](#debug) | Enable debug mode. Synonym for `--log-level debug`. Takes precedence over `--log-level` (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG` | `--log-level value` [#](#log-level) | Set the log level for the agent, making logging more or less verbose. Defaults to notice. 
Allowed values are: debug, info, error, warn, fatal (default: "notice") **Environment variable**: `$BUILDKITE_AGENT_LOG_LEVEL` | `--experiment value` [#](#experiment) | Enable experimental features within the buildkite-agent **Environment variable**: `$BUILDKITE_AGENT_EXPERIMENT` | `--profile value` [#](#profile) | Enable a profiling mode, either cpu, memory, mutex or block **Environment variable**: `$BUILDKITE_AGENT_PROFILE` | `--graphql-token value` [#](#graphql-token) | A token for the buildkite graphql API. This will be used to populate the value of the repository URL, and download the pipeline definition. Both `repo` and `pipeline-file` will be ignored in preference of values from the GraphQL API if the token in provided. **Environment variable**: `$BUILDKITE_GRAPHQL_TOKEN` | `--update ` [#](#update) | Update the pipeline using the GraphQL API after signing it. This can only be used if `graphql-token` is provided (default: false) **Environment variable**: `$BUILDKITE_TOOL_SIGN_UPDATE` | `--no-confirm ` [#](#no-confirm) | Show confirmation prompts before updating the pipeline with the GraphQL API (default: false) **Environment variable**: `$BUILDKITE_TOOL_SIGN_NO_CONFIRM` | `--jwks-file value` [#](#jwks-file) | Path to a file containing a JWKS. **Environment variable**: `$BUILDKITE_AGENT_JWKS_FILE` | `--jwks-key-id value` [#](#jwks-key-id) | The JWKS key ID to use when signing the pipeline. If none is provided and the JWKS file contains only one key, that key will be used. **Environment variable**: `$BUILDKITE_AGENT_JWKS_KEY_ID` | `--signing-aws-kms-key value` [#](#signing-aws-kms-key) | The AWS KMS key identifier which is used to sign pipelines. **Environment variable**: `$BUILDKITE_AGENT_AWS_KMS_KEY` | `--signing-gcp-kms-key value` [#](#signing-gcp-kms-key) | The GCP KMS key identifier which is used to sign pipelines. 
This should be in the format projects/*/locations/*/keyRings/*/cryptoKeys/*/cryptoKeyVersions/* **Environment variable**: `$BUILDKITE_AGENT_GCP_KMS_KEY` | `--debug-signing ` [#](#debug-signing) | Enable debug logging for pipeline signing. This can potentially leak secrets to the logs as it prints each step in full before signing. Requires debug logging to be enabled (default: false) **Environment variable**: `$BUILDKITE_AGENT_DEBUG_SIGNING` | `--organization-slug value` [#](#organization-slug) | The organization slug. Required to connect to the GraphQL API. **Environment variable**: `$BUILDKITE_ORGANIZATION_SLUG` | `--pipeline-slug value` [#](#pipeline-slug) | The pipeline slug. Required to connect to the GraphQL API. **Environment variable**: `$BUILDKITE_PIPELINE_SLUG` | `--graphql-endpoint value` [#](#graphql-endpoint) | The endpoint for the Buildkite GraphQL API. This is only needed if you are using the the graphql-token flag, and is mostly useful for development purposes (default: "https://graphql.buildkite.com/v1") **Environment variable**: `$BUILDKITE_GRAPHQL_ENDPOINT` | `--repo value` [#](#repo) | The URL of the pipeline's repository, which is used in the pipeline signature. If the GraphQL token is provided, this will be ignored. **Environment variable**: `$BUILDKITE_REPO` --- ### Overview URL: https://buildkite.com/docs/pipelines/configure #### Pipeline configuration overview Pipelines are the top level containers for modelling and defining your workflows. Connecting pipelines to your source control allows you to run builds when your code changes. You can run anything with a Buildkite pipeline! 🚀 ##### What is a pipeline? A pipeline is a template of the steps you want to run. There are many types of steps, some run scripts, some define conditional logic, and others wait for user input. When you run a pipeline, a build is created. Each of the steps in the pipeline end up as jobs in the build, which then get distributed to available agents. 
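To make this concrete, here is a minimal sketch of a pipeline definition. The labels and script names are hypothetical, but each command step becomes a job, and the `wait` step enforces ordering:

```yml
steps:
  # Each command step becomes a job that any available agent can run.
  - label: "Lint"
    command: "scripts/lint.sh"      # hypothetical script
  - label: "Test"
    command: "scripts/test.sh"      # hypothetical script
  # Jobs after the wait only start once all jobs above have finished.
  - wait
  - label: "Package"
    command: "scripts/package.sh"   # hypothetical script
```

The two jobs before the `wait` can run in parallel on different agents; the packaging job runs only after both have finished.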
##### Is this only for running tests and deploying code? Not at all! You can do all kinds of exciting things with pipelines, like generating static sites, running data imports, provisioning servers, and automating app store submissions. You can even use pipelines to [create other pipelines](/docs/pipelines/uploading-pipelines) 😱 ##### Where's the best place to start? If you've completed [Getting started](/docs/pipelines/getting-started) and are looking to learn more about pipelines, we recommend you start with the following: - [Example pipelines](/docs/pipelines/configure/example-pipelines): Browse examples for various technologies and use cases. - [Defining steps](/docs/pipelines/configure/defining-steps): Learn how to write pipeline definitions. - [Step types](/docs/pipelines/configure/step-types): See the actions you can take in a pipeline. - [Environment variables](/docs/pipelines/configure/environment-variables): All the variables you can access in the build environment. --- ### Defining steps URL: https://buildkite.com/docs/pipelines/configure/defining-steps #### Defining your pipeline steps Pipeline steps are defined in YAML and are either stored in Buildkite or in your repository using a `pipeline.yml` file. Defining your pipeline steps in a `pipeline.yml` file gives you access to more configuration options and environment variables than the web interface, and allows you to version, audit and review your build pipelines alongside your source code. ##### Getting started On the **Pipelines** page, select **New pipeline** to begin creating a new pipeline. The required fields are **Git scope**, **Repository** and **Pipeline name**. Learn more about this page from [Understanding the New Pipeline page](/docs/pipelines/getting-started#create-a-new-pipeline-understanding-the-new-pipeline-page). Both the REST API and GraphQL API can be used to create a pipeline programmatically. 
See the [Pipelines REST API](/docs/apis/rest-api/pipelines) and the [GraphQL API](/docs/apis/graphql-api) for details and examples.

###### Webhooks for GitHub

Once your GitHub account is connected to Buildkite using the GitHub App, a webhook is automatically created for a repository whenever a Buildkite pipeline is created for that repository. This GitHub App connection is established as part of [signing up with GitHub](/docs/pipelines/getting-started#before-you-start) (see the [Getting started with Pipelines](/docs/pipelines/getting-started) page for details), or if you select the [**Git scope** > **Connect GitHub account** option](/docs/pipelines/getting-started#create-a-new-pipeline-understanding-the-new-pipeline-page) on the **New Pipeline** page. Learn more about configuring the GitHub App from the [GitHub page of Connect source control](/docs/pipelines/source-control/github).

###### Webhooks for other repository providers

For all other repository providers, you'll need to configure the connection to your repository provider manually from the [**Repository Providers** page](https://buildkite.com/organizations/~/repository-providers). You can access this page by selecting either of the following:

- **Git scope** > **Manage accounts** from the **New Pipeline** page.
- **Settings** in the global navigation > **Repository Providers** (within the **Integrations** section).

Doing this will allow you to use repositories from the relevant [**Git scope** > **Use remote URL**](/docs/pipelines/getting-started#create-a-new-pipeline-understanding-the-new-pipeline-page) options. Learn more about configuring these other repository providers from the [Source control](/docs/pipelines/source-control) section.

##### Adding steps

There are two ways to define steps in your pipeline: using the YAML step editor in Buildkite or with a `pipeline.yml` file.
The web steps visual editor is still available if you haven't migrated to [YAML steps](https://buildkite.com/changelog/99-introducing-the-yaml-steps-editor) but will be deprecated in the future. If you have not yet migrated to YAML steps, you can do so on your pipeline's settings page. See the [Migrating to YAML steps guide](/docs/pipelines/tutorials/pipeline-upgrade) for more information about the changes and the migration process.

However you add steps to your pipeline, keep in mind that steps may run on different agents. It is good practice to install your dependencies in the same step in which you run them.

##### Step defaults

If you're using [YAML steps](/docs/pipelines/tutorials/pipeline-upgrade), you can set defaults which will be applied to every command step in a pipeline unless they are overridden by the step itself. You can set default agent properties and default environment variables:

- `agents` - A map of agent characteristics such as `os` or `queue` that restrict what agents the command will run on
- `env` - A map of [environment variables](/docs/pipelines/configure/environment-variables) to apply to all steps

> 📘 Environment variable precedence
> Because you can set environment variables in many different places, be sure to check [environment variable precedence](/docs/pipelines/configure/environment-variables#environment-variable-precedence) to ensure your environment variables work as expected.

For example, to set steps `do-something.sh` and `do-something-else.sh` to use the `something` queue and the step `do-another-thing.sh` to use the `another` queue:

```yml
agents:
  queue: "something"

steps:
  - command: "do-something.sh"
  - command: "do-something-else.sh"
  - label: "Another"
    command: "do-another-thing.sh"
    agents:
      queue: "another"
```

###### YAML steps editor

To add steps using the YAML editor, click the 'Edit Pipeline' button on the Pipeline Settings page.
Starting your YAML with the `steps` object, you can add as many steps as you require of each different type. Quick reference documentation and examples for each step type can be found in the sidebar on the right.

###### pipeline.yml file

Before getting started with a `pipeline.yml` file, you'll need to tell Buildkite where it will be able to find your steps. In the YAML steps editor in your Buildkite dashboard, add the following YAML:

```yml
steps:
  - label: "\:pipeline\: Pipeline upload"
    command: buildkite-agent pipeline upload
```

When you eventually run a build from this pipeline, this step will look for a directory called `.buildkite` containing a file named `pipeline.yml`. Any steps it finds inside that file will be [uploaded to Buildkite](/docs/agent/cli/reference/pipeline#uploading-pipelines) and will appear during the build.

> 📘
> When using WSL2 or PowerShell Core, you cannot add a `buildkite-agent pipeline upload` command step directly in the YAML steps editor. To work around this, there are two options:
>
> - Use the YAML steps editor alone
> - Place the `buildkite-agent pipeline upload` command in a script file. In the YAML steps editor, add a command to run that script file. It will upload your pipeline.

Create your `pipeline.yml` file in a `.buildkite` directory in your repo. If you're using any tools that ignore hidden directories, you can store your `pipeline.yml` file either in the top level of your repository, or in a non-hidden directory called `buildkite`. The upload command will search these places if it doesn't find a `.buildkite` directory.

The following example YAML defines a pipeline with one command step that will echo 'Hello' into your build log:

```yml
steps:
  - label: "Example Test"
    command: echo "Hello!"
```

With the above example code in a `pipeline.yml` file, commit and push the file up to your repository. If you have set up webhooks, this will automatically create a new build.
You can also create a new build using the 'New Build' button on the pipeline page.

For more example steps and detailed configuration options, see the example `pipeline.yml` below, or the step type specific documentation:

- [command steps](/docs/pipelines/configure/step-types/command-step)
- [wait steps](/docs/pipelines/configure/step-types/wait-step)
- [block steps](/docs/pipelines/configure/step-types/block-step)
- [input steps](/docs/pipelines/configure/step-types/input-step)
- [trigger steps](/docs/pipelines/configure/step-types/trigger-step)
- [group steps](/docs/pipelines/configure/step-types/group-step)

> 📘
> You can also upload additional steps while a build is in progress. This is not a separate step type but an action performed by an agent within the context of a job. See [Dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) for more information.

If your pipeline has more than one step and you have multiple agents available to run them, the steps will automatically run at the same time. If your steps rely on running in sequence, you can separate them with [wait steps](/docs/pipelines/configure/step-types/wait-step). This will ensure that any steps before the wait are completed before steps after the wait can be run.

> 🚧 Explicit dependencies in uploaded steps
> If a step [depends](/docs/pipelines/configure/depends-on) on an upload step, then all steps uploaded by that step become dependencies of the original step. For example, if step B depends on step A, and step A uploads step C, then step B will also depend on step C.

When a step is run by an agent, it will be run with a clean checkout of the pipeline's repository. If your commands or scripts rely on the output from previous steps, you will need to either combine them into a single script or use [artifacts](/docs/pipelines/configure/artifacts) to pass data between steps. This enables any step to be picked up by any agent, and allows steps to run in parallel to speed up your build.
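As a sketch of the artifact approach described above (the script names and artifact path are hypothetical):

```yml
steps:
  - label: "Build"
    command: "scripts/build.sh"     # hypothetical script that writes to dist/
    artifact_paths: "dist/**/*"     # uploaded automatically when the job finishes
  - wait
  - label: "Test"
    commands:
      # This job may land on a different agent with a clean checkout,
      # so download the artifacts uploaded by the build step first.
      - "buildkite-agent artifact download 'dist/**/*' ."
      - "scripts/test.sh"           # hypothetical script
```

Because the test job downloads its inputs rather than relying on the build job's working directory, either job can run on any agent in the queue.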
##### Build states

When you run a pipeline, a build is created. The following diagram shows you how builds progress from start to end.

A build state can be one of the following values: `creating`, `scheduled`, `running`, `passed`, `failing`, `failed`, `blocked`, `canceling`, `canceled`, `skipped`, `not_run`.

You can query for `finished` builds to return builds in any of the following states: `passed`, `failed`, `blocked`, or `canceled`.

> 🚧
> When a [triggered build](/docs/pipelines/configure/step-types/trigger-step) fails, the step that triggered it will be stuck in the `running` state forever.
> When all the steps in a build are skipped (either by using the skip attribute or by using the `if` condition), the build state will be marked as `not_run`.
> By default, all steps depend on the step that uploads them. They will not run until the uploading step is finished.
> Unlike the [`notify` attribute](/docs/pipelines/configure/notify), the build state value for a [`steps` attribute](/docs/pipelines/configure/defining-steps) may differ depending on the state of a pipeline. For example, when a build is blocked within a `steps` section, the `state` value in the [API response for getting a build](/docs/apis/rest-api/builds#get-a-build) retains its last value (for example, `passed`), rather than having the value `blocked`, and instead, the response also returns a `blocked` field with a value of `true`.

###### Build timestamps

Each build has several timestamps that track its lifecycle from creation to completion. The expected chronological order is: `created_at` → `scheduled_at` → `started_at` → `finished_at`.

Timestamp | Description
---------------- | -----------
`created_at` | When the build record was initially created in the database. This happens when a build is first triggered (via API, webhook, UI, etc.) and the build enters the `creating` state.
`scheduled_at` | When the build is scheduled to run. For scheduled builds (triggered from pipeline schedules), this represents the intended execution time.
`started_at` | When the build begins executing (transitions from `scheduled` to `started` state). This occurs when the first job starts running, marking the build as active.
`finished_at` | When the build reaches a terminal state (`passed`, `failed`, `canceled`, `skipped`, or `not_run`). This is set when all jobs are complete and the build's final state is determined.

> 📘 Builds with job retries
> A build's `started_at` timestamp can be more recent than some of its jobs' `started_at` timestamps. This occurs when builds move from terminal states back to non-terminal states when failed jobs are retried.

##### Job states

When you run a pipeline, a build is created. Each of the steps in the pipeline ends up as a job in the build, and those jobs are then distributed to available agents. Job states have a similar flow to [build states](#build-states) but with a few extra states. The following diagram shows you how jobs progress from start to end.

> 📘 API state differences
> The table of job states below describes the internal lifecycle states, where `finished` is the terminal state. The [REST API](/docs/apis/rest-api) flattens `finished` into `passed` or `failed` based on the job's exit status, so `jobs[].state` will be `passed` or `failed` rather than `finished`. The [GraphQL API](/docs/apis/graphql-api) uses `finished` for all completed jobs, regardless of exit status.

Job state | Description
----------------------| -----------------------------------------
`pending` | The job has just been created and doesn't have a state yet.
`waiting` | The job is waiting on a wait step to finish.
`waiting_failed` | The job was in a `waiting` state when the build failed.
`blocked` | The job is waiting on a block step to finish.
`blocked_failed` | The job was in a `blocked` state when the build failed.
`unblocked` | This block job has been manually unblocked.
`unblocked_failed` | This block job was in an `unblocked` state when the build failed. `limiting` | The job is waiting on a concurrency group check before becoming either `limited` or `scheduled`. `limited` | The job is waiting for jobs with the same concurrency group to finish. `scheduled` | The job is scheduled and waiting for an agent. `assigned` | The job has been assigned to an agent, and it's waiting for it to accept. `accepted` | The job was accepted by the agent, and now it's waiting to start running. `running` | The job is running. `finished` | The job has finished (internal lifecycle state; REST API returns `passed` or `failed` instead). `passed` | The job finished successfully (REST API only; returned instead of `finished` for successful jobs). `failed` | The job finished with a failure (REST API only; returned instead of `finished` for failed jobs). `canceling` | The job is currently canceling. `canceled` | The job was canceled. `timing_out` | The job is timing out for taking too long. `timed_out` | The job timed out. `skipped` | The job was skipped. `broken` | The job's configuration means that it can't be run. `expired` | The job expired before it was started on an agent. `platform_limiting` | The job is waiting for limits imposed by Buildkite to be checked before moving to `platform_limited` or `scheduled`. `platform_limited` | The job is waiting for capacity within limits imposed by Buildkite to become available before moving to `scheduled`. 
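The `limiting` and `limited` states in the table above typically appear on steps that declare a concurrency group. A sketch using the `concurrency` and `concurrency_group` command step attributes (the command is hypothetical):

```yml
steps:
  - label: "Deploy"
    command: "scripts/deploy.sh"   # hypothetical script
    # Only one job from this group may run at a time; additional jobs
    # wait in the limiting/limited states until a slot is free.
    concurrency: 1
    concurrency_group: "my-app/deploy"
```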
As well as the states shown in the diagram, the following progressions can occur:

can progress to `skipped` | can progress to `canceling` or `canceled`
-------------------------- | -----------------------------------------
`pending` | `accepted`
`waiting` | `pending`
`blocked` | `limiting`
`limiting` | `limited`
`limited` | `blocked`
`accepted` | `unblocked`
`broken` | `platform_limiting`
`platform_limiting` | `platform_limited`
`platform_limited` |

Differentiating between `broken`, `skipped` and `canceled` states:

- Jobs become `broken` when their configuration prevents them from running. This might be because their branch configuration doesn't match the build's branch, or because a conditional returned false.
- This is distinct from `skipped` jobs, which might happen if a newer build is started and [build skipping](/docs/apis/rest-api/pipelines#create-a-yaml-pipeline) is enabled. Broadly, jobs break because of something inside the build, and are skipped by something outside the build.
- Jobs can be `canceled` intentionally, either using the Buildkite interface or one of the APIs.

Differentiating between `timing_out`, `timed_out`, and `expired` states:

- Jobs move through `timing_out` to `timed_out` when they start running on an agent but don't complete within the timeout period.
- Jobs become `expired` when they reach the scheduled job expiry timeout before being picked up by an agent.

See [Build timeouts](/docs/pipelines/configure/build-timeouts) for information about setting timeout values.

> 📘 REST API state mapping
> The [REST API](/docs/apis/rest-api) maps the internal `finished` state to `passed` or `failed` based on the job's exit status. When querying job states via the REST API, you'll see `passed` or `failed` instead of `finished`. The REST API also lists `limiting` and `limited` as `scheduled` for legacy compatibility.
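The REST API mapping above can be checked against a build response. The sketch below writes an illustrative (not real) response to a file rather than calling the API, then extracts the job states with standard shell tools; the curl invocation in the comment shows the shape of a real request, with placeholder slugs:

```shell
# A real request would look like:
#   curl -H "Authorization: Bearer $BUILDKITE_API_TOKEN" \
#     "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}" \
#     > build.json
# Illustrative response standing in for a live call:
cat > build.json <<'EOF'
{
  "state": "failed",
  "jobs": [
    { "id": "job-1", "state": "passed" },
    { "id": "job-2", "state": "failed" }
  ]
}
EOF

# Job states come back as "passed" or "failed"; the internal
# "finished" state never appears in REST responses.
grep -o '"state": "[a-z_]*"' build.json | tail -n +2
```

Note that the first `"state"` in the response is the build's own state, which is why the pipeline above drops the first match.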
A job state can be one of the following values: `pending`, `waiting`, `waiting_failed`, `blocked`, `blocked_failed`, `unblocked`, `unblocked_failed`, `limiting`, `limited`, `scheduled`, `assigned`, `accepted`, `running`, `finished`, `passed`, `failed`, `canceling`, `canceled`, `expired`, `timing_out`, `timed_out`, `skipped`, `broken`, `platform_limiting`, or `platform_limited`. Note: `finished` is an internal lifecycle terminal state. The REST API maps `finished` to `passed` or `failed` based on the job's exit status, so REST API responses will display `passed` or `failed` instead of `finished`. The GraphQL API uses `finished` for all completed jobs, regardless of the exit status. Each job in a build also has a footer that displays exit status information. It may include an exit signal reason, which indicates whether the Buildkite agent stopped or the job was canceled. > 🚧 > Exit status information is available in the [GraphQL API](/docs/apis/graphql-api) but not the [REST API](/docs/apis/rest-api). ###### Job timestamps Each job has several timestamps that track its lifecycle from creation to completion. The expected chronological order is: `created_at` → `scheduled_at` → `runnable_at` → `started_at` → `finished_at`. Timestamp | Description ---------------- | ----------- `created_at` | When the job record was first created in the database. This happens when a build's pipeline is processed and jobs are created in the `pending` state. `scheduled_at` | When the job was intended to run. This is set during initial job creation and defaults to the job's `created_at` timestamp. `runnable_at` | When the job became ready for agent assignment and eligible to run. This is set when the job transitions to the `scheduled` state after resolving dependencies (for example, wait steps, manual blocks, concurrency limits, or other dependencies). `started_at` | When an agent confirmed it had started running the job (and the job transitions to the `running` state). 
This occurs after the job has been `assigned` to an agent, `accepted` by the agent, and the agent sends the first log output indicating that the execution has begun.
`finished_at` | When the job reaches a terminal state (`finished`, `canceled`, `timed_out`, `skipped`, or `expired`). Transitioning to this state marks the completion of the job's execution, whether successful or not.

###### Platform limits

Platform limits are restrictions imposed by Buildkite on usage within your Buildkite organization. Jobs will enter the `platform_limiting` and `platform_limited` states when these limits are being evaluated or enforced. The following platform limits may apply:

- **Job concurrency limits**: A Buildkite organization on the [Personal](https://buildkite.com/pricing/) plan has a total concurrency limit of three jobs that applies across both [Buildkite hosted agents](/docs/agent/buildkite-hosted) and [self-hosted agents](/docs/pipelines/architecture). When jobs are scheduled beyond this limit, they will be queued using the platform limiting states. To remove or increase this limit for your Buildkite organization, upgrade to at least the [Pro plan](https://buildkite.com/organizations/~/billing/plan_changes/new?plan_id=platform_pro_monthly_plan), or reach out to Buildkite support at support@buildkite.com for help.

##### Example pipeline

Here's a more complete example based on [the Buildkite agent's build pipeline](https://github.com/buildkite/agent/blob/main/.buildkite/pipeline.yml).
It contains script commands, wait steps, block steps, and automatic artifact uploading:

```yml
steps:
  - label: "\:hammer\: Tests"
    command: scripts/tests.sh
    env:
      BUILDKITE_DOCKER_COMPOSE_CONTAINER: app
  - wait
  - label: "\:package\: Package"
    command: scripts/build-binaries.sh
    artifact_paths: "pkg/*"
    env:
      BUILDKITE_DOCKER_COMPOSE_CONTAINER: app
  - wait
  - label: "\:debian\: Publish"
    command: scripts/build-debian-packages.sh
    artifact_paths: "deb/**/*"
    branches: "main"
    agents:
      queue: "deploy"
  - block: "\:shipit\: Release"
    branches: "main"
  - label: "\:github\: Release"
    command: scripts/build-github-release.sh
    artifact_paths: "releases/**/*"
    branches: "main"
  - wait
  - label: "\:whale\: Update images"
    command: scripts/release-docker.sh
    branches: "main"
    agents:
      queue: "deploy"
```

##### Step types

Buildkite pipelines are made up of the following step types:

- [Command step](/docs/pipelines/configure/step-types/command-step)
- [Wait step](/docs/pipelines/configure/step-types/wait-step)
- [Block step](/docs/pipelines/configure/step-types/block-step)
- [Input step](/docs/pipelines/configure/step-types/input-step)
- [Trigger step](/docs/pipelines/configure/step-types/trigger-step)
- [Group step](/docs/pipelines/configure/step-types/group-step)

##### Customizing the pipeline upload path

By default the pipeline upload step reads your pipeline definition from `.buildkite/pipeline.yml` in your repository. You can specify a different file path by adding it as the first argument:

```yml
steps:
  - label: "\:pipeline\: Pipeline upload"
    command: buildkite-agent pipeline upload .buildkite/deploy.yml
```

A common use for custom file paths is when separating test and deployment steps into two separate pipelines. Both `pipeline.yml` files are stored in the same repo and both Buildkite pipelines use the same repo URL.
For example, your test pipeline's upload command could be:

```
buildkite-agent pipeline upload .buildkite/pipeline.yml
```

And your deployment pipeline's upload command could be:

```
buildkite-agent pipeline upload .buildkite/pipeline.deploy.yml
```

For a list of all command line options, see the [buildkite-agent pipeline upload](/docs/agent/cli/reference/pipeline#uploading-pipelines) documentation.

##### Targeting specific agents

To run [command steps](/docs/pipelines/configure/step-types/command-step) only on specific agents:

1. In the agent configuration file, [tag the agent](/docs/agent/cli/reference/start#setting-tags)
1. In the pipeline command step, [set the agent property](/docs/agent/cli/reference/start#agent-targeting)

For example, to run commands only on agents running on macOS:

```yml
steps:
  - command: "script.sh"
    agents:
      os: "macOS"
```

##### Further documentation

You can also upload pipelines from the command line using the `buildkite-agent` command line tool. See the [buildkite-agent pipeline documentation](/docs/agent/cli/reference/pipeline) for a full list of the available parameters.

---

### Overview

URL: https://buildkite.com/docs/pipelines/configure/step-types

#### Step types overview

A step describes a single, self-contained task as part of a pipeline. There are different types of steps to use depending on the task. The flexibility and extensibility of steps let you create highly customized and efficient pipelines tailored to your needs. By understanding these step types, you'll be in a good position to design, build, and manage your pipelines effectively.
The following pages describe the different step types:

- [Command step](/docs/pipelines/configure/step-types/command-step)
- [Wait step](/docs/pipelines/configure/step-types/wait-step)
- [Block step](/docs/pipelines/configure/step-types/block-step)
- [Input step](/docs/pipelines/configure/step-types/input-step)
- [Trigger step](/docs/pipelines/configure/step-types/trigger-step)
- [Group step](/docs/pipelines/configure/step-types/group-step)

---

### Command step

URL: https://buildkite.com/docs/pipelines/configure/step-types/command-step

#### Command step

A command step runs one or more shell commands on one or more agents. Each command step can run either a shell command like `npm test`, or an executable file or script like `build.sh`.

A command step can be defined in your pipeline settings, or in your [pipeline.yml](/docs/pipelines/configure/defining-steps) file.

```yml
steps:
  - command: "tests.sh"
```

To have a set of commands execute sequentially in a single step, use the `command` syntax followed by a `|` symbol:

```yml
steps:
  - command: |
      "tests.sh"
      "echo 'running tests'"
```

You can also define multiple commands by using the `commands` syntax and starting each new command on a new line:

```yml
steps:
  - commands:
      - "tests.sh"
      - "echo 'running tests'"
```

When running multiple commands, either defined in a single line (`npm install && tests.sh`) or defined in a list, any failure will prevent subsequent commands from running, and will mark the command step as failed.

The results of running the commands defined in separate command steps are not guaranteed to be available to the subsequent command steps as those steps could be running on a different machine in the [cluster queue](/docs/agent/queues/managing#setting-up-queues).

> 📘 Commands and `PATH`
> The shell command(s) provided for execution must be resolvable through the directories defined within the `PATH` environment variable.
When referencing scripts for execution, preference using a relative path (for example, `./scripts/build.sh`, or `scripts/bin/build-prod`). ##### Command step attributes Required attributes: | `command` | The shell command/s to run during this step. This can be a single line of commands, or a list of commands that must all pass. _Example:_ `"build.sh"` _Example:_ `- "npm install"` `- "./tests.sh"` _Alias:_ `commands` ```yml steps: - commands: - "npm install && npm test" - "extras/moretests.sh" - "./build.sh" ``` > 📘 Pipelines without command steps > Although the `command` attribute is required for a command step, some [plugins](/docs/pipelines/integrations/plugins/using#adding-a-plugin-to-your-pipeline) work without a command step, so it isn't strictly necessary for your pipeline to have an explicit command step. Optional attributes: | `agents` | A map of [agent tag](/docs/agent/cli/reference/start#setting-tags) keys to values to [target specific agents](/docs/agent/cli/reference/start#agent-targeting) for this step. _Example:_ `npm: "true"` _Alias:_ `agent_query_rules` | `allow_dependency_failure` | Whether to continue to run this step if any of the steps named in the `depends_on` attribute fail. _Default:_ `false` | `artifact_paths` | The [glob path](/docs/pipelines/configure/glob-pattern-syntax) or paths of [artifacts](/docs/agent/cli/reference/artifact) to upload from this step. This can be a single line of paths separated by semicolons, or a list. _Example:_ `"logs/**/*;coverage/**/*"` _Example:_ `- "logs/**/*"` `- "coverage/**/*"` _Alias:_ `artifacts` | `branches` | The [branch pattern](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples) defining which branches will include this step in their builds. _Example:_ `"main stable/*"` | `cancel_on_build_failing` | Setting this attribute to `true` cancels the job as soon as the build is marked as [failing](/docs/pipelines/configure/defining-steps#build-states). 
_Default:_ `"false"` | `concurrency` | The [maximum number of jobs](/docs/pipelines/configure/workflows/controlling-concurrency#concurrency-limits) created from this step that are allowed to run at the same time. If you use this attribute, you must also define a label for it with the `concurrency_group` attribute. _Example:_ `3` | `concurrency_group` | A unique name for the concurrency group that you are creating. If you use this attribute, you must also define the `concurrency` attribute. _Example:_ `"my-app/deploy"` | `concurrency_method` | This attribute provides control of the scheduling method for jobs in a [concurrency group](/docs/pipelines/configure/workflows/controlling-concurrency). With the `"ordered"` value set, the jobs run sequentially in the order they were queued, while the `"eager"` value allows jobs to run as soon as resources become available. If you use this attribute, you must also define the `concurrency` and `concurrency_group` attributes. _Default:_ `"ordered"` _Example:_ `"eager"` | `depends_on` | A list of step keys that this step depends on. This step will only run after the named steps have completed. See [managing step dependencies](/docs/pipelines/configure/depends-on) for more information. _Example:_ `"test-suite"` | `env` | A map of [environment variables](/docs/pipelines/configure/environment-variables) for this step. _Example:_ `RAILS_ENV: "test"` | `secrets` | Either an array of [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) or a map of environment variables names to Buildkite secrets for this step. _Example:_ `- API_ACCESS_TOKEN` | `if` | A boolean expression that omits the step when false. See [Using conditionals](/docs/pipelines/configure/conditionals) for supported expressions. _Example:_ `build.message != "skip me"` | `key` | A unique string to identify the command step. The value is available in the `BUILDKITE_STEP_KEY` [environment variable](/docs/pipelines/configure/environment-variables). 
Keys cannot have the same pattern as a UUID (`xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`). _Example:_ `"linter"` _Aliases:_ `identifier`, `id` | `label` | The label that will be displayed in the pipeline visualization in Buildkite. Supports emoji. _Example:_ `"\:hammer\: Tests" will be rendered as ":hammer: Tests"` _Alias:_ `name` | `matrix` | Either an array of values to be used in the matrix expansion, or a single `setup` key with an optional `adjustments` key. ```yml steps: - label: "{{matrix}} build" command: "echo '.buildkite/steps/build-binary.sh {{matrix}}'" matrix: - "macOS" - "Linux" ``` | `parallelism` | The number of [parallel jobs](/docs/pipelines/tutorials/parallel-builds#parallel-jobs) that will be created based on this step. _Example:_ `3` | `plugins` | An array of [plugins](/docs/pipelines/integrations/plugins) for this step. _Example:_ `- docker-compose#v1.0.0: run: app` | `priority` | Adjust the [priority](/docs/pipelines/configure/workflows/job-priority) for a specific job, as a positive or negative integer. _Example:_ `- command: "will-run-first.sh" priority: 1` | `retry` | The conditions for retrying this step. Available types: `automatic`, `manual` For detailed configuration options, see [Retry](/docs/pipelines/configure/retry). | `skip` | Whether to skip this step or not. Passing a string (with a 70-character limit) provides a reason for skipping this command. Passing an empty string is equivalent to `false`. Note: Skipped steps will be hidden in the pipeline view by default, but can be made visible by toggling the 'Skipped jobs' icon. _Example:_ `true` _Example:_ `false` _Example:_ `"My reason"` | `soft_fail` | Allow specified non-zero exit statuses not to fail the build. Can be either `true` to make all exit statuses soft-fail or an `array` of allowed soft failure exit statuses with the `exit_status` attribute. Use `exit_status: "*"` to allow all non-zero exit statuses not to fail the build.
_Example:_ `true` _Example:_ `- exit_status: 1` _Example:_ `- exit_status: "*"` See [Soft fail](/docs/pipelines/configure/soft-fail) for more details. | `timeout_in_minutes` | The maximum number of minutes a job created from this step is allowed to run. If the job exceeds this time limit, it automatically times out. A job that times out with an exit status of `0` is marked as `passed`. You can also set [default and maximum timeouts](/docs/pipelines/configure/build-timeouts) in the Buildkite UI, or [update a job's timeout dynamically](/docs/pipelines/configure/build-timeouts#command-timeouts-updating-timeouts-during-a-job) while it is running. _Example:_ `60` _Alias:_ `timeout` > 📘 Signed pipelines > When [signed pipelines](/docs/agent/self-hosted/security/signed-pipelines) are enabled, command steps also include a `signature` object with fields such as `value`, `version`, `hashing_algorithm`, and `signed_attributes`. This object is computed by the agent during pipeline upload and is not user-configurable. ##### Agent-applied attributes These attributes are _only_ applied by the Buildkite agent when uploading a pipeline (`buildkite-agent pipeline upload`), since they require direct access to your code or repository to process correctly. > 🚧 > Agent-applied attributes are not accepted in pipelines set using the Buildkite interface. ###### if_changed | `if_changed` | A [glob pattern](/docs/pipelines/configure/glob-pattern-syntax) that omits the step from a build if it does not match any files changed in the build. _Example:_ `"{**.go,go.mod,go.sum,fixtures/**}"` From version 3.109.0 of the Buildkite agent, `if_changed` also supports lists of glob patterns and `include` and `exclude` attributes. 
_Minimum Buildkite agent versions:_ 3.99 (with `--apply-if-changed` flag), 3.103.0 (enabled by default), 3.109.0 (expanded syntax) For an example pipeline, demonstrating various forms of `if_changed`, see [Using `if_changed`](/docs/pipelines/configure/dynamic-pipelines/if-changed). ##### Container image attributes The `image` attribute can be used with either the [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s) controller to run your [Buildkite agents](/docs/agent), or [Buildkite hosted agents](/docs/agent/buildkite-hosted). - If you are running your Buildkite agents using the Agent Stack for Kubernetes, you can use the `image` attribute to specify a [container image](/docs/agent/self-hosted/agent-stack-k8s/podspec#podspec-command-and-interpretation-of-arguments-custom-images) for a command step to run its job in. - If you are using Buildkite hosted agents, support for the `image` attribute is experimental and subject to change. | `image` | A fully qualified image reference string. The [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s) controller will configure the [custom image](/docs/agent/self-hosted/agent-stack-k8s/podspec#podspec-command-and-interpretation-of-arguments-custom-images) for the `command` container of this job. The value is available in the `BUILDKITE_IMAGE` [environment variable](/docs/pipelines/configure/environment-variables). _Example:_ `"alpine:latest"` > 🚧 > Support for this `image` attribute is currently experimental. Example pipeline, showing how build and step level `image` attributes interact: ```yml image: "ubuntu:22.04" # The default image for the pipeline's build steps: - name: "\:node\: Frontend tests" command: | cd frontend npm ci npm test image: "node:18" # This step's job uses the node:18 image - name: "\:golang\: Backend tests" command: | cd backend go mod download go test ./... 
image: "golang:1.21" # This step's job uses the golang:1.21 image - name: "\:package\: Package application" command: | apt-get update && apt-get install -y zip zip -r app.zip frontend/ backend/ # No image specified in this step. # Therefore, this step's job uses the pipeline's default ubuntu:22.04 image ``` ##### Matrix attributes | `setup` | A map of dimension names, each containing an array of elements. The job matrix is built from every combination of elements across the dimensions. | `adjustments` | An array of `with` entries, each mapping an element to every dimension listed in `setup`, as well as the attribute to modify for that combination. Currently, only `soft_fail` and `skip` can be modified. ```yaml steps: - label: "💥 Matrix build with adjustments" command: "echo {{matrix.os}} {{matrix.arch}} {{matrix.test}}" matrix: setup: arch: - "amd64" - "arm64" os: - "windows" - "linux" test: - "A" - "B" adjustments: - with: os: "windows" arch: "arm64" test: "B" soft_fail: true - with: os: "linux" arch: "arm64" test: "B" skip: true ``` ##### Fast-fail running jobs To automatically cancel any remaining jobs as soon as any job in the build fails (except jobs marked as `soft_fail`), add the `cancel_on_build_failing: true` attribute to your command steps. When a job fails, the build enters a _failing_ state. Any jobs still running that have `cancel_on_build_failing: true` are automatically canceled. Once all running jobs have been canceled, the build is marked as _failed_ due to the initial job failure.
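The fast-fail behavior described above can be sketched as a pair of command steps; the script names here are hypothetical:

```yml
steps:
  # If either job fails, the build enters the failing state and the
  # other job, if still running, is canceled automatically.
  - label: "Unit tests"
    command: "unit-tests.sh"        # hypothetical script
    cancel_on_build_failing: true
  - label: "Integration tests"
    command: "integration-tests.sh" # hypothetical script
    cancel_on_build_failing: true
```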
##### Example ```yml steps: - label: "\:hammer\: Tests" commands: - "npm install" - "npm run tests" branches: "main" env: NODE_ENV: "test" agents: npm: "true" queue: "tests" artifact_paths: - "logs/**/*" - "coverage/**/*" parallelism: 5 timeout_in_minutes: 3 retry: automatic: - exit_status: -1 limit: 2 - exit_status: 143 limit: 2 - exit_status: 255 limit: 2 - label: "Visual diff" commands: - "npm install" - "npm run visual-diff" cancel_on_build_failing: true retry: automatic: limit: 3 - label: "Skipped job" command: "broken.sh" cancel_on_build_failing: true skip: "Currently broken and needs to be fixed" - wait: ~ - label: "\:shipit\: Deploy" command: "deploy.sh" branches: "main" concurrency: 1 concurrency_group: "my-app/deploy" concurrency_method: "eager" retry: manual: allowed: false reason: "Sorry, you can't retry a deployment" - wait: ~ - label: "Smoke test" command: "smoke-test.sh" soft_fail: - exit_status: 1 ``` --- ### Wait step URL: https://buildkite.com/docs/pipelines/configure/step-types/wait-step #### Wait step A _wait_ step waits for all previous steps to have successfully completed before allowing following jobs to continue. A wait step can be defined in your pipeline settings, or in your [pipeline.yml](/docs/pipelines/uploading-pipelines) file. It can be placed between steps to ensure that previous steps are successful before continuing to run the rest. ```yml - command: "command.sh" - wait: ~ - command: "echo The command passed" ``` ##### Dynamically uploaded steps If any step before a wait step uploads new steps using [`pipeline upload`](/docs/agent/cli/reference/pipeline#uploading-pipelines), the wait step automatically waits for those uploaded steps to complete as well. This applies recursively. If an uploaded step uploads further steps, the wait step also waits for those. 
For example, if step A uploads steps B1 and B2 during its execution: ```yaml steps: - label: "Step A" command: "buildkite-agent pipeline upload extra-steps.yml" - wait: ~ - label: "Step C" command: "echo 'Runs after A, B1, and B2 complete'" ``` Step C only runs after step A _and_ the uploaded steps B1 and B2 have all completed. A single wait step is sufficient. You do not need multiple consecutive wait steps to cover dynamically uploaded steps. Optional attributes: | `continue_on_failure` | Run the next step, even if the previous step has failed. _Example:_ `true` | `branches` | The [branch pattern](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples) defining which branches will include this wait step in their builds. _Example:_ `"main stable/*"` | `if` | A boolean expression that omits the step when false. See [Using conditionals](/docs/pipelines/configure/conditionals) for supported expressions. _Example:_ `build.message != "skip me"` | `depends_on` | A list of step keys that this step depends on. This step will only proceed after the named steps have completed. See [managing step dependencies](/docs/pipelines/configure/depends-on) for more information. _Example:_ `"test-suite"` | `key` | A unique string to identify the wait step. Keys cannot have the same pattern as a UUID (`xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`). _Example:_ `"confirmation"` _Aliases:_ `identifier`, `id` | `allow_dependency_failure` | Whether to continue to proceed past this step if any of the steps named in the `depends_on` attribute fail. _Default:_ `false` ##### Conditional wait You can use a conditional to only wait when certain conditions are met: ```yml steps: - wait: ~ if: build.branch == "develop" || build.branch == "main" ``` ##### Continuing on failure You can also configure the wait step to continue even if the previous steps failed.
If steps failed, the build will be marked as failed only after any steps after the `wait` with `continue_on_failure: true` have completed. This is useful for processing results from previous steps, for example, test coverage or summarizing test failures. Successful steps that run after a `continue_on_failure` step will not affect the status of the build; if there has been a failure, the build will be marked as failed. In the example below, if `command.sh` succeeds, both of the following command steps will be run. If `command.sh` fails, only the first will be run, and the build will then be marked as failed. ```yml steps: - command: "command.sh" - wait: ~ continue_on_failure: true - command: "echo This runs regardless of the success or failure" - wait: ~ - command: "echo The command passed" ``` If there's a failure followed by a regular wait step, nothing after the wait step will run, including any subsequent wait steps with `continue_on_failure: true`. In the example below, when the first command fails, the second and third commands will not run: ```yml steps: - command: "exit -1" - wait: ~ - command: "echo SECOND command" - wait: ~ continue_on_failure: true - command: "echo THIRD command" ``` Any wait steps with `continue_on_failure: true` that aren't separated by regular wait steps will all run if a failure occurs. In the below example, after the first command fails, the second, third, and fourth commands will all run: ```yml steps: - command: "exit -1" - wait: ~ continue_on_failure: true - command: "echo SECOND command" - command: "echo THIRD command" - wait: ~ continue_on_failure: true - command: "echo FOURTH command" ``` The explicit null `~` character used in the above examples isn't required, but is recommended as a best practice. It ensures that nothing else is accidentally added to the `wait` before the `continue_on_failure` attribute. 
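A common use of `continue_on_failure` is summarizing test results even when the tests fail, while still gating later steps on success. A minimal sketch, assuming hypothetical script names:

```yml
steps:
  - label: "Tests"
    command: "run-tests.sh"             # hypothetical test script
  - wait: ~
    continue_on_failure: true
  # Runs whether the tests passed or failed; if they failed,
  # the build is still marked as failed afterwards.
  - label: "Summarize results"
    command: "annotate-test-results.sh" # hypothetical reporting script
  - wait: ~
  # Only runs when everything before it passed.
  - label: "Deploy"
    command: "deploy.sh"                # hypothetical deploy script
```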
###### Continuing after cancelation The `continue_on_failure` attribute enables builds to continue after a failed job but not after a cancelation. If you cancel a job, any subsequent wait steps with `continue_on_failure: true` do not execute. For example, if you cancel the first command, the second command doesn't run in the following `pipeline.yml` file: ```yml steps: - command: "run-first-command.sh" - wait: ~ continue_on_failure: true - command: "run-second-command.sh" ``` ##### Block steps interacting with wait steps If a block step follows or precedes a wait step in your build, the wait step will be ignored and only the block step will run, as in this example: ```yml steps: - command: ".buildkite/steps/yarn" - wait: ~ - block: "unblock me" ``` But let's consider a different example. Now the wait step (with `continue_on_failure: true`) will be ignored, but the block step will _also not run_, because the 'previous' command step failed. ```yml steps: - command: "exit -1" - wait: ~ continue_on_failure: true - block: "unblock me" ``` If you need to run a block step after a failed step, set [`soft_fail`](/docs/pipelines/configure/soft-fail) on the failing step: ```yml steps: - command: "exit -1" soft_fail: - exit_status: "*" - block: "unblock me" ``` Alternatively, it is possible to use a wait step with a block step if conditionals are used. In these cases, the wait step must come before the block step: ```yml steps: - command: ".buildkite/steps/yarn" - wait: if: build.source == "schedule" - block: "Deploy changes?" if: build.branch == pipeline.default_branch && build.source != "schedule" - command: ".buildkite/scripts/deploy" if: build.branch == pipeline.default_branch ``` --- ### Block step URL: https://buildkite.com/docs/pipelines/configure/step-types/block-step #### Block step A _block_ step is used to pause the execution of a build and wait for a team member to unblock it using the web or the [API](/docs/apis/rest-api/jobs#unblock-a-job).
A block step is functionally identical to an [input step](/docs/pipelines/configure/step-types/input-step), except that a block step creates [implicit dependencies](/docs/pipelines/configure/depends-on#implicit-dependencies-with-wait-and-block) on the steps before and after it. Note that explicit dependencies specified by `depends_on` take precedence over implicit dependencies; subsequent steps will run when the step they depend on passes, without waiting for `block` or `wait` steps, unless those are also explicit dependencies. A block step can be defined in your pipeline settings, or in your [pipeline.yml](/docs/pipelines/configure/defining-steps) file. Once all steps before the block have completed, the pipeline will pause and wait for a team member to unblock it. Clicking on a block step in the Buildkite web UI opens a dialog box asking if you'd like to continue. ```yaml steps: - block: "\:rocket\: Are we ready?" ``` You can add form fields to block steps using the `fields` attribute. Block steps with input fields can only be defined using a `pipeline.yml`. There are two field types available: `text` or `select`. The `select` input type displays differently depending on how you configure the options. If you allow people to select multiple options, the options display as checkboxes. If users must select a single option from six or fewer options, the options display as radio buttons. Otherwise, the options display in a dropdown menu. The data you collect from these fields is then available to subsequent steps in the pipeline in the [build meta-data](/docs/pipelines/configure/build-meta-data). In this example, the `pipeline.yml` defines an input step with the key `release-name`. The Bash script then accesses the value of the step using the [meta-data](/docs/agent/cli/reference/meta-data) command.
```yaml - block: "Release" prompt: "Fill out the details for release" fields: - text: "Release Name" key: "release-name" ``` ```bash RELEASE_NAME=$(buildkite-agent meta-data get release-name) ``` For a complete example pipeline, including dynamically generated input fields, see the [Block step example pipeline](https://github.com/buildkite/block-step-example/blob/main/.buildkite/pipeline.yml) on GitHub: [:pipeline: Block Step Example Pipeline github.com/buildkite/block-step-example](https://github.com/buildkite/block-step-example) ##### Block step attributes Input and block steps have the same attributes available for use. Optional attributes: | `prompt` | The instructional message displayed in the dialog box when the unblock step is activated. _Example:_ `"Release to production?"` _Example:_ `"Fill out the details for this release"` | `submit` | The label on the button that submits the dialog and unblocks the step. Defaults to `"Continue"` if not set. _Example:_ `"Yes, deploy it!"` _Default:_ `Continue` | `fields` | A list of input fields required to be filled out before unblocking the step. Available input field types: `text`, `select` | `blocked_state` | The state that the build is set to when the build is blocked by this block step. The default is `passed`. When the `blocked_state` of a block step is set to `failed`, the step that triggered it will be stuck in the `running` state until it is manually unblocked. If you're using GitHub, you can also [configure which GitHub status](/docs/pipelines/source-control/github#customizing-commit-statuses) to use for blocked builds on a per-pipeline basis. _Default:_ `passed` _Values:_ `passed`, `failed`, `running` | `allowed_teams` | A list of teams that are permitted to unblock this step, whose values are a list of one or more team slugs or IDs. If this field is specified, a user must be a member of one of the teams listed in order to unblock. 
The use of `allowed_teams` replaces the need for write access to the pipeline, meaning a member of an allowed team with read-only access may unblock the step. Learn more about this attribute in the [Permissions](#permissions) section. _Example:_ `["deployers", "approvers", "b50084ea-4ed1-405e-a204-58bde987f52b"]` | `branches` | The [branch pattern](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples) defining which branches will include this block step in their builds. _Example:_ `"main stable/*"` | `if` | A boolean expression that omits the step when false. See [Using conditionals](/docs/pipelines/configure/conditionals) for supported expressions. _Example:_ `build.message != "skip me"` | `depends_on` | A list of step keys that this step depends on. This step will only proceed after the named steps have completed. See [managing step dependencies](/docs/pipelines/configure/depends-on) for more information. _Example:_ `"test-suite"` | `key` | A unique string to identify the block step. Keys cannot have the same pattern as a UUID (`xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`). _Example:_ `"test-suite"` _Aliases:_ `identifier`, `id` | `allow_dependency_failure` | Whether to continue to proceed past this step if any of the steps named in the `depends_on` attribute fail. _Default:_ `false` ```yaml steps: - block: "\:rocket\: Release!" ``` ##### Text field attributes > 📘 Line endings > A text field normalizes line endings to Unix format (`\n`). Required attributes: | `key` | The meta-data key that stores the field's input (using the [buildkite-agent meta-data command](/docs/agent/cli/reference/meta-data)). The key may only contain alphanumeric characters, slashes, dashes, or underscores. _Example:_ `"release-name"` ```yaml steps: - block: "Request Release" fields: - text: "Code Name" key: "release-name" ``` Optional attributes: | `text` | The text input name. _Example:_ `"Release Name"` | `hint` | The explanatory text that is shown after the label. 
_Example:_ `"What's the code name for this release? \:name_badge\:"` | `required` | A boolean value that defines whether the field is required for form submission. _Default:_ `true` | `default` | The value that is pre-filled in the text field. _Example:_ `"Flying Dolphin"` | `format` | A regular expression used for [input validation](#input-validation) that indicates invalid input. _Example:_ `"[a-zA-Z]+"` ```yaml steps: - block: "Request Release" fields: - text: "Code Name" key: "release-name" hint: "What's the code name for this release? \:name_badge\:" required: false default: "Release #" ``` ##### Select field attributes Required attributes: | `key` | The meta-data key that stores the field's input (using the [buildkite-agent meta-data command](/docs/agent/cli/reference/meta-data)). The key may only contain alphanumeric characters, slashes, dashes, or underscores. _Example:_ `"release-stream"` | `options` | The list of select field options. For six or fewer options they'll be displayed as radio buttons, otherwise they'll be displayed in a dropdown box. If selecting multiple options is permitted, the options will be displayed as checkboxes. ```yaml steps: - block: "Request Release" fields: - select: "Stream" key: "release-stream" options: - label: "Beta" value: "beta" - label: "Stable" value: "stable" ``` Optional attributes: | `hint` | The text displayed directly under the select field's label. _Example:_ `"Which release stream does this belong in? \:fork\:"` | `required` | A boolean value that defines whether the field is required for form submission. When this value is set to `false` and users can only select one option, the options display in a dropdown menu, regardless of how many options there are. _Default:_ `true` | `multiple` | A boolean value that defines whether multiple options may be selected. When multiple options are selected, they are delimited in the meta-data field by a comma (`,`). 
_Default:_ `false` | `default` | The value of the option or options that will be pre-selected. When `multiple` is enabled, this can be an array of values to select by default. _Example:_ `"beta"` ```yaml steps: - block: "Deploy To" fields: - select: "Regions" key: "deploy-regions" hint: "Which regions should we deploy this to? \:earth_asia\:" required: true multiple: true default: - "na" - "eur" - "asia" - "aunz" options: - label: "North America" value: "na" - label: "Europe" value: "eur" - label: "Asia" value: "asia" - label: "Oceania" value: "aunz" ``` Each select option has the following _required_ attributes: | `label` | The text displayed for the option. _Example:_ `"Stable"` | `value` | The value to be stored as meta-data (to be later retrieved using the [buildkite-agent meta-data command](/docs/agent/cli/reference/meta-data)). _Example:_ `"stable"` ##### Permissions To unblock a block step, a user must either have write access to the pipeline or, where the [`allowed_teams` attribute](#block-step-attributes) is specified, belong to one of the allowed teams. When `allowed_teams` is specified, a user who has write access to the pipeline but is not a member of any of the allowed teams will not be permitted to unblock the step. The `allowed_teams` attribute serves as a useful way to restrict unblock permissions to a subset of users without restricting the ability to create builds. Conversely, it is also useful for granting unblock permissions to users _without_ also granting the ability to create builds. ```yml - block: "Release" prompt: "Fill out the details for release" allowed_teams: - "approvers" fields: - text: "Release Name" key: "release-name" ``` ##### Passing block step data to other steps Before you can do anything with the values from a block step, you need to store the data using the Buildkite meta-data store.
Use the `key` attribute in your block step to store values from the text or select fields in meta-data: ```yaml steps: - block: "Request Release" fields: - text: "Code Name" key: "release-name" ``` You can access the stored meta-data after the block step has passed. Use the `buildkite-agent meta-data get` command to retrieve your data: ```shell buildkite-agent meta-data get "release-name" ``` > 🚧 > Meta-data cannot be interpolated directly from the `pipeline.yml` at runtime. The meta-data store can only be accessed from within a command step. In the below example, the script uses the `buildkite-agent` meta-data command to retrieve the meta-data and print it to the log: ```bash #!/bin/bash RELEASE_NAME="$(buildkite-agent meta-data get "release-name")" echo "Release name: $RELEASE_NAME" ``` ###### Passing meta-data to trigger steps When passing meta-data values to trigger steps you need to delay adding the trigger step to the pipeline until after the block step has completed; this can be done using [dynamic pipelines](/docs/agent/cli/reference/pipeline), and works around the lack of runtime meta-data interpolation. You can modify a trigger step to dynamically upload itself to a pipeline as follows: 1. Move your trigger step from your `pipeline.yml` file into a script. The below example script is stored in a file named `.buildkite/trigger-deploy.sh`: ```bash #!/bin/bash set -euo pipefail # Set up a variable to hold the meta-data from your block step RELEASE_NAME="$(buildkite-agent meta-data get "release-name")" # Create a pipeline with your trigger step PIPELINE="steps: - trigger: \"deploy-pipeline\" label: \"Trigger deploy\" build: meta_data: release-name: $RELEASE_NAME " # Upload the new pipeline and add it to the current build echo "$PIPELINE" | buildkite-agent pipeline upload ``` 1. 
Replace the old trigger step in your `pipeline.yml` with a dynamic pipeline upload: _Before_ the `pipeline.yml` file with the trigger step: ```yaml steps: - block: "\:shipit\:" fields: - text: "Code Name" key: "release-name" - trigger: "deploy-pipeline" label: "Trigger Deploy" ``` _After_ the `pipeline.yml` file dynamically uploading the trigger step: ```yaml steps: - block: "\:shipit\:" fields: - text: "Code Name" key: "release-name" - command: ".buildkite/trigger-deploy.sh" label: "Prepare Deploy Trigger" ``` The command step added in the above example will upload the trigger step and add it to the end of your pipeline at runtime. In the pipeline you're triggering, you will be able to use the meta-data that you have passed through as if it were set during the triggered build. ##### Meta-data validation handling When using block steps with form fields, it's important to understand how the `required` and `default` attributes interact with meta-data validation. Setting `required: false` only affects the UI by making the field appear optional and allowing users to submit the form with an empty value. However, the meta-data key will still be created in the build's meta-data store. If you also set `default: ""`, the meta-data key will exist with an empty string value. This is important to remember, as some `buildkite-agent` commands (for example, `buildkite-agent meta-data set`) will reject empty or whitespace-only values and fail at runtime. Recommended approach: - Set the field `required: true` (no default), or - Keep the field optional (`required: false`) but provide a non-empty default. ##### Input validation To prevent users from entering invalid text values in block steps (for example, to gather some deployment information), you can use input validation. If you associate a regular expression with a field, the field outline will turn red when an invalid value is entered. To implement input validation, use the following sample syntax: ```yaml steps: - block: "Click me!" 
fields: - text: "Must be hexadecimal" key: hex format: "[0-9a-f]+" ``` The `format` must be a regular expression implicitly anchored to the beginning and end of the input and is functionally equivalent to the [HTML5 pattern attribute](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/pattern). ##### Block steps interacting with wait steps If a block step follows or precedes a wait step in your build, the wait step will be ignored and only the block step will run, as in this example: ```yml steps: - command: ".buildkite/steps/yarn" - wait: ~ - block: "unblock me" ``` But let's consider a different example. Now the wait step (with `continue_on_failure: true`) will be ignored, but the block step will _also not run_, because the 'previous' command step failed. ```yml steps: - command: "exit -1" - wait: ~ continue_on_failure: true - block: "unblock me" ``` If you need to run a block step after a failed step, set [`soft_fail`](/docs/pipelines/configure/soft-fail) on the failing step: ```yml steps: - command: "exit -1" soft_fail: - exit_status: "*" - block: "unblock me" ``` Alternatively, it is possible to use a wait step with a block step if conditionals are used. In these cases, the wait step must come before the block step: ```yml steps: - command: ".buildkite/steps/yarn" - wait: if: build.source == "schedule" - block: "Deploy changes?" if: build.branch == pipeline.default_branch && build.source != "schedule" - command: ".buildkite/scripts/deploy" if: build.branch == pipeline.default_branch ``` --- ### Input step URL: https://buildkite.com/docs/pipelines/configure/step-types/input-step #### Input step An _input_ step is used to collect information from a user. An input step is functionally identical to a [block step](/docs/pipelines/configure/step-types/block-step), except that an input step doesn't create any [dependencies](/docs/pipelines/configure/depends-on) on the steps before and after it.
Input steps block your build from completing, but do not automatically block other steps from running unless they specifically depend upon it. An input step can be defined in your pipeline settings, or in your [pipeline.yml](/docs/pipelines/configure/defining-steps) file. ```yaml steps: - input: "Information please" fields: - text: "What is the date today?" key: "todays-date" ``` You can add form fields to input steps by adding a `fields` attribute. There are two field types available: `text` or `select`. The `select` input type displays differently depending on how you configure the options. If you allow users to select multiple options, those options display as checkboxes. If users are required to select only one option from six or fewer, those options display as radio buttons. Seven or more options display as a dropdown menu. The data you collect from these fields is available to subsequent steps through the [build meta-data](/docs/pipelines/configure/build-meta-data) command. In this example, the pipeline defines an input step with the key `name`. The Bash script then accesses the value of the step using the [meta-data](/docs/agent/cli/reference/meta-data) command. ```yaml - input: "Who is running this script?" fields: - text: "Your name" key: "name" - label: "Run script" command: script.sh ``` ```bash NAME=$(buildkite-agent meta-data get name) ``` For an example pipeline, see the [Input step example pipeline](https://github.com/buildkite/input-step-example) on GitHub: [:pipeline: Input Step Example Pipeline github.com/buildkite/input-step-example](https://github.com/buildkite/input-step-example) > 🚧 Don't store sensitive data in input steps > You shouldn't use input steps to store sensitive information like secrets because the data will be stored in build metadata. ##### Input step attributes Input and block steps have the same attributes available for use. 
Optional attributes: | `prompt` | The instructional message displayed in the dialog box when the step is activated. _Example:_ `"Release to production?"` _Example:_ `"Fill out the details for this release"` | `fields` | A list of input fields required to be filled out before the step will be marked as passed. Available input field types: `text`, `select` | `allowed_teams` | A list of teams that are permitted to complete this step, whose values are a list of one or more team slugs or IDs. If this field is specified, a user must be a member of one of the teams listed in order to submit a value to complete this step. The use of `allowed_teams` replaces the need for write access to the pipeline, meaning a member of an allowed team with read-only access may complete the step. Learn more about this attribute in the [Permissions](#permissions) section. _Example:_ `["deployers", "approvers", "b50084ea-4ed1-405e-a204-58bde987f52b"]` | `branches` | The [branch pattern](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples) defining which branches will include this input step in their builds. _Example:_ `"main stable/*"` | `if` | A boolean expression to restrict the running of the step. See [Using conditionals](/docs/pipelines/configure/conditionals) for supported expressions. _Example:_ `build.message != "skip me"` | `depends_on` | A list of step keys that this step depends on. This step will only proceed after the named steps have completed. See [managing step dependencies](/docs/pipelines/configure/depends-on) for more information. _Example:_ `"test-suite"` | `blocked_state` | The state that the build is set to when the build is blocked by this block step. If you're using GitHub, you can also [configure which GitHub status](/docs/pipelines/source-control/github#customizing-commit-statuses) to use for blocked builds on a per-pipeline basis. 
_Default:_ `passed` _Values:_ `passed`, `failed`, `running` | `key` | A unique string to identify the input step. Keys cannot have the same pattern as a UUID (`xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`). _Example:_ `"test-suite"` _Aliases:_ `identifier`, `id` | `allow_dependency_failure` | Whether to continue to proceed past this step if any of the steps named in the `depends_on` attribute fail. _Default:_ `false` ```yaml steps: - input: "\:rocket\: Release!" ``` ##### Text field attributes > 📘 Line endings > A text field normalizes line endings to Unix format (`\n`). Required attributes: | `key` | The meta-data key that stores the field's input (using the [buildkite-agent meta-data command](/docs/agent/cli/reference/meta-data)). The key may only contain alphanumeric characters, slashes, dashes, or underscores. _Example:_ `"release-name"` ```yaml steps: - input: "Release information" fields: - text: "Code Name" key: "release-name" ``` Optional attributes: | `text` | The text input name. _Example:_ `"Release Name"` | `hint` | The explanatory text that is shown after the label. _Example:_ `"What's the code name for this release? \:name_badge\:"` | `required` | A boolean value that defines whether the field is required for form submission. _Default value:_ `true` | `default` | The value that is pre-filled in the text field. _Example:_ `"Flying Dolphin"` | `format` | A regular expression used for [input validation](#input-validation) that indicates invalid input. _Example:_ `"[a-zA-Z]+"` ```yaml steps: - input: "Request Release" fields: - text: "Code Name" key: "release-name" hint: "What's the code name for this release? \:name_badge\:" required: false default: "Release #" ``` ##### Select field attributes Required attributes: | `key` | The meta-data key that stores the field's input (using the [buildkite-agent meta-data command](/docs/agent/cli/reference/meta-data)). The key may only contain alphanumeric characters, slashes, dashes, or underscores. 
_Example:_ `"release-stream"` | `options` | The list of select field options. For six or fewer options they'll be displayed as radio buttons, otherwise they'll be displayed in a dropdown box. If selecting multiple options is permitted, the options will be displayed as checkboxes. ```yaml steps: - input: "Request Release" fields: - select: "Stream" key: "release-stream" options: - label: "Beta" value: "beta" - label: "Stable" value: "stable" ``` Optional attributes: | `hint` | The text displayed directly under the select field's label. _Example:_ `"Which release stream does this belong in? \:fork\:"` | `required` | A boolean value that defines whether the field is required for form submission. When this value is set to `false` and users can only select one option, the options display in a dropdown menu, regardless of how many options there are. _Default:_ `true` | `multiple` | A boolean value that defines whether multiple options may be selected. When multiple options are selected, they are delimited in the meta-data field by a comma (`,`). _Default:_ `false` | `default` | The value of the option or options that will be pre-selected. When `multiple` is enabled, this can be an array of values to select by default. _Example:_ `"beta"` ```yaml steps: - input: "Deploy To" fields: - select: "Regions" key: "deploy-regions" hint: "Which regions should we deploy this to? \:earth_asia\:" required: true multiple: true default: - "na" - "eur" - "asia" - "aunz" options: - label: "North America" value: "na" - label: "Europe" value: "eur" - label: "Asia" value: "asia" - label: "Oceania" value: "aunz" ``` Each select option has the following _required_ attributes: | `label` | Descriptive text displayed for the option. _Example:_ `"Stable"` | `value` | The value to be stored as meta-data (to be later retrieved using the [buildkite-agent meta-data command](/docs/agent/cli/reference/meta-data)). 
_Example:_ `"stable"` ##### Permissions To complete an input step, a user must either have write access to the pipeline, or where the [`allowed_teams` attribute](#input-step-attributes) is specified, the user must belong to one of the allowed teams. When `allowed_teams` is specified, a user who has write access to the pipeline but is not a member of any of the allowed teams will not be permitted to complete the step. The `allowed_teams` attribute serves as a useful way to restrict input permissions to a subset of users without restricting the ability to create builds. Conversely, this attribute is also useful for granting input permissions to users _without_ also granting the ability to create builds. ```yml - input: "Release" prompt: "Fill out the details for release" allowed_teams: - "approvers" fields: - text: "Release Name" key: "release-name" ``` ##### Input validation To prevent users from entering invalid text values in input steps (for example, to gather some deployment information), you can use input validation. If you associate a regular expression with a field, the field outline will turn red when an invalid value is entered. To do this, use the following syntax: ```yaml steps: - input: "Click me!" fields: - text: "Must be hexadecimal" key: hex format: "[0-9a-f]+" ``` The `format` must be a regular expression implicitly anchored to the beginning and end of the input and is functionally equivalent to the [HTML5 pattern attribute](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/pattern). --- ### Trigger step URL: https://buildkite.com/docs/pipelines/configure/step-types/trigger-step #### Trigger step A _trigger_ step creates a build on another pipeline. You can use trigger steps to separate your test and deploy pipelines, or to create build dependencies between pipelines. 
A trigger step can be defined in your pipeline settings, or in your [pipeline.yml](/docs/pipelines/configure/defining-steps) file, by setting the `trigger` attribute to the [slug of the pipeline you want to trigger](#trigger). ```yml steps: - trigger: deploy-pipeline ``` ##### Permissions All builds created by a trigger step will have the same author as the parent build. This user must: * be a member of your organization * have a verified email address If you have [Teams](/docs/platform/team-management/permissions) enabled in your organization, *one* of the following conditions must be met: * The authoring user must have 'Build' permission on *every* pipeline that will be triggered * The triggering build has no creator and no unblocker, *and* the source pipeline and the target pipeline share a team that can 'Build' If neither condition is true, the build will fail, and builds on subsequent pipelines will not be triggered. If you use bot users (unregistered users who are not part of any team) to trigger pipelines, make sure there is a shared team with build permission on both the parent and child pipelines. If your triggering pipelines are started by an API call or a webhook, it might not be clear whether the triggering user has access to the triggered pipeline, which can cause your build to fail. To prevent that from happening, make sure that all of your GitHub user accounts that are triggering builds are [connected to Buildkite accounts](/docs/pipelines/source-control/github#connecting-buildkite-and-github). > 📘 Pipeline triggering > Pipelines associated with one [cluster](/docs/pipelines/glossary#cluster) cannot trigger pipelines associated with another cluster, unless a [rule](/docs/pipelines/security/clusters/rules) has been created to explicitly allow triggering between pipelines in different clusters. ##### Trigger step attributes Required attributes: | `trigger` | The slug of the pipeline on which to create a build. 
You can find it in the URL of your pipeline, and it corresponds to the name of the pipeline, converted to [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case). _Example:_ `"another-pipeline"` Optional attributes: | `build` | An optional map of attributes for the triggered build. Available attributes: `branch`, `commit`, `env`, `message`, `meta_data` | `label` | The label that will be displayed in the pipeline visualization in Buildkite. Supports emoji. _Example:_ `"\:rocket\: Deploy"` _Alias:_ `name` | `async` | If set to `true` the step will immediately continue, regardless of the success of the triggered build. If set to `false` the step will wait for the triggered build to complete and continue only if the triggered build passed. Note that when `async` is set to `true`, as long as the triggered build starts, the original pipeline will show that as successful. The original pipeline does not get updated after subsequent steps or after the triggered build completes. _Default:_ `false` | `branches` | The [branch pattern](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples) defining which branches will include this step in their builds. _Example:_ `"main stable/*"` | `if` | A boolean expression that omits the step when false. See [Using conditionals](/docs/pipelines/configure/conditionals) for supported expressions. _Example:_ `build.message != "skip me"` | `depends_on` | A list of step keys that this step depends on. This step will only run after the named steps have completed. See [managing step dependencies](/docs/pipelines/configure/depends-on) for more information. _Example:_ `"test-suite"` | `key` | A unique string to identify the trigger step. Keys can not have the same pattern as a UUID (`xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`). _Example:_ `"trigger-deploy"` _Aliases:_ `identifier`, `id` | `allow_dependency_failure` | Whether to continue to run this step if any of the steps named in the `depends_on` attribute fail. 
_Default:_ `false` | `skip` | Whether to skip this step or not. Passing a string provides a reason for skipping this command. Passing an empty string is equivalent to `false`. Note: Skipped steps will be hidden in the pipeline view by default, but can be made visible by toggling the 'Skipped jobs' icon. _Example:_ `true` _Example:_ `false` _Example:_ `"My reason"` | `soft_fail` | When `true`, failure of the triggered build will not cause the triggering build to fail. _Default:_ `false` | `parallelism` | The number of parallel triggered builds to create. When set, Buildkite creates multiple triggered builds from a single trigger step. Each triggered build receives a `BUILDKITE_PARALLEL_JOB` environment variable (0-based index) and `BUILDKITE_PARALLEL_JOB_COUNT` (total number of parallel builds). _Example:_ `3` Optional `build` attributes: | `message` | The message for the build. Supports emoji. _Default:_ the label of the trigger step. _Example:_ `"Triggered build"` | `commit` | The commit hash for the build. _Default:_ `"HEAD"` _Example:_ `"ca82a6d"` | `branch` | The branch for the build. _Default:_ The triggered pipeline's default branch. _Example:_ `"production"` | `meta_data` | A map of [meta-data](/docs/pipelines/configure/build-meta-data) for the build. _Example:_ `release-version: "1.1"` | `env` | A map of [environment variables](/docs/pipelines/configure/environment-variables) for the build. _Example:_ `RAILS_ENV: "test"` ```yml - trigger: "data-generator" label: "\:package\: Generate data" build: meta_data: release-version: "1.1" ``` ###### Example: triggering parallel builds ```yml steps: - trigger: "data-generator" label: "\:package\: Generate data" parallelism: 3 build: meta_data: release-version: "1.1" ``` This creates three builds on the `data-generator` pipeline, each with a different `BUILDKITE_PARALLEL_JOB` value (0, 1, 2). 
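To make use of these variables in the triggered pipeline, a job can shard its work by index. The following is a minimal, hypothetical sketch (the file names are illustrative only; a real pipeline would substitute its own work items) showing how each parallel triggered build can use its 0-based index to pick a disjoint slice of the work:

```shell
#!/bin/bash
set -eu

# These are set by Buildkite in each parallel triggered build;
# the defaults here are only so the sketch runs standalone.
JOB="${BUILDKITE_PARALLEL_JOB:-0}"
COUNT="${BUILDKITE_PARALLEL_JOB_COUNT:-3}"

# Hypothetical work items: a real pipeline would list its own.
files=(alpha.dat bravo.dat charlie.dat delta.dat echo.dat)

# Each build processes the items whose index modulo the build
# count equals its own index.
for i in "${!files[@]}"; do
  if [ $(( i % COUNT )) -eq "$JOB" ]; then
    echo "build ${JOB}: processing ${files[$i]}"
  fi
done
# With JOB=0 and COUNT=3, this processes alpha.dat and delta.dat.
```

Because the three triggered builds are independent, each one sees only its own `BUILDKITE_PARALLEL_JOB` value, so together they cover the full list exactly once.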
##### Agent-applied attributes These attributes are _only_ applied by the Buildkite agent when uploading a pipeline (`buildkite-agent pipeline upload`), since they require direct access to your code or repository to process correctly. > 🚧 > Agent-applied attributes are not accepted in pipelines set using the Buildkite interface. ###### if_changed | `if_changed` | A [glob pattern](/docs/pipelines/configure/glob-pattern-syntax) that omits the step from a build if it does not match any files changed in the build. _Example:_ `"{**.go,go.mod,go.sum,fixtures/**}"` From version 3.109.0 of the Buildkite agent, `if_changed` also supports lists of glob patterns and `include` and `exclude` attributes. _Minimum Buildkite agent versions:_ 3.99 (with `--apply-if-changed` flag), 3.103.0 (enabled by default), 3.109.0 (expanded syntax) For an example pipeline, demonstrating various forms of `if_changed`, see [Using `if_changed`](/docs/pipelines/configure/dynamic-pipelines/if-changed). ##### Environment variables You can use [environment variable substitution](/docs/agent/cli/reference/pipeline#environment-variable-substitution) to set attribute values: ```yml - trigger: "app-deploy" label: "\:rocket\: Deploy" branches: "main" async: true build: message: "${BUILDKITE_MESSAGE}" commit: "${BUILDKITE_COMMIT}" branch: "${BUILDKITE_BRANCH}" ``` To pass through pull request information to the triggered build, pass through the branch and pull request environment variables: ```yml - trigger: "app-sub-pipeline" label: "Sub-pipeline" build: message: "${BUILDKITE_MESSAGE}" commit: "${BUILDKITE_COMMIT}" branch: "${BUILDKITE_BRANCH}" env: BUILDKITE_PULL_REQUEST: "${BUILDKITE_PULL_REQUEST}" BUILDKITE_PULL_REQUEST_BASE_BRANCH: "${BUILDKITE_PULL_REQUEST_BASE_BRANCH}" BUILDKITE_PULL_REQUEST_REPO: "${BUILDKITE_PULL_REQUEST_REPO}" ``` > 📘 BUILDKITE_PULL_REQUEST in triggered builds > If `BUILDKITE_PULL_REQUEST` is set, the agent will check out the corresponding pull request ref (that is, 
`refs/pull/ID/head`) instead of the branch specified by `BUILDKITE_BRANCH`. > This behavior is part of the agent's checkout logic, and is intended to support builds from pull requests. However, such behavior may be unexpected in triggered builds where `BUILDKITE_PULL_REQUEST` is passed for reporting purposes only. > To pass pull request metadata to a triggered build without affecting the code checkout, use a custom environment variable name (for example, `MONOREPO_PULL_REQUEST` instead of `BUILDKITE_PULL_REQUEST`). To set environment variables on the build created by the trigger step, use the `env` attribute: ```yml - trigger: "release-binaries" label: "\:package\: Release" build: env: RELEASE_STREAM: "${RELEASE_STREAM:-stable}" ``` ##### Triggering specific steps in a pipeline While you cannot trigger only a specific step in a pipeline, you can use [conditionals](/docs/pipelines/configure/conditionals) or [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) to achieve a similar effect. 
An example using conditionals might look like this: * Testing whether [BUILDKITE_SOURCE](/docs/pipelines/configure/environment-variables) equals `'trigger_job'` to find out if the build was triggered by a trigger step * Testing [BUILDKITE_TRIGGERED_FROM_BUILD_PIPELINE_SLUG](/docs/pipelines/configure/environment-variables#BUILDKITE_TRIGGERED_FROM_BUILD_PIPELINE_SLUG) to find out which pipeline triggered the build * Custom [environment variables](#environment-variables) passed to the triggered build In the target pipeline, to run the command step only if the build was triggered by a specific pipeline, you might use something like this: ```yml steps: - command: ./scripts/tests.sh if: build.source == 'trigger_job' && build.env('BUILDKITE_TRIGGERED_FROM_BUILD_PIPELINE_SLUG') == 'the-triggering-pipeline' ``` If you instead want the command step to run only when the build was _not_ triggered by another pipeline, set the opposite condition on the steps that shouldn't run in triggered builds: ```yml steps: - command: ./scripts/tests.sh if: build.source != 'trigger_job' ``` ##### Canceling intermediate builds and triggers When using trigger steps that target pipelines with the **Cancel Intermediate Builds** setting enabled, it's important to understand how this setting interacts with triggered builds. If a triggered build is canceled because the **Cancel Intermediate Builds** setting is enabled, the corresponding trigger step will be marked as "skipped" in the triggering build. ###### Multiple triggered builds for the same pipeline When multiple pipeline builds (for instance, multiple builds running as a result of a single commit) trigger builds in other pipelines, you can enable the **Cancel Intermediate Builds** feature to allow only the newest build to run, thereby reducing unnecessary, duplicated pipeline builds. For example, assume a scenario with three pipelines—**Pipeline A**, **Pipeline B**, and **Pipeline C**. 
A commit that runs **Pipeline A** triggers a build on **Pipeline B**. The same commit runs **Pipeline C**, which also triggers a build on **Pipeline B**. When **Cancel Intermediate Builds**: * _Is enabled_, the **Pipeline B** build that was triggered _first_ (by whichever pipeline triggered it) is _canceled_, and the newest triggered **Pipeline B** build is allowed to run. * _Is not enabled_, **Pipeline B** will run twice, as it will be triggered by both **Pipeline A** and **Pipeline C** without cancellation. Regardless of whether or not **Cancel Intermediate Builds** is enabled, if either **Pipeline A** or **Pipeline C** is manually canceled before their trigger steps have run, then the **Pipeline B** build triggered by the canceled pipeline will not run, and **Pipeline B** will only run once (triggered by the other, non-canceled pipeline). --- ### Group step URL: https://buildkite.com/docs/pipelines/configure/step-types/group-step #### Group step A group step can contain various sub-steps, and display them in a single logical group on the Build page. For example, you can group all of your linting steps or all of your UI test steps to keep the Build page organized. Sub-groups and nested groups are not supported. The group step also helps manage dependencies between a collection of steps, for example, "step X" [`depends_on`](/docs/pipelines/configure/step-types/group-step#group-step-attributes) everything in "group Y". A group step can be defined in your pipeline settings or your [pipeline.yml](/docs/pipelines/configure/defining-steps) file. 
Here is an example of using the group step: ```yml steps: - group: "\:lock_with_ink_pen\: Security Audits" key: "audits" steps: - label: "\:brakeman\: Brakeman" command: ".buildkite/steps/brakeman" - label: "\:bundleaudit\: Bundle Audit" command: ".buildkite/steps/bundleaudit" - label: "\:yarn\: Yarn Audit" command: ".buildkite/steps/yarn" - label: "\:yarn\: Outdated Check" command: ".buildkite/steps/outdated" ``` Only the first 50 jobs in a build can be displayed in the build header, so you might not always see all of your groups there. However, the jobs still run and appear on the build page. ##### Group step attributes Required attributes: | `group` | Name of the group in the UI. In YAML, if you don't want a label, pass a `~`. Can also be provided in the `label` or `name` attribute if `null` is provided to the `group` attribute. If multiple are specified, `group` takes precedence. _Type:_ `string` or `null` | `steps` | A list of steps in the group; at least 1 step is required. Allowed step types: [wait](/docs/pipelines/configure/step-types/wait-step), [trigger](/docs/pipelines/configure/step-types/trigger-step), [command/commands](/docs/pipelines/configure/step-types/command-step), [block](/docs/pipelines/configure/step-types/block-step), [input](/docs/pipelines/configure/step-types/input-step). _Type:_ `array` Optional attributes: | `allow_dependency_failure` | Whether to continue to run this step if any of the steps named in the `depends_on` attribute fail. _Default:_ `false` | `depends_on` | A list of step or group keys that this step depends on. This step or group will only run after the named steps have completed. See [managing step dependencies](/docs/pipelines/configure/depends-on) for more information. _Example:_ `"test-suite"` | `if` | A boolean expression that omits the step when false. See [Using conditionals](/docs/pipelines/configure/conditionals) for supported expressions. 
_Example:_ `build.message != "skip me"` | `key` | A unique string to identify the step, block, or group. Keys can not have the same pattern as a UUID (`xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`). When set on a group step, all jobs within the group will include a `group_key` field in [REST API builds endpoint](/docs/apis/rest-api/builds) responses with this value, allowing you to identify which jobs belong to this group. _Example:_ `"test-suite"` _Aliases:_ `identifier`, `id` | `label` | The label that will be displayed in the pipeline visualization in Buildkite (name of the group in the UI). Supports emoji. _Example:_ `"\:hammer\: Tests" will be rendered as ":hammer: Tests"` _Alias:_ `name` | `notify` | Allows you to [trigger](/docs/pipelines/configure/notify) build notifications to different services. You can also choose to conditionally send notifications based on pipeline events. _Example:_ `"github_commit_status:"` | `skip` | Whether to skip this step or not. Passing a string provides a reason for skipping this command. Passing an empty string is equivalent to `false`. Note: Skipped steps will be hidden in the pipeline view by default, but can be made visible by toggling the 'Skipped jobs' icon. _Example:_ `true` _Example:_ `false` _Example:_ `"My reason"` ##### Agent-applied attributes These attributes are _only_ applied by the Buildkite agent when uploading a pipeline (`buildkite-agent pipeline upload`), since they require direct access to your code or repository to process correctly. > 🚧 > Agent-applied attributes are not accepted in pipelines set using the Buildkite interface. ###### if_changed | `if_changed` | A [glob pattern](/docs/pipelines/configure/glob-pattern-syntax) that omits the step from a build if it does not match any files changed in the build. _Example:_ `"{**.go,go.mod,go.sum,fixtures/**}"` From version 3.109.0 of the Buildkite agent, `if_changed` also supports lists of glob patterns and `include` and `exclude` attributes. 
_Minimum Buildkite agent versions:_ 3.99 (with the `--apply-if-changed` flag), 3.103.0 (enabled by default), 3.109.0 (expanded syntax) For an example pipeline demonstrating various forms of `if_changed`, see [Using `if_changed`](/docs/pipelines/configure/dynamic-pipelines/if-changed). ##### Parallel groups If you put two or more group steps in a YAML config file consecutively, they will run in parallel. For example: ```yml # 1.sh and 3.sh will start at the same time. # 2.sh will start when 1.sh finishes, and 4.sh will start # when 3.sh finishes. steps: - group: "first" steps: - command: "1.sh" - wait - command: "2.sh" - group: "second" steps: - command: "3.sh" - wait - command: "4.sh" ``` Running jobs in parallel has some limitations: * Parallel groups will be displayed ungrouped if the build's jobs are truncated, because Buildkite doesn't currently store or calculate any information about the number of jobs in a non-parallel group. * If a parallel step exists within a group, its parallel jobs are treated as regular jobs within the group, so you can't have parallel groups within step groups. For example, a `group` that contains two `steps` each with `parallel: 4` will display eight jobs in it, with no visual indication that those eight jobs are two parallel steps. * If a parallel job group is within a named group, the groups are handled as though the parallel group isn't there. * You can't place only some of a parallel step's jobs within a group, as they're all created from the same YAML step entry. ##### Using wait steps in job groups You can have [wait steps](/docs/pipelines/configure/step-types/wait-step) in a group. Such steps operate independently of other groups. For example, both groups here operate independently, meaning `d.sh` won't wait on `a.sh` to finish. Note also that wait steps are counted in the group step total, so both `Group01` and `Group02` contain 3 steps. 
```yml steps: - group: "Group01" depends_on: "tests" steps: - command: "a.sh" - wait - command: "b.sh" - group: "Group02" depends_on: "tests" id: "toast" steps: - command: "c.sh" - wait - command: "d.sh" - command: "yay.sh" depends_on: "toast" ``` ##### Group merging If you upload a pipeline that has a `group` or `label` that matches the group of the step that uploaded it, those groups will be merged together in the Buildkite UI. This merging behavior only applies if the group step with the matching `group` or `label` is the first step within the uploaded pipeline. Note that inside a single pipeline, groups with the same `group` or `label` will not be merged in the Buildkite UI. > 📘 You can't define the same key twice > Trying to create different groups or steps with the same `key` attribute will result in an error. For example, you have a YAML file: ```yml steps: - group: "Setup" steps: - commands: - "buildkite-agent pipeline upload" - echo "start" ``` And this YAML file uploads a pipeline that has a group with the same name: ```yml steps: - group: "Setup" steps: - command: "docker build" ``` These groups will be merged into one in the UI, and the `docker build` step will be added to the existing group. Similarly, if you have a YAML file: ```yml steps: - group: ~ label: "Setup" steps: - commands: - "buildkite-agent pipeline upload" - echo "start" ``` And this YAML file uploads a pipeline that has a group with the same label: ```yml steps: - group: ~ label: "Setup" steps: - command: echo "proceed" ``` These groups will be merged into one in the UI, and the `echo "proceed"` step will be added to the existing group. 
##### Example [:buildkite: Group steps An example of how to group steps in a pipeline github.com/buildkite/group-step-example](https://github.com/buildkite/group-step-example) --- ### Overview URL: https://buildkite.com/docs/pipelines/configure/dynamic-pipelines #### Dynamic pipelines When your source code projects are built with Buildkite Pipelines, you can write scripts in the same language as your source code, or another suitable language, that generate new Buildkite pipeline steps (in either YAML or JSON format), which you can then upload to the same pipeline using the [pipeline upload step](/docs/pipelines/configure/defining-steps#step-defaults-pipeline-dot-yml-file). These additional _dynamically generated_ pipeline steps run on the same Buildkite agent, as part of the same pipeline build, and appear as their own steps in your pipeline builds. This provides you with the flexibility to structure your pipelines however you require. For example, the following code snippet is an executable shell script that generates a list of parallel test steps based upon the `test/*` directories within your repository: ```bash #!/bin/bash # exit immediately on failure, or if an undefined variable is used set -eu # begin the pipeline.yml file echo "steps:" # add a new command step to run the tests in each test directory for test_dir in test/*/; do echo "  - command: \"run_tests ${test_dir}\"" done ``` To use this script, save it to the `.buildkite/` directory inside your repository (that is, `.buildkite/pipeline.sh`), ensure the script file is executable, and then update your pipeline upload step to use the new script: ```bash .buildkite/pipeline.sh | buildkite-agent pipeline upload ``` When the pipeline's build commences, this step executes the script and pipes the output to the `buildkite-agent pipeline upload` command. The upload command then inserts the steps from the script into the build immediately after this upload step. 
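To preview what such a generator emits before wiring it into a pipeline, you can run the same loop against a sample layout. The sketch below builds a throwaway `test/` tree in a temporary directory (the `unit` and `integration` suite names are illustrative only):

```shell
#!/bin/bash
set -eu

# Create a sample repository layout in a temporary directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/test/unit" "$tmp/test/integration"
cd "$tmp"

# The same loop as the generator script above.
echo "steps:"
for test_dir in test/*/; do
  echo "  - command: \"run_tests ${test_dir}\""
done
# Prints:
# steps:
#   - command: "run_tests test/integration/"
#   - command: "run_tests test/unit/"
```

Piping this output to `buildkite-agent pipeline upload` would add one command step per test directory, in the glob's lexicographic order (`test/integration/` before `test/unit/`).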
> 📘 Step ordering in the Buildkite interface > If you run the pipeline upload step multiple times in a _single command step_ (for example, by running a script file from a command step, in which the script runs the pipeline upload step multiple times), then each batch of uploaded steps will appear in reverse order in the Buildkite interface, such as the **Pipeline** view (in the sidebar) or **Table** view of the [new build page](/docs/pipelines/build-page), as well as the **Jobs** view of the classic build page, since the upload command inserts its steps immediately after the upload step. > To avoid each of your dynamically-generated pipeline upload steps appearing in reverse order, define each of these upload steps in reverse order—that is, the steps being run as part of an upload step that you want to run first should be defined last. Alternatively, you can define explicit dependencies using the `depends_on` field. In the following `pipeline.yml` example, when the build runs, it will execute the `.buildkite/pipeline.sh` script, then the test steps from the script will be added to the build _before_ the wait step and command step. After the test steps have run, the wait and command step will run. ```yml steps: - command: .buildkite/pipeline.sh | buildkite-agent pipeline upload label: "\:pipeline\: Upload" - wait - command: "other-script.sh" label: "Run other operations" ``` ##### Dynamic pipeline templates If you need the ability to use pipelines from a central catalog, or enforce certain configuration rules, you can either use dynamic pipelines and the [`pipeline upload`](/docs/agent/cli/reference/pipeline#uploading-pipelines) command to make this happen or [write custom plugins](/docs/pipelines/integrations/plugins) and share them across your organization. 
To use dynamic pipelines and the pipeline upload command, you'd make a pipeline that looks something like this: ```yml steps: - command: enforce-rules.sh | buildkite-agent pipeline upload label: "\:pipeline\: Upload" ``` Each team defines their steps in `team-steps.yml`. Your templating logic is in `enforce-rules.sh`, which can be written in any language that can pass YAML to the pipeline upload. In `enforce-rules.sh` you can add steps to the YAML, require certain versions of dependencies or plugins, or implement any other logic you can program. Depending on your use case, you might want to get `enforce-rules.sh` from an external catalog instead of committing it to the team repository. See how [Hasura.io](https://hasura.io) used [dynamic templates and pipelines](https://hasura.io/blog/what-we-learnt-by-migrating-from-circleci-to-buildkite/#dynamic-pipelines) to replace their YAML configuration with Go and some shell scripts. ##### Buildkite SDK Learn more about the Buildkite SDK, which makes it easy to script the generation of steps for dynamic pipelines, on the [Buildkite SDK](/docs/pipelines/configure/dynamic-pipelines/sdk) page. --- ### Using if_changed URL: https://buildkite.com/docs/pipelines/configure/dynamic-pipelines/if-changed #### Using if_changed The `if_changed` attribute accepts a [glob pattern](/docs/pipelines/configure/glob-pattern-syntax) and omits a step from a build if the pattern does not match any files changed in the build. For example: `**.go,go.mod,go.sum,fixtures/**`. This feature allows you to detect changes in the repository and only build what changed. When enabled, steps containing an `if_changed` attribute are evaluated against the Git diff. If the `if_changed` glob pattern matches no files changed in the build, the step is skipped. 
`if_changed` can be used as an attribute of [command](/docs/pipelines/configure/step-types/command-step#agent-applied-attributes-if-changed), [group](/docs/pipelines/configure/step-types/group-step#agent-applied-attributes-if-changed), and [trigger](/docs/pipelines/configure/step-types/trigger-step#agent-applied-attributes-if-changed) steps. You can also use the [agent CLI](/docs/agent/cli/reference/pipeline#apply-if-changed) options on the Buildkite agent's [pipeline upload command](/docs/agent/cli/reference/pipeline) to control how the agent processes the [`if_changed` attribute](/docs/pipelines/configure/step-types/command-step#agent-applied-attributes-if-changed) in your pipeline steps. > 🚧 > `if_changed` is an agent-applied attribute, and such attributes are not accepted in pipelines set using the Buildkite interface. The attribute is only applied by the Buildkite agent when uploading a pipeline (`buildkite-agent pipeline upload`), since it requires direct access to your code or repository to process correctly. The minimum Buildkite agent version required for using `if_changed` is 3.99 (with the `--apply-if-changed` flag). Starting with Buildkite agent version 3.103.0, this feature is enabled by default. From version 3.109.0 of the Buildkite agent, `if_changed` also supports lists of glob patterns and `include` and `exclude` attributes. ##### Monorepo workflows The `if_changed` feature is particularly useful for monorepo workflows, providing built-in change detection without requiring the monorepo-diff plugin. This can eliminate an extra pipeline generation cycle ("spawn a job to spawn more jobs") and simplify your pipeline configuration. For example, in a monorepo with multiple services: ```yaml steps: - label: "Frontend tests" command: "npm test" if_changed: "frontend/**" - label: "Backend tests" command: "go test ./..." 
if_changed: - "backend/**" - "go.{mod,sum}" - label: "Documentation build" command: "make docs" if_changed: "docs/**" ``` For more details on monorepo strategies, see [Working with monorepos](/docs/pipelines/best-practices/working-with-monorepos). ##### How change detection works The `if_changed` feature compares files against a base reference to determine what has changed (conceptually, `git diff --merge-base <base>`). The agent resolves the comparison base by checking the following in order, using the first valid value: 1. The [`--git-diff-base`](/docs/agent/cli/reference/pipeline#git-diff-base) agent configuration flag or `BUILDKITE_GIT_DIFF_BASE` environment variable 1. `origin/$BUILDKITE_PULL_REQUEST_BASE_BRANCH` (automatically set on pull request builds) 1. `origin/$BUILDKITE_PIPELINE_DEFAULT_BRANCH` (the pipeline's configured default branch) 1. `origin/main` For example, to explicitly set the comparison base, configure `BUILDKITE_GIT_DIFF_BASE` in the environment of the job that runs `buildkite-agent pipeline upload`. Since `if_changed` is evaluated during the upload, not when steps run, this variable must be available in the upload job's environment rather than in individual step definitions. You can set this in the pipeline's initial command step (the one that performs the upload): ```yaml env: BUILDKITE_GIT_DIFF_BASE: "origin/develop" steps: - label: "Upload dynamic pipeline" command: "buildkite-agent pipeline upload .buildkite/pipeline.yml" ``` Where `.buildkite/pipeline.yml` contains steps with `if_changed`: ```yaml steps: - label: "Run if backend changed" command: "make test-backend" if_changed: "backend/**" ``` Alternatively, set it through [agent configuration](/docs/agent/cli/reference/pipeline#git-diff-base) using the `--git-diff-base` flag, or as an environment variable on the agent itself.
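The four-step resolution order above can be sketched as a simple first-match fallback (a hypothetical helper, not agent code; the environment variable names are the real ones from the list above):

```python
def resolve_diff_base(env: dict[str, str]) -> str:
    """Return the comparison base, mirroring the resolution order above."""
    if env.get("BUILDKITE_GIT_DIFF_BASE"):             # 1. explicit override, used as-is
        return env["BUILDKITE_GIT_DIFF_BASE"]
    if env.get("BUILDKITE_PULL_REQUEST_BASE_BRANCH"):  # 2. PR base branch (remote ref)
        return "origin/" + env["BUILDKITE_PULL_REQUEST_BASE_BRANCH"]
    if env.get("BUILDKITE_PIPELINE_DEFAULT_BRANCH"):   # 3. pipeline's default branch
        return "origin/" + env["BUILDKITE_PIPELINE_DEFAULT_BRANCH"]
    return "origin/main"                               # 4. final fallback

print(resolve_diff_base({"BUILDKITE_PULL_REQUEST_BASE_BRANCH": "develop"}))  # origin/develop
print(resolve_diff_base({}))                                                 # origin/main
```

Because an explicit `BUILDKITE_GIT_DIFF_BASE` wins over everything else, setting it in the upload job's environment (as shown above) is the most direct way to pin the comparison base.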
##### What happens when steps are skipped When the `if_changed` pattern doesn't match any changed files, the step is [skipped](/docs/pipelines/configure/depends-on#how-skipped-steps-affect-dependencies). In the Buildkite Pipelines interface: - This step appears in your build with a "skipped" status - The step's dependencies and dependents are handled appropriately - The overall build continues to the next steps This is similar to using a `skip` [attribute](/docs/pipelines/configure/step-types/command-step#command-step-attributes), but the decision is made dynamically based on file changes rather than being predetermined. ##### Glob pattern reference The `if_changed` feature uses the [zzglob](https://github.com/DrJosh9000/zzglob) pattern syntax, which is similar to standard glob patterns but with some differences. For complete pattern syntax details, see [Glob pattern syntax](/docs/pipelines/configure/glob-pattern-syntax). The key pattern features are: - `**` matches any number of directories - `*` matches any characters within a single path segment - `?` matches a single character - `{option1,option2}` matches either option (brace expansion) - Character classes like `[abc]` or `[0-9]` ##### Usage examples This section covers some examples that demonstrate various forms of the `if_changed` attribute. > 🚧 Common mistake with dynamic pipelines > When using dynamic pipelines, the `if_changed` attribute must be placed in the YAML file that is uploaded during the `buildkite-agent pipeline upload` command, _not_ in the step that performs the upload. This is necessary because the agent must have access to your repository when it processes the `if_changed` attribute during the `buildkite-agent pipeline upload` command. ###### Single glob pattern The simplest form of `if_changed` uses a single glob pattern to match files. 
This step only runs if any `.go` file anywhere in the repository changes: ```yaml steps: - label: "Only run if a .go file anywhere in the repo is changed" if_changed: "**.go" ``` > 📘 > YAML requires some strings containing special characters to be quoted. ###### Brace expansion for multiple patterns Braces `{,}` let you combine patterns and subpatterns within a single string. This step only runs if `go.mod` or `go.sum` changes: ```yaml steps: - label: "Only run if go.mod or go.sum are changed" if_changed: go.{mod,sum} ``` > 🚧 > This syntax is whitespace-sensitive. A space within a pattern is treated as part of the file path to be matched. For example, `go.{mod, sum}` would not work as expected. You can combine recursive patterns with brace expansion. This step runs if any Go-related file changes: ```yaml steps: - label: "Run if any Go-related file is changed" if_changed: "{**.go,go.{mod,sum}}" ``` This step runs for any changes within the `app/` or `spec/` directories: ```yaml steps: - label: "Run for any changes within app/ or spec/" if_changed: "{app/**,spec/**}" ``` ###### Pattern lists Starting with Buildkite agent version 3.109, lists of patterns are supported. If any changed file matches any of the patterns, the step runs. This provides a more readable alternative to brace expansion. This step runs if any Go-related file changes: ```yaml steps: - label: "Run if any Go-related file is changed" if_changed: - "**.go" - go.{mod,sum} ``` This step runs for any changes in the `app/` or `spec/` directories: ```yaml steps: - label: "Run for any changes in app/ or spec/" if_changed: - app/** - spec/** ``` ###### Include and exclude attributes Starting with Buildkite agent version 3.109, `include` and `exclude` attributes are supported. The `exclude` attribute prevents matching files from causing a step to run. When using `exclude`, the `include` attribute is required.
This step runs for changes in `spec/`, but not for changes in `spec/integration/`: ```yaml steps: - label: "Run for changes in spec/, but not in spec/integration/" if_changed: include: spec/** exclude: spec/integration/** ``` Both `include` and `exclude` can use pattern lists. This step runs for changes in `api/` or `internal/`, but excludes `api/docs/` and any `.py` files in `internal/`: ```yaml steps: - label: "Run for api and internal, but not api/docs or internal .py files" if_changed: include: - api/** - internal/** exclude: - api/docs/** - internal/**.py ``` ###### Conditional pipeline triggers You can use `if_changed` on trigger steps to conditionally trigger downstream pipelines: ```yaml steps: - label: "Trigger deployment pipeline" trigger: "deploy-production" if_changed: - "src/**" - "Dockerfile" - "deployment/**" build: message: "Deploy changes from ${BUILDKITE_BRANCH}" commit: "${BUILDKITE_COMMIT}" branch: "${BUILDKITE_BRANCH}" ``` ##### Advanced use cases for if_changed Starting with Buildkite agent version 3.115.0, you can provide a custom list of changed files instead of relying on Git diff. 
This is useful when: - Working with shallow clones where Git history is limited - Using external monorepo tools (such as [Bazel](/docs/pipelines/tutorials/bazel)) that have their own change detection - Integrating with CI systems that already compute changed files upstream - Working with non-git repositories Use the `--changed-files-path` flag or `BUILDKITE_CHANGED_FILES_PATH` environment variable: ```bash #### Generate changed files list (example with custom tooling) echo "src/main.go pkg/feature/handler.go README.md" > changed-files.txt #### Upload pipeline with custom changed files buildkite-agent pipeline upload --changed-files-path changed-files.txt ``` Or using the environment variable: ```yaml steps: - label: "\:pipeline\: Upload dynamic steps" command: | # Your custom change detection nx affected:apps --plain > changed-files.txt buildkite-agent pipeline upload env: BUILDKITE_CHANGED_FILES_PATH: "changed-files.txt" ``` The file format is a newline-separated list of file paths relative to the repository root. ##### Troubleshooting In this section, you can find some of the issues that you might run into when using the `if_changed` attribute and how to solve them. ###### Step still runs when it shouldn't 1. **Check the agent version**: Ensure you're running agent version 3.103.0+ (or version 3.99+ with the `--apply-if-changed` flag; see the [notes on agent version requirements](/docs/pipelines/configure/dynamic-pipelines/if-changed) at the start of this page). 1. **Verify pattern placement**: Make sure `if_changed` is in the correct YAML file (see the dynamic pipelines note above). 1. **Test the glob pattern**: The pattern is matched against file paths relative to the repository root. 1. **Check the comparison base**: The agent resolves the comparison base using a [specific order](/docs/pipelines/configure/dynamic-pipelines/if-changed#how-change-detection-works). Set `BUILDKITE_GIT_DIFF_BASE` if you need a different base.
###### Steps run unexpectedly after merging the default branch into a feature branch In monorepo workflows, developers often merge the default branch (for example, `main`) into feature branches to keep them up to date. After such a merge, `if_changed` may detect more changed files than expected, causing steps to run that otherwise wouldn't. This happens because the agent's local reference to the default branch (for example, `origin/main`) can be stale, pointing at an older commit than the actual current tip of the remote. Since `if_changed` uses `git merge-base` to find the common ancestor between the diff base and `HEAD`, an outdated local ref pushes the merge-base earlier in history, and the diff picks up extra files. To fix this, use the `--fetch-diff-base` flag or the `BUILDKITE_FETCH_DIFF_BASE` environment variable. Both options tell the agent to fetch the diff base ref from the remote before computing the diff. The `BUILDKITE_FETCH_DIFF_BASE` environment variable approach: ```yaml steps: - label: "\:pipeline\: Upload pipeline" command: "buildkite-agent pipeline upload" env: BUILDKITE_FETCH_DIFF_BASE: "true" ``` Or to pass the flag directly: ```bash buildkite-agent pipeline upload --fetch-diff-base ``` > 📘 > The `--fetch-diff-base` flag requires Buildkite agent version 3.117.0 or later. ###### Pattern doesn't match expected files 1. **Use the correct syntax**: `if_changed` patterns use [zzglob](/docs/pipelines/configure/glob-pattern-syntax) syntax, not bash globs or regular expressions. 1. **Mind the whitespace**: In brace expansions like `{mod,sum}`, spaces are treated as part of the pattern. 1. **Quote special characters**: In YAML, patterns starting with `*` or other special characters must be quoted. 1. **Test locally**: You can test patterns using `git diff --name-only origin/main` to see which files changed.
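The stale-ref effect described above can be simulated with a toy commit graph (a self-contained sketch, not agent code; commit names and file sets are invented for illustration):

```python
from collections import deque

# Toy history: the remote default branch advances c1 -> c2 -> c3, a feature
# branches at c2 (commit f1), then merges the true main tip c3 at commit m.
parents = {"c1": [], "c2": ["c1"], "c3": ["c2"], "f1": ["c2"], "m": ["f1", "c3"]}
touched = {"c1": set(), "c2": set(), "c3": {"main_only.py"}, "f1": {"feature.py"}, "m": set()}

def ancestors(commit):
    """All commits reachable from `commit`, including itself."""
    seen, queue = set(), deque([commit])
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(parents[node])
    return seen

def merge_base(base_ref, head):
    """First ancestor of head (breadth-first) that is also an ancestor of base_ref."""
    base_side = ancestors(base_ref)
    queue, seen = deque([head]), set()
    while queue:
        node = queue.popleft()
        if node in base_side:
            return node
        if node not in seen:
            seen.add(node)
            queue.extend(parents[node])
    return None

def diff_files(base_ref, head):
    """Files touched by commits reachable from head but not from the merge base."""
    base = merge_base(base_ref, head)
    changed = set()
    for commit in ancestors(head) - ancestors(base):
        changed |= touched[commit]
    return changed

# A stale local origin/main still points at c2, so the merge base falls back to
# c2 and the diff also picks up main's own change (main_only.py).
print(sorted(diff_files("c2", "m")))  # -> ['feature.py', 'main_only.py']
# After fetching, origin/main points at c3 and only the feature's change remains.
print(sorted(diff_files("c3", "m")))  # -> ['feature.py']
```

This is exactly the difference `--fetch-diff-base` makes: it moves the local default-branch ref from the stale commit to the current remote tip before the diff is computed.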
###### All steps run despite if_changed being set If the agent can't determine the changed files (for example, the comparison base branch doesn't exist in your repository, or you're working with a shallow clone that doesn't have the base branch), the agent disables `if_changed` and runs all steps normally, stripping the `if_changed` attributes. Check the agent logs for errors related to the git diff operation. Consider using `--changed-files-path` for shallow clone scenarios. ###### Agent shows "skipped" for all steps This can happen if no files actually changed between the current commit and the comparison base. Verify which base the agent is using by checking the [resolution order](/docs/pipelines/configure/dynamic-pipelines/if-changed#how-change-detection-works), and run `git diff --name-only --merge-base <base>` locally to confirm the diff is empty. If the base isn't what you expect, set `BUILDKITE_GIT_DIFF_BASE` explicitly. --- ### Buildkite SDK URL: https://buildkite.com/docs/pipelines/configure/dynamic-pipelines/sdk #### Buildkite SDK > 📘 > The Buildkite SDK feature is currently available as a preview. If you encounter any issues while using the Buildkite SDK, please raise them via a [GitHub Issue](https://github.com/buildkite/buildkite-sdk/issues). The [Buildkite SDK](https://github.com/buildkite/buildkite-sdk) is an open-source multi-language software development kit (SDK) that makes it easy to script the generation of pipeline steps for dynamic pipelines in native languages. The SDK has simple functions to output and serialize these pipeline steps to YAML or JSON format, which you can then upload to your Buildkite pipeline to execute as part of your pipeline build.
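Whichever language you choose below, the SDK's job is ultimately to serialize step objects into the YAML or JSON that `buildkite-agent pipeline upload` accepts. A plain-Python sketch of the same idea without the SDK, generating one step per service in a loop (the service names are illustrative):

```python
import json

# Generate one test step per service instead of hand-writing YAML.
services = ["api", "web", "worker"]  # illustrative
steps = [
    {
        "label": f"Test {service}",
        "command": f"make test-{service}",
        "if_changed": f"{service}/**",
    }
    for service in services
]

# pipeline upload accepts JSON as well as YAML, so this output can be
# piped straight into `buildkite-agent pipeline upload`.
print(json.dumps({"steps": steps}, indent=2))
```

The SDKs add typed step objects and serialization helpers on top of this pattern, which catches malformed steps before upload rather than at build time.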
Currently, the Buildkite SDK supports the following languages: - [JavaScript and TypeScript (Node.js)](#javascript-and-typescript-node-dot-js) - [Python](#python) - [Go](#go) - [Ruby](#ruby) - [C#](#c-sharp) Each of the **Installing** sub-sections below assumes that your local environment already has the required language tools installed. ##### JavaScript and TypeScript (Node.js) This section explains how to install and use the Buildkite SDK for JavaScript and TypeScript ([Node.js](https://nodejs.org/en)-based) projects. ###### Installing To install the Buildkite SDK for [Node.js](https://nodejs.org/en) to your local development environment, run this command: ```bash npm install @buildkite/buildkite-sdk ``` ###### Using The following code example demonstrates how to import the Buildkite SDK into a simple TypeScript script, which then generates a Buildkite Pipelines step for a simple [command step](/docs/pipelines/configure/step-types/command-step) that runs `echo 'Hello, world!'`, and then outputs this step to either JSON or YAML format: ```typescript const { Pipeline } = require("@buildkite/buildkite-sdk"); const pipeline = new Pipeline(); pipeline.addStep({ command: "echo 'Hello, world!'", }); // JSON output // console.log(pipeline.toJSON()); // YAML output console.log(pipeline.toYAML()); ``` When you're ready to upload your output JSON or YAML steps to Buildkite Pipelines, you can do so from a currently running pipeline step: ```yaml #### For example, in your pipeline's Settings > Steps, and with ts-node installed to your agent: steps: - label: "\:pipeline\: Run dynamic pipeline steps" command: ts-node .buildkite/dynamicPipeline.ts | buildkite-agent pipeline upload ``` ###### API documentation For more detailed API documentation on the Buildkite SDK for TypeScript, consult the [Buildkite SDK's TypeScript API documentation](https://buildkite.com/docs/sdk/typescript/). ##### Python This section explains how to install and use the Buildkite SDK for Python projects.
###### Installing To install the Buildkite SDK for Python (with [uv](https://docs.astral.sh/uv/)) to your local development environment, run this command: ```bash uv add buildkite-sdk ``` ###### Using The following code example demonstrates how to import the Buildkite SDK into a simple Python script, which then generates a Buildkite Pipelines step for a simple [command step](/docs/pipelines/configure/step-types/command-step) that runs `echo 'Hello, world!'`, and then outputs this step to either JSON or YAML format: ```python from buildkite_sdk import Pipeline pipeline = Pipeline() pipeline.add_step({"command": "echo 'Hello, world!'"}) #### JSON output #### print(pipeline.to_json()) #### YAML output print(pipeline.to_yaml()) ``` When you're ready to upload your output JSON or YAML steps to Buildkite Pipelines, you can do so from a currently running pipeline step: ```yaml #### For example, in your pipeline's Settings > Steps: steps: - label: "\:pipeline\: Run dynamic pipeline steps" command: python3 .buildkite/dynamic_pipeline.py | buildkite-agent pipeline upload ``` ###### API documentation For more detailed API documentation on the Buildkite SDK for Python, consult the [Buildkite SDK's Python API documentation](https://buildkite.com/docs/sdk/python/). ##### Go This section explains how to install and use the Buildkite SDK for [Go](https://go.dev/) projects.
###### Installing To install the Buildkite SDK for [Go](https://go.dev/) to your local development environment, run this command: ```bash go get github.com/buildkite/buildkite-sdk/sdk/go ``` ###### Using The following code example demonstrates how to import the Buildkite SDK into a simple Go script, which then generates a Buildkite Pipelines step for a simple [command step](/docs/pipelines/configure/step-types/command-step) that runs `echo 'Hello, world!'`, and then outputs this step to either JSON or YAML format: ```go package main import ( "fmt" "log" "github.com/buildkite/buildkite-sdk/sdk/go/sdk/buildkite" ) func main() { pipeline := buildkite.Pipeline{} pipeline.AddStep(buildkite.CommandStep{ Command: &buildkite.CommandStepCommand{ String: buildkite.Value("echo 'Hello, world!'"), }, }) // JSON output // json, err := pipeline.ToJSON() // if err != nil { // log.Fatalf("Failed to serialize JSON: %v", err) // } // fmt.Println(json) // YAML output yaml, err := pipeline.ToYAML() if err != nil { log.Fatalf("Failed to serialize YAML: %v", err) } fmt.Println(yaml) } ``` When you're ready to upload your output JSON or YAML steps to Buildkite Pipelines, you can do so from a currently running pipeline step: ```yaml #### For example, in your pipeline's Settings > Steps: steps: - label: "\:pipeline\: Run dynamic pipeline steps" command: go run .buildkite/dynamic_pipeline.go | buildkite-agent pipeline upload ``` ###### API documentation For more detailed API documentation on the Buildkite SDK for Go, consult the [Buildkite SDK's Go API documentation](https://pkg.go.dev/github.com/buildkite/buildkite-sdk/sdk/go). ##### Ruby This section explains how to install and use the Buildkite SDK for [Ruby](https://www.ruby-lang.org/en/) projects.
###### Installing To install the Buildkite SDK for [Ruby](https://www.ruby-lang.org/en/) to your local development environment, run this command: ```bash gem install buildkite-sdk ``` ###### Using The following code example demonstrates how to import the Buildkite SDK into a simple Ruby script, which then generates a Buildkite Pipelines step for a simple [command step](/docs/pipelines/configure/step-types/command-step) that runs `echo 'Hello, world!'`, along with a [label](/docs/pipelines/configure/step-types/command-step#label) attribute, and then outputs this step to either JSON or YAML format: ```ruby require "buildkite" pipeline = Buildkite::Pipeline.new pipeline.add_step( label: "some-label", command: "echo 'Hello, World!'" ) #### JSON output #### puts pipeline.to_json #### YAML output puts pipeline.to_yaml ``` When you're ready to upload your output JSON or YAML steps to Buildkite Pipelines, you can do so from a currently running pipeline step: ```yaml #### For example, in your pipeline's Settings > Steps: steps: - label: "\:pipeline\: Run dynamic pipeline steps" command: ruby .buildkite/dynamic_pipeline.rb | buildkite-agent pipeline upload ``` ###### API documentation For more detailed API documentation on the Buildkite SDK for Ruby, consult the [Buildkite SDK's Ruby API documentation](https://buildkite.com/docs/sdk/ruby/). ##### C Sharp This section explains how to install and use the Buildkite SDK for [C#](https://learn.microsoft.com/en-us/dotnet/csharp/) (.NET) projects. 
###### Installing To install the Buildkite SDK for [.NET](https://dotnet.microsoft.com/) to your local development environment, run this command: ```bash dotnet add package Buildkite.Sdk ``` ###### Using The following code example demonstrates how to import the Buildkite SDK into a simple C# script, which then generates a Buildkite pipeline with a build [command step](/docs/pipelines/configure/step-types/command-step), a [wait step](#c-sharp-wait-steps), and a test command step, and then outputs these steps to either JSON or YAML format: ```csharp using Buildkite.Sdk; using Buildkite.Sdk.Schema; var pipeline = new Pipeline(); pipeline.AddStep(new CommandStep { Label = "\:hammer\: Build", Command = "dotnet build" }); pipeline.AddStep(new WaitStep()); pipeline.AddStep(new CommandStep { Label = "\:test_tube\: Test", Command = "dotnet test" }); // JSON output for `buildkite-agent pipeline upload` // Console.WriteLine(pipeline.ToJson()); // YAML output for `buildkite-agent pipeline upload` Console.WriteLine(pipeline.ToYaml()); ``` When you're ready to upload your output JSON or YAML steps to Buildkite Pipelines, you can do so from a currently running pipeline step: ```yaml #### For example, in your pipeline's Settings > Steps: steps: - label: "\:pipeline\: Run dynamic pipeline steps" command: dotnet run --project .buildkite/DynamicPipeline.csproj | buildkite-agent pipeline upload ``` Also included in this section are examples of how to use the Buildkite SDK for C# with other step types, including a more complex [command step](#c-sharp-command-steps), a [block step](#c-sharp-block-steps), [wait step](#c-sharp-wait-steps), [trigger step](#c-sharp-trigger-steps), and [group step](#c-sharp-group-steps), as well as [environment variables](#c-sharp-environment-variables). 
###### Command steps This code example demonstrates a more complex [command step](/docs/pipelines/configure/step-types/command-step) with additional options: ```csharp pipeline.AddStep(new CommandStep { Label = "\:dotnet\: Build", Key = "build", Command = "dotnet build --configuration Release", Agents = new AgentsObject { ["queue"] = "linux" }, TimeoutInMinutes = 30 }); ``` ###### Block steps This code example demonstrates how to implement a [block step](/docs/pipelines/configure/step-types/block-step): ```csharp pipeline.AddStep(new BlockStep { Block = "\:rocket\: Deploy to Production?", Prompt = "Are you sure?" }); ``` ###### Wait steps This code example demonstrates how to implement a [wait step](/docs/pipelines/configure/step-types/wait-step): ```csharp pipeline.AddStep(new WaitStep()); pipeline.AddStep(new WaitStep { ContinueOnFailure = true }); ``` ###### Trigger steps This code example demonstrates how to implement a [trigger step](/docs/pipelines/configure/step-types/trigger-step): ```csharp pipeline.AddStep(new TriggerStep { Trigger = "deploy-pipeline", Build = new TriggerBuild { Branch = "main" } }); ``` ###### Group steps This code example demonstrates how to implement a [group step](/docs/pipelines/configure/step-types/group-step): ```csharp pipeline.AddStep(new GroupStep { Group = "\:test_tube\: Tests", Steps = new List<CommandStep> { new CommandStep { Label = "Unit", Command = "dotnet test" }, new CommandStep { Label = "Integration", Command = "dotnet test --filter Integration" } } }); ``` ###### Environment variables This code example demonstrates how to access [environment variables](/docs/pipelines/configure/environment-variables): ```csharp using Buildkite.Sdk; var branch = EnvironmentVariable.Branch; var commit = EnvironmentVariable.Commit; var buildNumber = EnvironmentVariable.BuildNumber; ``` ##### Developing the Buildkite SDK Since the Buildkite SDK is open source, you can make your own contributions to this SDK.
Learn more about how to do this from the [Buildkite SDK's README](https://github.com/buildkite/buildkite-sdk?tab=readme-ov-file#buildkite-sdk). --- ### Annotations URL: https://buildkite.com/docs/pipelines/configure/annotations #### Annotations Buildkite Pipelines' annotations feature lets you add custom content to a build page (known as _build annotations_), which you can [create](#create-a-build-annotation) from your pipeline steps or using the REST or GraphQL APIs. Build annotations appear on the build page's main **Annotations** tab. See [Build page](/docs/pipelines/build-page) for more information about navigating this interface. You can also add annotations to individual jobs (known as _job-scoped annotations_), which you can [create](#create-a-job-scoped-annotation) from your relevant pipeline steps or using the REST or GraphQL APIs. Adding annotations can be useful for a variety of purposes, such as summarizing a build's job results to make them easier to read, for example, presenting key failure components in a failed step's job execution. ##### Create a build annotation Build annotations can be created from [within a build's job](#create-a-build-annotation-from-within-a-builds-job), as well as externally using Buildkite's [REST API](#create-a-build-annotation-externally-using-the-rest-api) and [GraphQL API](#create-a-build-annotation-externally-using-the-graphql-api). There is no limit to the number of annotations you can create, but the maximum body size of each annotation is 1MiB. The size is measured in bytes, accounting for the underlying data encoding, where the specific encoding used can affect the size calculation. For example, with UTF-8 encoding, some characters may be encoded using up to 4 bytes each. ###### From within a build's job To create an annotation from within a build's job, use the [`buildkite-agent annotate` command](/docs/agent/cli/reference/annotate#creating-an-annotation) within the step definition for this job.
For example, a step like this: ```yaml steps: - label: "\:writing_hand\: Example" command: | cat <<EOF | buildkite-agent annotate --style "info" ### Example annotation This annotation was created from within the build's job. EOF ``` creates a build annotation when the job runs. ###### Externally using the REST API To create a build annotation using the [REST API](/docs/apis/rest-api), run the following example `curl` command: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/annotations" \ -H "Content-Type: application/json" \ -d '{ "body": "### Example annotation\n\nThis was created using the REST API.", "style": "info", "context": "rest-api-example" }' ``` where: - `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite. - `{org.slug}` can be obtained: * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite. * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations" ``` - `{pipeline.slug}` can be obtained: * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation > your specific pipeline. * By running the [List pipelines](/docs/apis/rest-api/pipelines#list-pipelines) REST API query to obtain this value from `slug` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines" ``` - `{build.number}` can be obtained: * From the number after `builds/` in your Buildkite URL, after accessing **Pipelines** in the global navigation > your specific pipeline > your specific pipeline build. * By running the [List builds for a pipeline](/docs/apis/rest-api/builds#list-builds-for-a-pipeline) REST API query to obtain this value from `number` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds" ``` - For more information on how to use the `body`, `style`, and `context` fields, see [Formatting annotations](#formatting-annotations) for details on how to use these fields in relation to how they're used by the `buildkite-agent annotate` command. ###### Externally using the GraphQL API To [create a build annotation](/docs/apis/graphql/schemas/mutation/buildannotate) using the [GraphQL API](/docs/apis/graphql-api), run the following example mutation: ```graphql mutation { buildAnnotate(input: { buildID: "build-id", body: "### Example annotation\n\nThis was created using the GraphQL API.", style: INFO, context: "graphql-api-example" }) { annotation { uuid style context body { html } } } } ``` where: - `buildID` (required) can be obtained by running the [Get builds](/docs/apis/graphql/cookbooks/builds#get-builds-for-a-pipeline) GraphQL API query and obtaining this value from the `id` in the response associated with the number of your build (specified by the `number` value in the response).
For example: ```graphql query GetBuilds { pipeline(slug: "organization-slug/pipeline-slug") { builds(first: 10) { edges { node { id number url } } } } } ``` **Tip:** You can associate the build number with the annotation on the Buildkite interface by accessing **Pipelines** in the global navigation > your specific pipeline > your specific pipeline build, and then checking the build number after `builds/` in your Buildkite URL. - `style` can be `DEFAULT`, `ERROR`, `INFO`, `SUCCESS` or `WARNING`. - For more information on how to use the `body`, `style`, and `context` fields, see [Formatting annotations](#formatting-annotations) for details on how to use these fields in relation to how they're used by the `buildkite-agent annotate` command. ##### Create a job-scoped annotation A build's _job-scoped_ annotations appear within the **Annotations** tab of a job's details page, rather than by default on the build page's main **Annotations** tab. This makes it easier to view contextual information directly alongside the specific job that generated the annotation. For more about navigating the build interface, see the [build page](/docs/pipelines/build-page) documentation. Use cases where job-scoped annotations are particularly useful: - Test failures specific to individual jobs in a test matrix - Job-specific deployment information or Terraform plans - Results from parallel jobs that need to be viewed separately - Build matrices where each job produces different output Job-scoped annotations can be created from [within a build's job](#create-a-job-scoped-annotation-from-within-a-builds-job), as well as externally using Buildkite's [REST API](#create-a-job-scoped-annotation-externally-using-the-rest-api) and [GraphQL API](#create-a-job-scoped-annotation-externally-using-the-graphql-api). > 📘 Requirements > Job-scoped annotations require Buildkite agent v3.112 or newer and are not available in the classic build page experience.
###### From within a build's job To create a job-scoped annotation from within a build's job, use the [`buildkite-agent annotate` command](/docs/agent/cli/reference/annotate#creating-an-annotation) within the step definition for this job, along with the `--scope job` flag for this command. For example, a step like this: ```yaml steps: - label: "\:writing_hand\: Example" command: | cat <<EOF | buildkite-agent annotate --style "info" --scope "job" ### Example job-scoped annotation This annotation appears on the job's details page. EOF ``` creates an annotation scoped to the job that ran the command. To annotate a different job, pass that job's UUID to the `--job` flag (for example, `--job $BUILDKITE_JOB_ID`). > 📘 > The `$BUILDKITE_JOB_ID` environment variable value can be obtained by running the [Get a build](/docs/apis/rest-api/builds#get-a-build) REST API query to obtain this value from the `id` of the relevant job in the `jobs` array of the response. To annotate a previously executed job from a subsequent job, use [`buildkite-agent meta-data`](/docs/agent/cli/reference/meta-data) to pass the job ID between steps. For example: ```yaml steps: - label: "First job" command: - "buildkite-agent meta-data set 'job_uuid' $$BUILDKITE_JOB_ID" - "buildkite-agent annotate --scope 'job' 'The first job'" key: "first_job" - label: "Second job" command: - "JOB_UUID=$$(buildkite-agent meta-data get 'job_uuid');" - "buildkite-agent annotate --scope 'job' --job $$JOB_UUID --append ' with more content'" depends_on: ["first_job"] ``` > 📘 > This method only works for steps that have already been executed. To annotate a pending step, use the [REST API](#create-a-job-scoped-annotation-externally-using-the-rest-api) or [GraphQL API](#create-a-job-scoped-annotation-externally-using-the-graphql-api) instead.
###### Externally using the REST API To create a job-scoped annotation using the [REST API](/docs/apis/rest-api), run the following example `curl` command: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/annotations" \ -H "Content-Type: application/json" \ -d '{ "body": "### Job-scoped annotation\n\nThis was created using the REST API.", "style": "info", "context": "job-rest-api-example" }' ``` where: - `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite. - `{org.slug}` can be obtained: * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite. * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations" ``` - `{pipeline.slug}` can be obtained: * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation > your specific pipeline. * By running the [List pipelines](/docs/apis/rest-api/pipelines#list-pipelines) REST API query to obtain this value from `slug` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines" ``` - `{build.number}` can be obtained: * From the number after `builds/` in your Buildkite URL, after accessing **Pipelines** in the global navigation > your specific pipeline > your specific pipeline build. * By running the [List builds for a pipeline](/docs/apis/rest-api/builds#list-builds-for-a-pipeline) REST API query to obtain this value from `number` in the response. 
For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds" ``` - `{job.id}` can be obtained by running the [Get a build](/docs/apis/rest-api/builds#get-a-build) REST API query to obtain this value from the `id` of the relevant job in the `jobs` array of the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}" ``` - For more information on how to use the `body`, `style`, and `context` fields, see [Formatting annotations](#formatting-annotations) for details on how to use these fields in relation to how they're used by the `buildkite-agent annotate` command. ###### Externally using the GraphQL API To [create a job-scoped annotation](/docs/apis/graphql/schemas/mutation/jobannotate) using the [GraphQL API](/docs/apis/graphql-api), run the following example mutation: ```graphql mutation { jobAnnotate(input: { jobId: "job-id", body: "### Example job-scoped annotation\n\nThis was created using the GraphQL API.", style: INFO, context: "graphql-api-example" }) { annotation { uuid scope style context body { html } } job { id } } } ``` where: - `jobId` (required) can be obtained by querying a build's jobs. For example: ```graphql query GetBuildJobs { build(slug: "organization-slug/pipeline-slug/build-number") { jobs(first: 10) { edges { node { ... on JobTypeCommand { id label } } } } } } ``` - `style` can be `DEFAULT`, `ERROR`, `INFO`, `SUCCESS` or `WARNING`. - For more information on how to use the `body`, `style`, and `context` fields, see [Formatting annotations](#formatting-annotations) for details on how to use these fields in relation to how they're used by the `buildkite-agent annotate` command. 
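When scripting these mutations, the request to Buildkite's GraphQL endpoint (`https://graphql.buildkite.com/v1`) is plain JSON over HTTPS. A Python sketch that builds the `jobAnnotate` request body using GraphQL variables rather than string interpolation (the input type name `JobAnnotateInput` follows the schema's naming convention; verify it against the mutation's schema page linked above):

```python
import json

def job_annotate_request(job_id: str, body: str,
                         style: str = "INFO", context: str = "default") -> str:
    """Build the JSON request body for the jobAnnotate mutation.

    Using GraphQL variables keeps the annotation body safe to include
    without escaping it into the query string.
    """
    mutation = """
    mutation JobAnnotate($input: JobAnnotateInput!) {
      jobAnnotate(input: $input) {
        annotation { uuid style context }
      }
    }
    """
    return json.dumps({
        "query": mutation,
        "variables": {
            "input": {"jobId": job_id, "body": body, "style": style, "context": context},
        },
    })

payload = job_annotate_request("job-id", "### Hello from a script")
# POST `payload` to https://graphql.buildkite.com/v1 with an
# `Authorization: Bearer $TOKEN` header (for example, via curl or urllib).
```

The same pattern applies to `buildAnnotate`, with `buildID` in place of `jobId`.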
##### Formatting annotations

Build annotations support a number of [styles](#formatting-annotations-annotation-styles), [Markdown syntaxes](#formatting-annotations-supported-markdown-syntax), and [CSS classes](#formatting-annotations-supported-css-classes). You can also [embed and link to artifacts](#formatting-annotations-embedding-and-linking-artifacts-in-annotations) from within your build annotations.

###### Annotation styles

You can change the visual style of annotations using the `--style` option. This example pipeline showcases the different annotation styles:

```yaml
steps:
  - label: "\:console\: Annotation Test"
    command: |
      buildkite-agent annotate 'Example `default` style' --context 'ctx-default'
      buildkite-agent annotate 'Example `info` style' --style 'info' --context 'ctx-info'
      buildkite-agent annotate 'Example `warning` style' --style 'warning' --context 'ctx-warn'
      buildkite-agent annotate 'Example `error` style' --style 'error' --context 'ctx-error'
      buildkite-agent annotate 'Example `success` style' --style 'success' --context 'ctx-success'
```

###### Supported Markdown syntax

Buildkite Pipelines uses CommonMark with GitHub Flavored Markdown extensions to provide consistent, unambiguous Markdown syntax. See GitHub's [Basic writing and formatting syntax](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) guide and the [GitHub Flavored Markdown Spec](https://github.github.com/gfm/) for details on this Markdown syntax.

Annotations do not support GitHub-style syntax highlighting, task lists, user mentions, or automatic links for references to issues, pull requests, or commits.

CommonMark supports HTML inside Markdown blocks, but will revert to Markdown parsing on newlines.
For more information about how HTML is parsed and which tags CommonMark supports, refer to the [CommonMark spec](https://spec.commonmark.org).

> 🚧 HTML limitations
> Annotations are sanitized for security, so only a subset of HTML tags are allowed — basic text formatting and link elements, code and preformatted blocks, headings, and list elements. Other tags, such as those for scripts, styles, and embedded frames, are stripped.
>
> Attributes are also restricted. The `class` attribute is allowed, but only for a specific allowlist of CSS class names (see [Supported CSS classes](#formatting-annotations-supported-css-classes) below). Link `href` values are limited to `http`, `https`, `mailto`, `itms-services`, and relative URL schemes.
>
> Inline styles (for example, `style="margin-top: 0;"`) are stripped. Some CSS classes may not work on certain HTML elements due to CSS specificity. Use the supported Basscss classes listed below instead.

###### Supported CSS classes

A number of CSS classes are accepted in annotations. These include a subset of layout and formatting controls based on [Basscss](#basscss), and [colored console output](#colored-console-output).

###### Basscss

[Basscss](http://basscss.com) is a toolkit of composable CSS classes which can be combined to accomplish many styling tasks.
Annotations in Buildkite Pipelines accept the following parts of version 8.0 of Basscss within annotations: - [Align](http://basscss.com/#basscss-align) - [Border](http://basscss.com/#basscss-border) - [Button](https://basscss.com/v7/docs/btn/) - [Background Colors](https://basscss.com/v7/docs/background-colors/) - [Colors](https://basscss.com/v7/docs/colors/) - [Flexbox](http://basscss.com/#basscss-flexbox) * All except `sm-flex`, `md-flex` and `lg-flex` - [Margin](http://basscss.com/#basscss-margin) - [Layout](http://basscss.com/#basscss-layout) * All except Floats (Please use Flexbox instead) - [Padding](http://basscss.com/#basscss-padding) - [Typography](http://basscss.com/#basscss-typography) * `bold`, `regular`, `italic`, `caps` * `left-align`, `center`, `right-align`, `justify` * `underline`, `truncate` * `list-reset` - [Type Scale](http://basscss.com/#basscss-type-scale) An exhaustive list of classes that annotations support can be found below: ``` bold regular italic caps underline left-align center right-align justify align-baseline align-top align-middle align-bottom list-reset truncate fit inline block inline-block table table-cell overflow-hidden overflow-scroll overflow-auto ml-auto mr-auto mx-auto flex flex-column flex-wrap flex-auto flex-none items-start items-end items-center items-baseline items-stretch self-start self-end self-center self-baseline self-stretch justify-start justify-end justify-center justify-between justify-around content-start content-end content-center content-between content-around content-stretch order-0 order-1 order-2 order-3 order-last border border-top border-right border-bottom border-left border-none rounded h1 h2 h3 h4 h5 h6 m0 mt0 mr0 mb0 ml0 mx0 my0 m1 mt1 mr1 mb1 ml1 mx1 my1 m2 mt2 mr2 mb2 ml2 mx2 my2 m3 mt3 mr3 mb3 ml3 mx3 my3 m4 mt4 mr4 mb4 ml4 mx4 my4 mxn1 mxn2 mxn3 mxn4 p0 pt0 pr0 pb0 pl0 px0 py0 p1 pt1 pr1 pb1 pl1 py1 px1 p2 pt2 pr2 pb2 pl2 py2 px2 p3 pt3 pr3 pb3 pl3 py3 px3 p4 pt4 pr4 pb4 pl4 py4 px4 col-1 
col-2 col-3 col-4 col-5 col-6 col-7 col-8 col-9 col-10 col-11 col-12
sm-col-1 sm-col-2 sm-col-3 sm-col-4 sm-col-5 sm-col-6 sm-col-7 sm-col-8 sm-col-9 sm-col-10 sm-col-11 sm-col-12
md-col-1 md-col-2 md-col-3 md-col-4 md-col-5 md-col-6 md-col-7 md-col-8 md-col-9 md-col-10 md-col-11 md-col-12
lg-col-1 lg-col-2 lg-col-3 lg-col-4 lg-col-5 lg-col-6 lg-col-7 lg-col-8 lg-col-9 lg-col-10 lg-col-11 lg-col-12
black gray silver white aqua blue navy teal green olive lime yellow orange red fuchsia purple maroon muted
btn btn-sm btn-lg btn-primary
bg-black bg-gray bg-silver bg-white bg-aqua bg-blue bg-navy bg-teal bg-green bg-olive bg-lime bg-yellow bg-orange bg-red bg-fuchsia bg-purple bg-maroon bg-muted
```

###### Colored console output

Console output in annotations can be displayed with ANSI colors when wrapped in a Markdown code block marked with the `term` or `terminal` syntax. There is a limit of 10 such blocks per annotation.

```term
\x1b[31mFailure/Error:\x1b[0m \x1b[32mexpect\x1b[0m(new_item.created_at).to eql(now)
\x1b[31m expected: 2018-06-20 19:42:26.290538462 +0000\x1b[0m
\x1b[31m      got: 2018-06-20 19:42:26.290538000 +0000\x1b[0m
\x1b[31m (compared using eql?)\x1b[0m
```

> 📘
> If you're echoing the annotation body to the terminal, make sure you escape the backticks that demarcate the code block, so the shell doesn't interpret them as command substitution.

The following pipeline prints an escaped Markdown block, adds line breaks using `\n`, and formats the word `test` using the red ANSI code `\033[0;31m` before resetting the remainder of the output with `\033[0m`. Passing `-e` to the `echo` command ensures that the backslash escape codes are interpreted (by default they are not).
```yaml
steps:
  - label: "Annotation Test"
    command:
      - echo -e "\`\`\`term\nThis is a \033[0;31mtest\033[0m\n\`\`\`" | buildkite-agent annotate
```

The results are piped through to the `buildkite-agent annotate` command.

For more complex annotations, pipe an entire file to the `buildkite-agent annotate` command:

```bash
printf '%b\n' "$(cat markdown-for-annotation.md)" | buildkite-agent annotate
```

If you're using our [terminal to HTML](http://buildkite.github.io/terminal-to-html/) tool, wrap the output in preformatted HTML tags, so it displays the terminal color styles but won't be processed again.

###### Embedding and linking artifacts in annotations

Uploaded artifacts can be embedded in annotations by referencing them using the `artifact://` prefix in your image source. For example:

```yaml
steps:
  - label: "\:console\: Annotation Test"
    command: |
      buildkite-agent artifact upload "indy.png"
      cat << EOF | buildkite-agent annotate --style "info"
        <img src="artifact://indy.png" alt="Uploaded artifact image" height="250">
      EOF
```

You can also link to uploaded artifacts as a shortcut to important files:

```yaml
steps:
  - label: "Upload Coverage Report"
    command: |
      buildkite-agent artifact upload "coverage/*"
      cat << EOF | buildkite-agent annotate --style "info"
        Read the <a href="artifact://coverage/index.html">uploaded coverage report</a>
      EOF
```

##### List annotations for a build

Annotations for a build can be retrieved using the [REST API](#list-annotations-for-a-build-using-the-rest-api) or [GraphQL API](#list-annotations-for-a-build-using-the-graphql-api).

###### Using the REST API

To [list annotations for a build](/docs/apis/rest-api/annotations#list-annotations-on-a-build) using the [REST API](/docs/apis/rest-api), run the following example `curl` command:

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/annotations"
```

where:

- `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite.
- `{org.slug}` can be obtained:
  * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite.
  * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example:

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X GET "https://api.buildkite.com/v2/organizations"
```

- `{pipeline.slug}` can be obtained:
  * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation > your specific pipeline.
  * By running the [List pipelines](/docs/apis/rest-api/pipelines#list-pipelines) REST API query to obtain this value from `slug` in the response. For example:

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines"
```

- `{build.number}` can be obtained:
  * From the number after `builds/` in your Buildkite URL, after accessing **Pipelines** in the global navigation > your specific pipeline > your specific pipeline build.
  * By running the [List builds for a pipeline](/docs/apis/rest-api/builds#list-builds-for-a-pipeline) REST API query to obtain this value from `number` in the response.
For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds" ``` ###### Using the GraphQL API To [list build and job-scoped annotations for a build](/docs/apis/graphql/schemas/object/annotation) using the [GraphQL API](/docs/apis/graphql-api), run the following example query: ```graphql query GetBuildAnnotations { build(uuid: "build-uuid") { number annotations(first: 10) { edges { node { uuid context style body { text html } createdAt } } } } } ``` where `build-uuid` (required) can be obtained by running the [Get builds](/docs/apis/graphql/cookbooks/builds#get-builds-for-a-pipeline) GraphQL API query and obtaining this value from the `uuid` in the response associated with the number of your build (specified by the `number` value in the response). For example: ```graphql query GetBuilds { pipeline(slug: "organization-slug/pipeline-slug") { builds(first: 10) { edges { node { uuid number } } } } } ``` ##### List annotations for a job Job-scoped annotations for a specific job can be retrieved using the [REST API](#list-annotations-for-a-job-using-the-rest-api) or [GraphQL API](#list-annotations-for-a-job-using-the-graphql-api). ###### Using the REST API To list job-scoped annotations for a specific job using the [REST API](/docs/apis/rest-api), run the following example `curl` command: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/annotations" ``` where: - `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite. - `{org.slug}` can be obtained: * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite. 
* By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations" ``` - `{pipeline.slug}` can be obtained: * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation > your specific pipeline. * By running the [List pipelines](/docs/apis/rest-api/pipelines#list-pipelines) REST API query to obtain this value from `slug` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines" ``` - `{build.number}` can be obtained: * From the number after `builds/` in your Buildkite URL, after accessing **Pipelines** in the global navigation > your specific pipeline > your specific pipeline build. * By running the [List builds for a pipeline](/docs/apis/rest-api/builds#list-builds-for-a-pipeline) REST API query to obtain this value from `number` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds" ``` - `{job.id}` can be obtained by running the [Get a build](/docs/apis/rest-api/builds#get-a-build) REST API query to obtain this value from the `id` of the relevant job in the `jobs` array of the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}" ``` ###### Using the GraphQL API To list job-scoped annotations for a specific job using the [GraphQL API](/docs/apis/graphql-api), run the following example query: ```graphql query GetJobAnnotations { job(uuid: "job-uuid") { ... 
on JobTypeCommand { annotations(first: 10) { edges { node { uuid scope context style body { text html } } } } } } } ``` where `job-uuid` is the UUID of the job, which can be obtained by querying a build's jobs. For example: ```graphql query GetBuildJobs { build(slug: "organization-slug/pipeline-slug/build-number") { jobs(first: 10) { edges { node { ... on JobTypeCommand { uuid label } } } } } } ``` ##### Remove an annotation Build and job-scoped annotations can be removed from [within a build's job](#remove-an-annotation-from-within-a-builds-job), as well as externally using the [REST API](#remove-an-annotation-externally-using-the-rest-api). Removing an annotation using the GraphQL API is not supported. ###### From within a build's job To remove a build or job-scoped annotation from within a build's job, use the [`buildkite-agent annotation remove` command](/docs/agent/cli/reference/annotation#removing-an-annotation) within the step definition for this job. For example: ```yaml steps: - label: "\:exploding-death-star\: Remove annotation" command: buildkite-agent annotation remove --context "agent-cli-example" ``` Removing annotations like this is the most common approach, as these steps run as part of your pipeline's builds. ###### Externally using the REST API To [remove a build or job-scoped annotation](/docs/apis/rest-api/annotations#delete-an-annotation-on-a-build) using the [REST API](/docs/apis/rest-api), run the following example `curl` command: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/annotations/{annotation.uuid}" ``` where: - `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite. 
- `{org.slug}` can be obtained: * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite. * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations" ``` - `{pipeline.slug}` can be obtained: * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation > your specific pipeline. * By running the [List pipelines](/docs/apis/rest-api/pipelines#list-pipelines) REST API query to obtain this value from `slug` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines" ``` - `{build.number}` can be obtained: * From the number after `builds/` in your Buildkite URL, after accessing **Pipelines** in the global navigation > your specific pipeline > your specific pipeline build. * By running the [List builds for a pipeline](/docs/apis/rest-api/builds#list-builds-for-a-pipeline) REST API query to obtain this value from `number` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds" ``` - `{annotation.uuid}` can be obtained by [listing annotations for a build](#list-annotations-for-a-build-using-the-rest-api) and extracting the `id` value from the response. This value is not available from the Buildkite interface. ##### Using annotations to report test results Annotations are a great way of rendering test failures that occur in different steps in a pipeline. 
The [junit-annotate plugin](https://github.com/buildkite-plugins/junit-annotate-buildkite-plugin) converts all the junit.xml artifacts in a build into a single annotation:

```yaml
steps:
  - command: test.sh
    parallelism: 50
    artifact_paths: tmp/junit-*.xml

  - wait: ~
    continue_on_failure: true

  - plugins:
      - junit-annotate#v2.7.0:
          artifacts: tmp/junit-*.xml
```

If you use Bazel as your build tool, see [Creating dynamic pipelines and build annotations using Bazel](/docs/pipelines/tutorials/dynamic-pipelines-and-annotations-using-bazel) for a tutorial on generating annotations from Bazel build events.

---

### Writing build scripts

URL: https://buildkite.com/docs/pipelines/configure/writing-build-scripts

#### Writing build scripts

One of the most common actions that Buildkite steps perform is running shell scripts. These scripts are checked in alongside your code and `pipeline.yml` file. The [Buildkite agent](/docs/agent) will run your scripts, capture and report the log output, and use the exit status to mark each job, as well as the overall build, as passed or failed.

##### Configuring Bash

The shell that runs your scripts in Buildkite is a clean Bash prompt with no settings. If you rely on anything from your `~/.bash_profile` or `~/.bashrc` files when you run scripts locally, you'll need to explicitly add the relevant items to your build scripts.

When writing Bash shell scripts, there are a number of options you can set to help prevent unexpected errors:

| Option | Description |
| --- | --- |
| `e` | Exit the script immediately if any command returns a non-zero exit status. |
| `u` | Exit the script immediately if an undefined variable is used (for example, `echo "$UNDEFINED_ENV_VAR"`). |
| `o pipefail` | Ensure Bash pipelines (for example, `cmd \| othercmd`) return a non-zero status if any of the commands fail, rather than returning the exit status of the last command in the pipeline. |
| `x` | Expand and print each command before executing it. |
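The `o pipefail` behavior described above can be demonstrated directly; this is a standalone sketch, runnable in any Bash, and not specific to Buildkite:

```shell
#!/bin/bash
# Without pipefail, a pipeline's exit status is that of its last command,
# so a failure early in the pipeline is silently swallowed.
bash -c 'false | true'
echo "without pipefail: $?"   # prints "without pipefail: 0"

# With pipefail, the pipeline fails if any command in it fails.
bash -c 'set -o pipefail; false | true'
echo "with pipefail: $?"      # prints "with pipefail: 1"
```

This is why `o pipefail` matters for build scripts: a test runner whose output is piped through a formatter (for example, `run_tests | tee log.txt`) would otherwise report success even when the tests fail.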
See [Debugging your environment](/docs/pipelines/configure/writing-build-scripts#debugging-your-environment) for more information.

Bash's built-in `set` command can be used to enable and disable options. For example, `set -u` enables the `u` option, and `set +u` disables it. You can also set multiple options at once; for example, `set -ue` enables both the `u` and `e` options. The following example enables the most commonly used options for build scripts:

```bash
#!/bin/bash
set -euo pipefail

run_tests
```

For a full list of options, see the [Bash reference manual](https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html).

> 🚧 Unbound variable errors
> While enabling the `u` option is generally a good default for all build scripts, it can cause some tools like [rvm](https://rvm.io) to fail with "unbound variable" errors. If you encounter these errors, you can either remove `u` from the list of options, or wrap the tool causing the error in `set +u` and `set -u` to disable the option for only that command. For example: `set +u; rvm xxx; set -u`.

##### Capturing exit status

Build scripts can sometimes contain commands that shouldn't affect the overall exit status. For example, take the following script:

```bash
#!/bin/bash

# Note that we don't enable the 'e' option, which would cause the script to
# immediately exit if 'run_tests' failed
set -uo pipefail

run_tests
clean_up
```

Running this script will exit with the status returned by the final command, `clean_up`. However, what we really care about is the exit status of the first command, `run_tests`. By using a variable to store the exit status of `run_tests`, we can run additional commands while still returning the original exit status.
For example:

```bash
#!/bin/bash

# Note that we don't enable the 'e' option, which would cause the script to
# immediately exit if 'run_tests' failed
set -uo pipefail

# Run the main command we're most interested in
run_tests

# Capture the exit status
TESTS_EXIT_STATUS=$?

# Run additional commands
clean_up

# Exit with the status of the original command
exit $TESTS_EXIT_STATUS
```

Using this technique gives you control over the exit code of your script, and therefore the final success or failure of your build job.

##### Debugging your environment

The first step in debugging your build script is to view the environment variables available to it from the Buildkite web interface.

There may be additional environment variables available in your build job that don't appear in this list, such as ones set by your [job lifecycle hooks](/docs/agent/hooks#job-lifecycle-hooks). To debug these, you can print them using `echo $SOME_VAR` before the command you want to run. For example:

```bash
#!/bin/bash

echo "$PATH"

some_command
```

If you require more environment information, you can execute `env` to print all the environment variable names and their values. If you use `env`, you should filter the output using a tool such as `grep` or `egrep` to ensure you don't leak private keys or other information.

> 🚧 Security recommendation
> If you use environment variables to define sensitive data such as API keys or Secret Access Keys, you should always filter the output of `env` to ensure you're not exposing any secrets in your build log.

For example, the following prints all environment variable names and values containing the words "git" or "node", using a case-insensitive search:

```bash
#!/bin/bash

env | grep -i -E 'git|node'

some_command
```

Enabling Bash's debug mode using `set -x` can also help to debug your build scripts. This debug output can be very noisy, so it's best to enable it just before the command you want to debug, and disable it straight after.
For example:

```bash
#!/bin/bash

set -x # Enable debugging
some_command
set +x # Disable debugging

some_other_command
```

For more information about the `x` option and debugging in general, see the [Bash Guide for Beginners' page on debugging Bash scripts](http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_02_03.html).

##### Help with linting and debugging

To check your shell scripts for common errors and mistakes, we highly recommend using a linting tool like [Shellcheck](https://www.shellcheck.net). Shellcheck is a shell script linter with a web-based front-end and a command line tool, and it integrates directly with most code editors.

For an explanation of a shell script snippet, the [explainshell](https://explainshell.com) tool is extremely useful. explainshell.com can tell you, in plain English, what a line of shell script does. It also integrates the man pages of common tools such as `ssh` and `git`.

##### Managing log output

If your script is generating output that is too large, there are several strategies you can employ to reduce the output or redirect the log. Take a look at our [managing log output guide](/docs/pipelines/configure/managing-log-output) for a step-by-step introduction.

---

### Using conditionals

URL: https://buildkite.com/docs/pipelines/configure/conditionals

#### Using conditionals

Using conditionals, you can run builds or steps only when specific conditions are met. Define [boolean conditions using C-like expressions](#variable-and-syntax-reference). You can define conditionals at the step level in your `pipeline.yml` or at the pipeline level in your Buildkite version control provider settings.

##### Conditionals in pipelines

You can have complete control over when to trigger pipeline builds by using conditional expressions to filter incoming webhooks. You need to define conditionals in the pipeline's **Settings** page for your repository provider to run builds only when expressions evaluate to `true`.
For example, to run builds only when a pull request targets the main branch, you could use the conditional expression `build.pull_request.base_branch == "main"`.

Pipeline-level build conditionals are evaluated before any other build trigger settings. If both a conditional and a branch filter are present, both filters must pass for a build to be created: first the pipeline-level limiting filter, and then the conditional filter.

Conditionals are supported in [Bitbucket](/docs/pipelines/source-control/bitbucket), [Bitbucket Server](/docs/pipelines/source-control/bitbucket-server), [GitHub](/docs/pipelines/source-control/github), [GitHub Enterprise](/docs/pipelines/source-control/github-enterprise), and [GitLab](/docs/pipelines/source-control/gitlab) (including GitLab Community and GitLab Enterprise). You can add a conditional on your pipeline's **Settings** page in the Buildkite interface or using the REST API.

> 📘 Evaluating conditionals
> Conditional expressions are evaluated at pipeline upload, not at step runtime.

##### Conditionals in steps

Use the `if` attribute in your step definition to conditionally run a step. In the example below, the `tests` step will only run if the build message does not contain the string "skip tests".

```yml
steps:
  - command: ./scripts/tests.sh
    label: tests
    if: build.message !~ /skip tests/
```

The `if` attribute can be used in any type of step, and with any of the supported expressions and parameters. However, it cannot be used at the same time as the `branches` attribute.

Be careful when defining conditionals within YAML. Many symbols have special meaning in YAML and will change the type of a value. You can avoid this by quoting your conditional as a string.
```yml
steps:
  - command: ./scripts/tests.sh
    label: tests
    if: "!build.pull_request.draft"
```

Multi-line conditionals can be added with the `|` character, and avoid the need for quotes:

```yml
steps:
  - command: ./scripts/tests.sh
    label: tests
    if: |
      // Do not run when the message contains "skip tests"
      // and
      // Only run on feature branches
      build.message !~ /skip tests/ && build.branch =~ /^feature\//
```

Since `if` conditions are evaluated at the time of the pipeline upload, it's not possible to use the `if` attribute to conditionally run a step based on the result of another step.

> 🚧 Plugin execution and conditionals
> Step-level `if` conditions only prevent commands from running but they _do not_ affect plugins. Plugins run during the job lifecycle, before the conditional is evaluated. To conditionally run plugins, use either [group steps](#conditionally-running-plugins-with-group-steps) or [dynamic pipeline uploads](#conditionally-running-plugins-with-dynamic-uploads).

To run a step based on the result of another step, upload a new pipeline based on the `if` condition set up in the [command step](/docs/pipelines/configure/step-types/command-step), as in the example below:

```yml
steps:
  - label: "Validation check"
    command: ./scripts/validation_tests.sh
    key: "validation-check"

  - label: "Run regression only if validation check is passed"
    depends_on: "validation-check"
    command: |
      if [ $$(buildkite-agent step get "outcome" --step "validation-check") == "passed" ]; then
        cat << YAML | buildkite-agent pipeline upload
      steps:
        - label: "Regression tests"
          command: ./scripts/regression_tests.sh
      YAML
      fi
```

##### Variable and syntax reference

The following syntax is supported in conditional expressions:

| Syntax | Examples |
| --- | --- |
| Comparators | `== != =~ !~` |
| Logical operators | `\|\| &&` |
| Array operators | `includes` |
| Integers | `12345` |
| Strings | `'feature-branch' "feature-branch"` |
| Literals | `true false null` |
| Parentheses | `( )` |
| Regular expressions | `/^v1\.0/` |
| Prefixes | `!` |
| Comments | `// This is a comment` |

> 🚧 Formatting regular expressions
> When using regular expressions in conditionals, the regular expression must be on the right hand side, and the use of the `$` anchor symbol must be escaped to avoid
[environment variable substitution](/docs/agent/cli/reference/pipeline#environment-variable-substitution). For example, to match branches ending in `/feature`, the conditional statement would be `build.branch =~ /\/feature$$/`.

###### Variables

The following variables are supported by the `if` attribute. Note that you cannot use [Build Meta-data](/docs/pipelines/configure/build-meta-data) in conditional expressions.

> 🚧 Unverified commits
> Note that GitHub accepts [unsigned commits](https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification), including information about the commit author, and passes them along to webhooks, so you should not rely on these for authentication unless you are confident that all of your commits are trusted.

| Variable | Type | Description |
| --- | --- | --- |
| `build.author.email` | `String` | The **[unverified](#unverified-commits)** email address of the user who authored the build's commit
| `build.author.id` | `String` | The **[unverified](#unverified-commits)** ID of the user who authored the build's commit
| `build.author.name` | `String` | The **[unverified](#unverified-commits)** name of the user who authored the build's commit
| `build.author.teams` | `Array` | An **[unverified](#unverified-commits)** array of the teams that the user who authored the build's commit is a member of
| `build.branch` | `String` | The branch the build was created from
| `build.commit` | `String` | The commit the current build is based on
| `build.creator.email` | `String` | The email address of the user who created the build. The value differs depending on how the build was created: **Buildkite dashboard:** Set based on who manually created the build. **GitHub webhook:** Set from the **[unverified](#unverified-commits)** HEAD commit. **Webhook:** Set based on which user is attached to the API Key used. For conditionals to use this variable, the user set must be a verified Buildkite user.
| `build.creator.id` | `String` | The ID of the user who created the build. The value differs depending on how the build was created: **Buildkite dashboard:** Set based on who manually created the build. **GitHub webhook:** Set from the **[unverified](#unverified-commits)** HEAD commit. **Webhook:** Set based on which user is attached to the API Key used. For conditionals to use this variable, the user set must be a verified Buildkite user.
| `build.creator.name` | `String` | The name of the user who created the build. The value differs depending on how the build was created: **Buildkite dashboard:** Set based on who manually created the build. **GitHub webhook:** Set from the **[unverified](#unverified-commits)** HEAD commit. **Webhook:** Set based on which user is attached to the API Key used. For conditionals to use this variable, the user set must be a verified Buildkite user.
| `build.creator.teams` | `Array` | An array of the teams that the user who created the build is a member of. The value differs depending on how the build was created: **Buildkite dashboard:** Set based on who manually created the build. **GitHub webhook:** Set from the **[unverified](#unverified-commits)** HEAD commit. **Webhook:** Set based on which user is attached to the API Key used. For conditionals to use this variable, the user set must be a verified Buildkite user.
| `build.env()` | `String`, `null` | This function returns the value of the environment variable passed as the first argument if that variable is set, or `null` if the environment variable is not set.
`build.env()` works with variables you've defined, and the following `BUILDKITE_*` variables:

- `BUILDKITE_BRANCH`
- `BUILDKITE_TAG`
- `BUILDKITE_MESSAGE`
- `BUILDKITE_COMMIT`
- `BUILDKITE_PIPELINE_SLUG`
- `BUILDKITE_PIPELINE_NAME`
- `BUILDKITE_PIPELINE_ID`
- `BUILDKITE_ORGANIZATION_SLUG`
- `BUILDKITE_TRIGGERED_FROM_BUILD_ID`
- `BUILDKITE_TRIGGERED_FROM_BUILD_NUMBER`
- `BUILDKITE_TRIGGERED_FROM_BUILD_PIPELINE_SLUG`
- `BUILDKITE_REBUILT_FROM_BUILD_ID`
- `BUILDKITE_REBUILT_FROM_BUILD_NUMBER`
- `BUILDKITE_REPO`
- `BUILDKITE_PULL_REQUEST`
- `BUILDKITE_PULL_REQUEST_BASE_BRANCH`
- `BUILDKITE_PULL_REQUEST_REPO`
- `BUILDKITE_MERGE_QUEUE_BASE_BRANCH`
- `BUILDKITE_MERGE_QUEUE_BASE_COMMIT`
- `BUILDKITE_GITHUB_DEPLOYMENT_ID`
- `BUILDKITE_GITHUB_DEPLOYMENT_TASK`
- `BUILDKITE_GITHUB_DEPLOYMENT_ENVIRONMENT`
- `BUILDKITE_GITHUB_DEPLOYMENT_PAYLOAD`
- `BUILDKITE_GITHUB_ACTION`
- `BUILDKITE_GITHUB_CHECK_RUN_CONCLUSION`
- `BUILDKITE_GITHUB_COMMENT_ID`
- `BUILDKITE_GITHUB_CHECK_RUN_NAME`
- `BUILDKITE_GITHUB_DEPLOYMENT_STATUS_ENVIRONMENT`
- `BUILDKITE_GITHUB_DEPLOYMENT_STATUS_STATE`
- `BUILDKITE_GITHUB_EVENT`
- `BUILDKITE_GITHUB_RELEASE_DRAFT`
- `BUILDKITE_GITHUB_RELEASE_PRERELEASE`
- `BUILDKITE_GITHUB_RELEASE_TAG`
- `BUILDKITE_GITHUB_REVIEW_ID`
- `BUILDKITE_GITHUB_REVIEW_STATE`

| `build.id` | `String` | The ID of the current build |
| `build.message` | `String`, `null` | The current build's message |
| `build.number` | `Integer` | The number of the current build |
| `build.pull_request.base_branch` | `String`, `null` | The base branch that the pull request is targeting, otherwise `null` if the branch is not a pull request |
| `build.pull_request.id` | `String`, `null` | The number of the pull request, otherwise `null` if the branch is not a pull request |
| `build.pull_request.draft` | `Boolean`, `null` | If the pull request is a draft, otherwise `null` if the branch is not a pull request or the provider doesn't support draft pull requests |
| `build.pull_request.labels` | `Array` | An array of label names attached to the pull request |
| `build.pull_request.label` | `String`, `null` | The name of the specific label that was added or removed in a `labeled` or `unlabeled` pull request event, otherwise `null` |
| `build.pull_request.repository` | `String`, `null` | The repository URL of the pull request, otherwise `null` if the branch is not a pull request |
| `build.pull_request.repository.fork` | `Boolean`, `null` | If the pull request comes from a forked repository, otherwise `null` if the branch is not a pull request |
| `build.merge_queue.base_branch` | `String`, `null` | If a merge queue build, the target branch which the merge queue build will be merged into |
| `build.merge_queue.base_commit` | `String`, `null` | If a merge queue build, the [merge base](https://git-scm.com/docs/git-merge-base) of the proposed merge commit (`build.commit`) |
| `build.source` | `String` | The source of the event that created the build. _Available sources:_ `ui`, `api`, `webhook`, `trigger_job`, `schedule` |
| `build.source_event` | `String`, `null` | The GitHub webhook event type that triggered the build (for example, `push`, `pull_request`, `release`). `null` for non-webhook builds |
| `build.source_action` | `String`, `null` | The GitHub webhook event action (for example, `opened`, `labeled`, `submitted`). `null` for non-webhook builds or events without an action |
| `build.state` | `String` | The state the current build is in. _Available states:_ `started`, `scheduled`, `running`, `passed`, `failed`, `failing`, `started_failing`, `blocked`, `canceling`, `canceled`, `skipped`, `not_run` |
| `build.tag` | `String`, `null` | The tag associated with the commit the current build is based on |
| `pipeline.default_branch` | `String`, `null` | The default branch of the pipeline the current build is from |
| `pipeline.id` | `String` | The ID of the pipeline the current build is from |
| `pipeline.repository` | `String`, `null` | The repository of the pipeline the current build is from |
| `pipeline.slug` | `String` | The slug of the pipeline the current build is from |
| `pipeline.started_failing` | `Boolean` | Whether the pipeline transitioned from a passing state to a failing state with the current build. Only available in [build-level notification](/docs/pipelines/configure/notify#conditional-notifications) conditionals |
| `pipeline.started_passing` | `Boolean` | Whether the pipeline transitioned from a failing state to a passing state with the current build. Only available in [build-level notification](/docs/pipelines/configure/notify#conditional-notifications) conditionals |
| `organization.id` | `String` | The ID of the organization the current build is running in |
| `organization.slug` | `String` | The slug of the organization the current build is running in |

> 🚧 Using `build.env()` with custom environment variables
> To access custom environment variables with the `build.env()` function, ensure that the [YAML pipeline steps editor](https://buildkite.com/changelog/32-defining-pipeline-build-steps-with-yaml) has been enabled in the Pipeline Settings menu.

The following step variables are also available for [conditional notifications](#conditional-notifications) only.
| `step.id` | `String` | The ID of the current step |
| `step.key` | `String`, `null` | The key of the current step |
| `step.label` | `String`, `null` | The label of the current step |
| `step.type` | `String` | The type of the current step. _Available types:_ `command`, `wait`, `input`, `trigger`, `group` |
| `step.state` | `String` | The state of the current step. _Available states:_ `ignored`, `waiting_for_dependencies`, `ready`, `running`, `failing`, `finished` |
| `step.outcome` | `String` | The outcome of the current step. _Available outcomes:_ `neutral`, `passed`, `soft_failed`, `hard_failed`, `errored` |

##### Example expressions

To run only when the branch is `main` or `production`:

```js
build.branch == "main" || build.branch == "production"
```

To run only when the branch is not `production`:

```js
build.branch != "production"
```

To run only when the branch starts with `features/`:

```js
build.branch =~ /^features\//
```

To run only when the branch ends with `/release-123`, such as `feature/release-123`:

```js
build.branch =~ /\/release-123$/
```

To run only when building a tag:

```js
build.tag != null
```

To run only when building a tag that begins with `v` and ends with `.0`, such as `v1.0`:

```js
// Using the tag variable
build.tag =~ /^v[0-9]+\.0$/

// Using the env function
build.env("BUILDKITE_TAG") =~ /^v[0-9]+\.0$/
```

To run only if the message doesn't contain `[skip tests]`, case insensitive:

```js
build.message !~ /\[skip tests\]/i
```

To run only if the build was created from a schedule:

```js
build.source == "schedule"
```

To run when the value of `CUSTOM_ENVIRONMENT_VARIABLE` is `value`:

```js
build.env("CUSTOM_ENVIRONMENT_VARIABLE") == "value"
```

To run when the **[unverified](#unverified-commits)** build creator is in the `deploy` team:

```js
build.creator.teams includes "deploy"
```

To run only non-draft pull requests:

```js
!build.pull_request.draft
```

To run only on merge queue builds targeting the `main` branch:

```js
build.merge_queue.base_branch == "main"
```

---

### Depends on

URL: https://buildkite.com/docs/pipelines/configure/depends-on

#### Depends on

All steps in pipelines have _implicit dependencies_, usually managed with [wait and block](#implicit-dependencies-with-wait-and-block) steps. However, you can manually change the dependency structure of your steps by defining _explicit dependencies_ using the [`depends_on` attribute](#defining-explicit-dependencies).

##### Implicit dependencies with wait and block

[Wait](/docs/pipelines/configure/step-types/wait-step) and [block](/docs/pipelines/configure/step-types/block-step) steps provide an implicit dependency structure to your pipeline. By adding these steps to your pipeline, the Buildkite scheduler will automatically know which steps need to run in serial and which can run in parallel.

In the following example, the [wait step](/docs/pipelines/configure/step-types/wait-step) depends on all previous steps completing successfully. The `wait` won't proceed until all steps defined above it have passed. All steps following the wait step are dependent on this wait step—none of them will run until the wait step is satisfied.

```yml
steps:
  - command: "one.sh"
  - command: "two.sh"
  - wait: ~
  - command: "three.sh"
  - command: "four.sh"
```

A [block step](/docs/pipelines/configure/step-types/block-step) performs the same function, but also requires unblocking either manually or using an API call before the subsequent steps can run. If you are collecting information with your block steps using the `prompt` or `fields` attributes but don't want them to implicitly depend on the steps around them, you can use an [input step](/docs/pipelines/configure/step-types/input-step).

```yml
steps:
  - input: "Who is running this script?"
    fields:
      - text: "Your name"
        key: "name"
```

##### Defining explicit dependencies

The `depends_on` attribute can be added to all step types.
To add a dependency on another step, add the `depends_on` attribute with the `key` of the step you're depending on:

```yml
steps:
  - command: "tests.sh"
    key: "tests"
  - command: "build.sh"
    key: "build"
    depends_on: "tests"
```

In this example, the second command step (`build.sh`) will not run until the first command step (`tests.sh`) has completed. Without the `depends_on` attribute, and given enough agents, these steps would run in parallel.

> 🚧 `depends_on` and `block` / `wait`
> Note that a step with an explicit dependency specified with the `depends_on` attribute will run immediately after the dependency step has completed, without waiting for `block` or `wait` steps unless those are also explicit dependencies.

Dependencies can also be added as a list of strings, or a list of steps. Both formats use the step `key` to refer to the step.

```yml
steps:
  - command: "test-suite.sh"
    key: "test-suite"
  - command: "another-thing.sh"
    key: "another-thing"
  - command: "tests.sh"
    depends_on:
      - "test-suite"
      - "another-thing"
```

Or alternatively:

```yml
steps:
  - command: "test-suite.sh"
    key: "test-suite"
  - command: "another-thing.sh"
    key: "another-thing"
  - command: "tests.sh"
    depends_on:
      - step: "test-suite"
      - step: "another-thing"
```

> 🚧 Explicit dependencies in uploaded steps
> If a step depends on an upload step, then all steps uploaded by that step become dependencies of the original step. For example, if step B depends on step A, and step A uploads step C, then step B will also depend on step C.

To ensure that a step is not dependent on any other step, add an explicit empty dependency with the `~` character (YAML), `null` (JSON), or `[]` (JSON and YAML). This also ensures that the step will run immediately regardless of [implicit dependencies](#implicit-dependencies-with-wait-and-block).
For example, in YAML:

```yml
steps:
  - command: "tests.sh"
  - wait: ~
  - command: "lint.sh"
    depends_on: ~
```

Or alternatively:

```yml
steps:
  - command: "tests.sh"
  - wait: ~
  - command: "lint.sh"
    depends_on: []
```

In JSON:

```json
{
  "steps": [
    { "command": "tests.sh" },
    { "wait": null },
    { "command": "lint.sh", "depends_on": [] }
  ]
}
```

While the second command step in these examples is defined after a wait step, its empty dependency directs this command not to depend on the `wait` step, so that both command steps are available to run immediately at the start of the build.

Explicit dependencies on block steps can be added without setting additional input values. You can use this to define a **Deploy** button, for example.

```yml
steps:
  - command: "build.sh"
    key: "built"
  - block: ":rocket: Release!"
    key: "blocked-deploy"
    depends_on:
      - "built"
  - command: "release.sh"
    depends_on:
      - "built"
      - "blocked-deploy"
```

##### Order of operations

There are three step attributes that can each affect when a step is able to run:

* `if`/`branches`
* `depends_on`
* `concurrency_group`

If the step you're dependent on doesn't exist, the build will fail without running the step that is waiting for the dependency. However, if the step you're dependent on is excluded from the build due to an `if` condition, the dependency will be ignored and the step that depends on it will run once any other dependencies are satisfied.

Steps that are in a `concurrency_group` run in the order they are created and can be delayed by the `concurrency` attribute. If your step depends on a step that is in a `concurrency_group`, there is an implicit dependency on the rest of the steps in that group. For more information about concurrency groups, see the [Controlling concurrency guide](/docs/pipelines/configure/workflows/controlling-concurrency#concurrency-groups).

##### Allowing dependency failures

You can add the `allow_dependency_failure` attribute to any step that has dependencies.
The step will then run when the depended-on jobs complete, fail, or do not run. However, if you cancel a job, any subsequent steps with `allow_dependency_failure: true` do not execute. Note that even if the next step continues to run, the build will still fail if there are any failures.

```yml
steps:
  - command: "tests.sh"
    key: "tests"
  - command: "build.sh"
    key: "build"
    depends_on: "tests"
    allow_dependency_failure: true
```

For finer control, you can explicitly allow or deny failures on an individual dependency basis using the `allow_failure` attribute with a step dependency.

```yml
steps:
  - command: "tests.sh"
    depends_on:
      - step: "test-suite"
        allow_failure: true
      - step: "another-thing"
        allow_failure: false
```

This pattern is often used to run steps like code coverage or annotations to the build log that give insight into what failed.

##### How skipped steps affect dependencies

When a step is skipped (due to an `if` condition returning `false`), any steps that depend on it will still run. Skipped steps are treated as "satisfied" dependencies.

> 🚧 Skipped dependencies are treated as satisfied
> When a step that another step depends on is skipped due to a conditional, the dependency is treated as satisfied and dependent steps will run. Skipped dependencies are treated as passing, which is different from failed or canceled steps, which block dependent steps unless `allow_dependency_failure` is used.
The following table shows how different step states affect dependencies:

| Step State | Dependency Result | Dependent Steps Behavior |
|------------|------------------|---------------------------|
| **Passed** | ✅ Satisfied | Run normally |
| **Skipped** (due to `if` condition) | ✅ Satisfied | **Run normally** |
| **Failed** (with `allow_failure: true`) | ✅ Satisfied | Run normally |
| **Failed** (no `allow_failure`) | ❌ Failed | Don't run |
| **Blocked** | ⏸️ Blocked | Wait for unblocking |
| **Canceled/Expired** | ❌ Failed | Don't run |

###### Skipped dependency behavior

In this example, when building a branch other than `main`, the **Conditional Step** will be skipped but the **Dependent Step** will still run because the skipped dependency is satisfied.

```yaml
steps:
  - label: "Conditional Step"
    key: "conditional"
    command: "echo 'This only runs on main'"
    if: build.branch == "main"

  - label: "Dependent Step"
    command: "echo 'This always runs'"
    depends_on: "conditional"
```

##### Allowed failure and soft fail

Setting [`soft_fail`](/docs/pipelines/configure/step-types/command-step#soft-fail-attributes) on a step will also allow steps that depend upon it to run, even when [`allow_dependency_failure: false`](/docs/pipelines/configure/depends-on#allowing-dependency-failures) is set on the subsequent step. In the following example, `step-b` will run because `step-a` is soft failing. If `step-a` were to fail with a different exit code, `step-b` would not run.

```yml
steps:
  - key: "step-a"
    command: echo "soft fail" && exit 42
    soft_fail:
      - exit_status: 42
  - key: "step-b"
    command: echo "Running"
    depends_on:
      - "step-a"
    allow_dependency_failure: false
```

##### Allowed failure and waiting states

Note that steps which do not run due to failed dependencies are in the `waiting_failed` state, which is included in the scope of `allow_failure` when that is set.
For example:

```yml
steps:
  - command: echo "step-a fails" && exit 1
    key: step-a
  - command: echo "step-b does not run" && exit 0
    key: step-b
    depends_on:
      - step: step-a
  - command: echo "step-c runs even when step-b does not"
    key: step-c
    depends_on:
      - step: step-b
        allow_failure: true
```

---

### Environment variables

URL: https://buildkite.com/docs/pipelines/configure/environment-variables

#### Environment variables

When the agent invokes your build scripts it passes in a set of standard Buildkite environment variables, along with any that you've defined in your build configuration. You can use these environment variables in your [build steps](/docs/pipelines/configure/defining-steps) and [job lifecycle hooks](/docs/agent/hooks#job-lifecycle-hooks).

Environment variable size limits depend on the operating system the agents run on. When a program or process is started, it can typically accept inputs as either one or more environment variables in the form of `key=value` pairs, or a list (array) of command line arguments (referred to as a vector of arguments, or `argv`). Depending on the operating system, there may be a single size limit shared across all environment variables and `argv`, or a size limit imposed per item (such as per environment variable).

For best practices and recommendations about using secrets in your environment variables, see the [Managing secrets](/docs/pipelines/security/secrets/managing) guide.

##### Buildkite environment variables

The following environment variables may be visible in your commands, plugins, and hooks.

> 🚧 Unverified commits
> Note that GitHub accepts [unsigned commits](https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification), including information about the commit author, and passes them along to webhooks, so you should not rely on these for authentication unless you are confident that all of your commits are trusted.
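As a small illustrative sketch of how these variables typically appear in a pipeline (the step label and echo commands here are hypothetical), the following command step prints a few of the standard `BUILDKITE_*` variables. The `$$` escaping defers interpolation until the job runs on the agent, as described in the Runtime variable interpolation section of this page:

```yml
steps:
  - label: "Show build context"  # hypothetical step, for illustration only
    command: |
      echo "Branch: $$BUILDKITE_BRANCH"
      echo "Commit: $$BUILDKITE_COMMIT"
      echo "Pipeline: $$BUILDKITE_PIPELINE_SLUG"
```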
##### Deprecated environment variables

The following environment variables have been deprecated.

| `BUILDKITE_PROJECT_PROVIDER` | This has been renamed to `BUILDKITE_PIPELINE_PROVIDER`. |
| `BUILDKITE_PROJECT_SLUG` | This has been renamed to `BUILDKITE_PIPELINE_SLUG`. |
| `BUILDKITE_SCRIPT_PATH` | This has been renamed to `BUILDKITE_COMMAND`. |
| `BUILDKITE_STEP_IDENTIFIER` | This has been renamed to `BUILDKITE_STEP_KEY`. |
| `BUILDBOX_AGENT_ID` | This has been renamed to `BUILDKITE_AGENT_ID`. |
| `BUILDBOX_AGENT_NAME` | This has been renamed to `BUILDKITE_AGENT_NAME`. |
| `BUILDBOX_AGENT_META_DATA_*` | This has been renamed to `BUILDKITE_AGENT_META_DATA_*`. |
| `BUILDBOX_AGENT_ACCESS_TOKEN` | This has been renamed to `BUILDKITE_AGENT_ACCESS_TOKEN`. |
| `BUILDBOX_AGENT_API_URL` | This has been removed with no replacement. |

##### Defining your own

You can define environment variables in your jobs in a few ways, depending on the nature of the value being set:

- The **YAML Steps editor** in your pipeline settings, using a top-level `env` attribute before your steps — for values that are *not secret*.
- [Build pipeline configuration](/docs/pipelines/configure/step-types/command-step) — for values that are *not secret*.
- An `environment` or `pre-command` [agent hook](/docs/agent/hooks) — for values that are secret or agent-specific.

> 🚧 Secrets in environment variables
> Do not print or export secrets in your pipelines. See the [Secrets](/docs/pipelines/security/secrets/managing) documentation for further information and best practices.

##### Variable interpolation

Any environment variables set by Buildkite will be interpolated by the Agent.
If you're using the **YAML Steps editor** to define your pipeline, only the following subset of the environment variables are available:

- `BUILDKITE_BRANCH`
- `BUILDKITE_TAG`
- `BUILDKITE_MESSAGE`
- `BUILDKITE_COMMIT`
- `BUILDKITE_PIPELINE_SLUG`
- `BUILDKITE_PIPELINE_NAME`
- `BUILDKITE_PIPELINE_ID`
- `BUILDKITE_ORGANIZATION_SLUG`
- `BUILDKITE_TRIGGERED_FROM_BUILD_PIPELINE_SLUG`
- `BUILDKITE_REPO`
- `BUILDKITE_PULL_REQUEST`
- `BUILDKITE_PULL_REQUEST_BASE_BRANCH`
- `BUILDKITE_PULL_REQUEST_REPO`
- `BUILDKITE_MERGE_QUEUE_BASE_BRANCH`
- `BUILDKITE_MERGE_QUEUE_BASE_COMMIT`

Some variables, for example `BUILDKITE_BUILD_NUMBER`, cannot be supported in the **YAML Steps editor** because the interpolation happens before the build is created. In those cases, interpolate them at [runtime](/docs/pipelines/configure/environment-variables#runtime-variable-interpolation).

Alternatively, you can access the rest of the Buildkite [environment variables](/docs/pipelines/configure/environment-variables#buildkite-environment-variables) by using a `pipeline.yml` file. Either define your entire pipeline in the YAML file, or do a [pipeline upload](/docs/agent/cli/reference/pipeline) part way through your build that adds only the steps that use environment variables. See the [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) docs for more information about adding steps with pipeline uploads.

##### Runtime variable interpolation

When using environment variables that will be evaluated at runtime, make sure you escape the `$` character using `$$` or `\$`. For example:

```yml
- command: "deploy.sh $$SERVER"
  env:
    SERVER: "server-a"
```

Further details about environment variable interpolation can be found in the [pipeline upload](/docs/agent/cli/reference/pipeline#environment-variable-substitution) CLI guide.

##### Environment variable precedence

You can set environment variables in lots of different places, and which ones take precedence can get a little confusing.
There are many different levels at which environment variables are merged together. The following walkthrough and examples demonstrate the order in which variables are combined, as if you had set variables in every available place.

###### Job environment

When a job runs on an agent, the first combination of environment variables happens in the job environment itself. This is the environment you can see in a job's Environment tab in the Buildkite dashboard, and the one returned by the REST and GraphQL APIs.

> 📘
> If you are not using YAML steps, the precedence of environment variables is different from the list below. Please [migrate your pipelines](/docs/pipelines/tutorials/pipeline-upgrade) to use YAML steps.

The job environment is made by merging the following sets of values, where values in each successive set take precedence:

| _Pipeline_ | Optional variables set by you in the YAML Steps editor using a top-level `env` attribute |
| _Build_ | Optional variables set by you on the build when creating a new build in the UI or using the REST API |
| _Step_ | Optional variables set by you on a step in the YAML steps editor or a pipeline.yml file |
| _Standard_ | The set of variables provided by Buildkite to every job |

For example, if you had configured the following environment variables:

| _Pipeline_ | `MY_ENV1="a"` |
| _Build_ | `MY_ENV1="b"` |
| _Step_ | `MY_ENV1="c"` |

In the final job environment, the value of `MY_ENV1` would be `"c"`.

###### Setting variables in a pipeline.yml

There are two places in a pipeline.yml file that you can set environment variables:

1. In the `env` attribute of command and trigger steps.
1. In the `env` attribute at the top of the YAML file, before you define your pipeline's steps.

Defining an environment variable at the top of your YAML file will set that variable on each of the command steps in the pipeline that have not already started running, and is equivalent to setting the `env` attribute on every step.
This includes further pipeline uploads through `buildkite-agent pipeline upload`.

> 🚧 Concurrent pipeline uploads and environment variables
> Concurrent pipeline uploads with build-level environment variables can cause unpredictable behavior by modifying the environment for steps that haven't started yet.
> This affects steps running after pipeline uploads, signed pipeline steps (where environment variables affect signature verification), and jobs that depend on specific environment variable values.
> Issues typically occur when multiple pipeline uploads that include build-level environment variables happen at the same time or set the same environment variable to different values.

###### Setting variables in a Trigger step

Environment variables are not automatically passed through to builds created with [trigger steps](/docs/pipelines/configure/step-types/trigger-step). To set build-level environment variables on triggered builds, set the trigger step's `env` attribute.

###### Agent environment

Separate from the job's base environment, your `buildkite-agent` process has an environment of its own. This is made up of:

- operating system environment variables
- any variables you set on your agent when you started it
- any environment variables that were inherited from how you started the process (for example, systemd sets some env vars for you)

For a list of variables and configuration flags you can set on your agent, see the Buildkite agent's [start command documentation](/docs/agent/cli/reference/start).

> 📘
> When using the [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s) controller, environment variables declared as part of a PodSpec will also take precedence when the Kubernetes job is created. Learn more about this in [Kubernetes PodSpec generation](/docs/agent/self-hosted/agent-stack-k8s/podspec#kubernetes-podspec-generation).

###### Job runtime environment

Once the job is accepted by an agent, more environment merging happens.
Starting with the environment that we put together in the [Job environment section](#environment-variable-precedence-job-environment), we merge in some of the variables from the agent environment.

> 📘
> Not all variables from the agent are available in the job runtime. For example, we remove the agent's registration token and replace it with a build session token that has limited permissions. This new session token is used when you run the `artifact`, `meta-data` and `pipeline` commands inside the job.

After the agent variables have been merged, the bootstrap script is run. The bootstrap runs any hooks that have been defined by your [agent](/docs/agent/hooks#hook-locations-agent-hooks), your [repository](/docs/agent/hooks#hook-locations-repository-hooks) or [plugins](/docs/agent/hooks#hook-locations-plugin-hooks). Variables that are set in these hooks will be merged into the runtime environment, and will override any previous values that are set.

> 🚧 Take care with environment variables in hooks
> Variables that are defined in hooks can override anything that exists in the environment.

This is the environment your command runs in 🎉

Finally, if your job's commands make any changes to the environment, those changes will only survive as long as the script is running.

---

### Skipping builds

URL: https://buildkite.com/docs/pipelines/configure/skipping

#### Skipping builds

Build skipping allows you to avoid unnecessary rebuilds, conserving resources and freeing up agents.

##### Skip queued intermediate builds

Sometimes you may push several commits in quick succession, leading to Buildkite building each commit in turn. You can configure your pipeline to always skip these intermediate builds, and only build the latest commit.

To skip pending builds on the same branch:

1. Navigate to your pipeline's **Settings**.
1. Select **Builds**.
1. Select **Skip Intermediate Builds**.
1.
(Optional) Limit which branches build skipping applies to by adding branch names in the text box below **Skip Intermediate Builds**. For example, `branch-one` means Buildkite only skips intermediate builds on branch-one. You can also use not-equals: `!main` skips intermediate builds on all branches except main.

You can also configure these options using the [REST API](/docs/apis/rest-api/pipelines#create-a-yaml-pipeline).

##### Ignore a commit

Some code changes, such as editing a Readme, may not require a Buildkite build. If you want Buildkite to ignore a commit, add `[ci skip]`, `[skip ci]`, `[ci-skip]`, or `[skip-ci]` anywhere in the commit message.

If pull request events are enabled for a given pipeline, when a pull request is created, a build will also be triggered unless `[ci skip]`, `[skip ci]`, `[ci-skip]`, or `[skip-ci]` is added to the pull request title.

> 📘
> When squashing commits in a merge, any commit message that contains `[skip ci]` will be included in the squashed commit message. This means that the merge will not trigger a build.
> To avoid this and have the merge trigger a build, remove the `[skip ci]` text from the squashed commit message.

For example, the following commit message will cause Buildkite to ignore the commit and not create a corresponding build:

```
Fix readme typos [skip ci]
```

Multi-line commit messages are also supported. For example, the following commit message will also cause Buildkite to ignore the commit:

```
Fix readme typos

* Fixed the build badge
* Fixed broken GitHub link

[skip ci]
```

For more advanced build filtering and commit skipping, see the [Using conditionals](/docs/pipelines/configure/conditionals) guide.

> 🚧 Skipping commits with Bitbucket Server
> Not all webhooks from Bitbucket Server contain the commit message. When a commit message is not included in a webhook, the build will run.
##### Ignore pull requests

You can skip pull requests by adding `[ci skip]`, `[skip ci]`, `[ci-skip]`, or `[skip-ci]` anywhere in the title of a pull request. Refer to [Running builds on pull requests](/docs/pipelines/source-control/github#running-builds-on-pull-requests) for more information.

##### Ignore branches

You can choose to always ignore certain branches. Refer to [Branch configuration](/docs/pipelines/configure/workflows/branch-configuration) for more information.

##### Skip builds using conditionals

You can use conditionals to skip builds at both the pipeline and step level. Refer to [Conditionals](/docs/pipelines/configure/conditionals) for more information.

##### Skip builds with existing commits

Sometimes you don't want to trigger a new build for a commit that's already passed validation, regardless of the branch. For example, when [using merge queues in GitHub](/docs/pipelines/tutorials/github-merge-queue).

To skip a build with existing commits:

1. From your Buildkite dashboard, select your pipeline.
1. Select **Settings** > **GitHub**.
1. In the **GitHub Settings** section, select the **Skip builds with existing commits** checkbox.

---

### Canceling builds

URL: https://buildkite.com/docs/pipelines/configure/canceling-builds

#### Canceling builds

Buildkite Pipelines provides several ways to cancel builds and jobs, either automatically or manually.

##### Cancel running intermediate builds

Sometimes you may push several commits in quick succession, leading to Buildkite Pipelines building each commit in turn. You can configure your pipeline to cancel these running builds and only build the latest commit.

To cancel running builds on the same branch:

1. Navigate to your pipeline's **Settings**.
1. Select **Builds**.
1. Select **Cancel Intermediate Builds**.
1. (Optional) Limit which branches build canceling applies to by adding branch names in the text box below **Cancel Intermediate Builds**.
For example, `branch-one` means Buildkite Pipelines only cancels intermediate builds on branch-one. You can also use not-equals: `!main` cancels intermediate builds on all branches except main.

You can also configure these options using the [REST API](/docs/apis/rest-api/pipelines#create-a-yaml-pipeline).

> 🚧 Using **Cancel Intermediate Builds** and re-running earlier builds
> If an earlier build has started running again (for example, due to a job being retried) while the newest build is already running, then this earlier build will not be canceled.
> If, however, an earlier build has started running again _before_ a new build starts running, then the earlier build will be canceled.

##### Manually cancel a job

If your pipeline has multiple command steps, you can manually cancel a step, which will cause the build to fail. If you do _not_ want the build to fail when you cancel a specific step, you can set [`soft_fail`](/docs/pipelines/configure/step-types/command-step#soft-fail-attributes).

To manually cancel a job:

1. From your Buildkite dashboard, select your pipeline.
2. Select the running build.
3. Select the job (step) you want to cancel.
4. Select **Cancel**.

##### Cancel a build using the agent CLI

You can cancel a build using the [`buildkite-agent build cancel` command](/docs/agent/cli/reference/build#canceling-a-build). This is a job-level command, meaning it runs within the context of a job and authenticates using the `$BUILDKITE_AGENT_ACCESS_TOKEN` environment variable that Buildkite Pipelines automatically provides to every running job—on both [self-hosted](/docs/agent/self-hosted) and [Buildkite hosted](/docs/agent/buildkite-hosted) agents.

```shell
buildkite-agent build cancel
```

This cancels the build associated with the current job's context. You can also target a specific build using the [`--build` flag](/docs/agent/cli/reference/build#build) with the build UUID, or by setting the `$BUILDKITE_BUILD_ID` environment variable.
This command is typically called from within a pipeline step script. If you are using Buildkite hosted agents, you can also run the command interactively from a [terminal session](/docs/agent/buildkite-hosted/terminal-access) open on a running job. This is a separate browser-based feature for investigating the job environment.

---

### Retry

URL: https://buildkite.com/docs/pipelines/configure/retry

#### Retry

The `retry` attribute of a [command step](/docs/pipelines/configure/step-types/command-step) controls whether and how a job can be retried. You can configure automatic retries for transient failures, manual retries for user-initiated reruns, or both.

```yml
steps:
  - label: "Tests"
    command: "tests.sh"
    retry:
      automatic: true
  - wait: ~
  - label: "Deploy"
    command: "deploy.sh"
    retry:
      manual: false
```

##### Retry behavior

If you retry a job, the information about the failed job(s) remains, and a new job is created. The history of retried jobs is preserved and immutable.

For automatic retries, the number of possible retries can be set with a [`limit` attribute](/docs/pipelines/configure/retry#retry-attributes-automatic-retry-attributes) on the job's step. When a limit is not specified, the default limit is two.

You can also see when a job has been retried and whether it was retried automatically or by a user. Such jobs are hidden by default—you can expand and view all the hidden retried jobs.

In the Buildkite web interface, there is a [Job Retries Report section](https://buildkite.com/organizations/~/reports/job-retries) where you can view a graphic report on jobs retried manually or automatically within the last 30 days. This can help you understand flakiness and instability across all of your pipelines.

##### Retry attributes

The `retry` attribute requires one of the following attributes:

| [`automatic`](#retry-attributes-automatic-retry-attributes) | Whether to allow a job to retry automatically. This field accepts a boolean value, individual retry conditions, or a list of multiple different retry conditions. If set to `true`, the retry conditions are set to the default value. _Default value:_ `exit_status: "*"`, `signal: "*"`, `signal_reason: "*"`, `limit: 2` _Example:_ `true` |
| [`manual`](#retry-attributes-manual-retry-attributes) | Whether to allow a job to be retried manually. This field accepts a boolean value, or a single retry condition. _Default value:_ `true` _Example:_ `false` |

Conditions on retries can be specified. For example, it's possible to set steps to be retried automatically if they exit with particular exit codes, or to prevent retries on important steps like deployments. The following example shows different retry configurations:

```yml
steps:
  - label: "Tests"
    command: "tests.sh"
    retry:
      automatic:
        - exit_status: 5
          limit: 2
        - exit_status: "*"
          limit: 4
  - wait: ~
  - label: "Deploy"
    command: "deploy.sh"
    branches: "main"
    retry:
      manual:
        allowed: false
        reason: "Deploys shouldn't be retried"
```

###### Automatic retry attributes

The `retry.automatic` attribute has the following optional attributes:

| `exit_status` | The exit status number or numbers that cause this job to be retried. This attribute accepts a single integer, an array of integers, or `"*"` (wildcard). Valid exit status values are between 0 and 255, plus `-1` (the value returned when an agent is lost and Buildkite no longer receives contact from the agent). A `"*"` matches any value between 1 and 255 (excluding `0`). _Default value:_ `"*"` _Examples:_ `"*"`, `2`, `-1`, `[1, 5, 42, 255]` |
| `signal` | The signal that causes this job to be retried. This attribute accepts a string, an array of strings, or `"*"` (wildcard). This signal only appears if the agent sends a signal to the job and an interior process does not handle the signal. `SIGKILL` propagates reliably because it cannot be handled, and is a useful way to differentiate graceful cancelation and timeouts. Signal matching is case-insensitive and the `SIG` prefix is optional (for example, `SIGKILL` and `kill` are equivalent). Use `"none"` to match jobs that received no signal. _Default value:_ `"*"` _Examples:_ `"*"`, `"none"`, `kill`, `SIGINT` |
| `signal_reason` | The reason associated with a job failure. This attribute accepts a string, an array of strings, or `"*"` (wildcard). Use `"none"` to match jobs with no signal reason. Some signal reasons represent cases where a running job was signaled to stop, for example, `cancel` or `agent_stop`. Other signal reasons indicate that the job never ran in the first place, for example, `signature_rejected`, `agent_incompatible`, or `stack_error`. _Default value:_ `"*"` _Available values:_ `"*"` — matches any signal reason; `none` — matches jobs with no signal reason; `cancel` — the job was canceled or timed out; `agent_stop` — the agent was stopped while running the job; `agent_refused` — the agent refused the job; `agent_incompatible` — the agent was incompatible with the job; `process_run_error` — the process failed to start; `signature_rejected` — the job signature was rejected; `stack_error` — an error occurred provisioning infrastructure for the job |
| `limit` | The number of times this job can be retried. The maximum value this can be set to is 10. Each retry rule tracks its own count independently. _Default value:_ `2` _Example:_ `3` You can also set this value to `0` to prevent a job from being retried. This is useful if, for example, the job returns a `signal_reason` of `stack_error`. Learn more about this in the [Retry attributes](/docs/apis/agent-api/stacks#finish-a-job-retry-attributes) section of the [Stacks API](/docs/apis/agent-api/stacks). |

When a single retry rule specifies multiple conditions (`exit_status`, `signal`, and `signal_reason`), all conditions must match for that rule to trigger a retry. If you define multiple retry rules, they are evaluated in the order they appear, and the first matching rule is applied.
Exit statuses not matched by any rule are not retried, so you don't need to explicitly set `limit: 0` for unmatched statuses. ```yml steps: - label: "Tests" command: "tests.sh" retry: automatic: - exit_status: -1 # Agent was lost limit: 2 - exit_status: 255 # Forced agent shutdown limit: 2 ``` > 📘 -1 exit status > A job will fail with an exit status of -1 if communication with the agent has been lost (for example, the agent has been forcefully terminated, or the agent machine was shut down without allowing the agent to disconnect). See [Exit codes](/docs/agent/lifecycle#exit-codes) for information on other such codes. The following example shows a step with combined retry conditions. The first rule retries up to three times when the agent refuses the job (both the exit status and signal reason must match). The second rule retries up to two times for any other failure. ```yml steps: - label: "Tests" command: "tests.sh" retry: automatic: - exit_status: -1 signal_reason: agent_refused limit: 3 - exit_status: "*" limit: 2 ``` ###### Manual retry attributes The `retry.manual` attribute has the following optional attributes: | `allowed` | A boolean value that defines whether or not this job can be retried manually. _Default value:_ `true` _Example:_ `false` | `permit_on_passed` | A boolean value that defines whether or not this job can be retried after it has passed. _Example:_ `false` | `reason` | A string displayed in a tooltip on the **Retry** button in Buildkite. This only appears if the `allowed` attribute is set to false. 
_Example:_ `"No retries allowed on deploy steps"` ```yml steps: - label: "Tests" command: "tests.sh" retry: manual: permit_on_passed: true - wait: ~ - label: "Deploy" command: "deploy.sh" retry: manual: allowed: false reason: "Sorry, you can't retry a deployment" ``` --- ### Soft fail URL: https://buildkite.com/docs/pipelines/configure/soft-fail #### Soft fail The `soft_fail` attribute of a [command step](/docs/pipelines/configure/step-types/command-step) allows a step to exit with a non-zero status without failing the build. The step is marked as passed, and the build continues as normal. ```yml steps: - label: "Smoke tests" command: "smoke-test.sh" soft_fail: true ``` ##### Soft fail attributes The `soft_fail` attribute has the following optional attributes: | `exit_status` | The exit status number that triggers a soft fail. Accepts a single integer or `"*"` (wildcard) to match any non-zero exit status. _Example:_ `1` _Example:_ `"*"` ###### Allow all non-zero exit statuses Set `soft_fail: true` to allow any non-zero exit status to pass without failing the build: ```yml steps: - label: "Lint" command: "lint.sh" soft_fail: true ``` ###### Allow specific exit statuses Pass an array of `exit_status` values to only soft fail on particular exit codes. 
In this example, the **Tests** step soft fails on an exit code of `1`, whereas the **Multiple exit statuses** step soft fails on either `1` or `42`: ```yml steps: - label: "Tests" command: "tests.sh" soft_fail: - exit_status: 1 - label: "Multiple exit statuses" command: "other-tests.sh" soft_fail: - exit_status: 1 - exit_status: 42 ``` Use `exit_status: "*"` to match any non-zero exit status, which in this example, allows **Tests** to soft fail on any exit status: ```yml steps: - label: "Tests" command: "tests.sh" soft_fail: - exit_status: "*" ``` ##### Soft fail and dependencies Setting `soft_fail` on a step also allows steps that depend on it to run, even when [`allow_dependency_failure: false`](/docs/pipelines/configure/dependencies#allowing-dependency-failures) is set on the subsequent step. In the following example, `step-b` runs because `step-a` is soft failing. If `step-a` were to fail with a different exit code, `step-b` would not run. ```yml steps: - key: "step-a" command: echo "soft fail" && exit 42 soft_fail: - exit_status: 42 - key: "step-b" command: echo "Running" depends_on: "step-a" ``` ##### Soft fail in a build matrix You can use `soft_fail` within a [build matrix](/docs/pipelines/configure/workflows/build-matrix) `adjustments` block to soft fail specific matrix combinations: ```yml steps: - label: "Tests" command: "tests.sh" matrix: setup: os: ["linux", "windows"] arch: ["amd64", "arm64"] adjustments: - with: os: "windows" arch: "arm64" soft_fail: true ``` See also [Matrix adjustments](/docs/pipelines/configure/step-types/command-step#matrix-attributes) on the [Command step](/docs/pipelines/configure/step-types/command-step) page for more information. --- ### Build artifacts URL: https://buildkite.com/docs/pipelines/configure/artifacts #### Build artifacts Buildkite can store and retrieve build outputs as _artifacts_. In this guide, you'll learn what artifacts are, what they're used for, and how to upload and download them. 
An artifact is a file's contents and metadata, such as its original file path, an integrity verification hash, and details of the build that uploaded it. Buildkite agents upload artifacts to a storage service during a build. You can use artifacts to: - Pass files from one pipeline step to another. For example, you can build a binary in one step, then download and run that binary in a later step. - Store final assets produced by a pipeline, such as logs, reports, archives, and images. For example, you can build a static site, store the result as an archive, and fetch it later for deployment. You can choose to keep artifacts in a Buildkite-managed storage service or a third-party cloud storage service. There are several methods you can use to upload and download artifacts, summarized in the table:

| Method | Upload | Download |
| --- | --- | --- |
| Command step | Yes | No |
| Buildkite agent | Yes | Yes |
| REST API | No | Yes |

You can upload artifacts [using a pipeline step](#upload-artifacts-with-a-command-step) or by [running the `buildkite-agent artifact upload` command](#upload-artifacts-with-the-buildkite-agent). When you upload an artifact, Buildkite saves the file's contents, the complete path the file was uploaded from, and details of the build step it originated from, so you can retrieve artifacts by name, path, or build. You can download artifacts by [running the `buildkite-agent artifact download` command](#download-artifacts-with-the-buildkite-agent) or by [making a request to the artifacts REST API](#download-artifacts-with-the-buildkite-rest-api). ##### Upload artifacts with a command step Set the `artifact_paths` attribute of [a command step](/docs/pipelines/configure/step-types/command-step) to upload artifacts after the command step has finished its work. The `artifact_paths` attribute can contain an array of file paths or [glob patterns](/docs/agent/cli/reference/artifact#uploading-artifacts-artifact-upload-glob-syntax) to upload.
The following example shows a command step configured to upload all of the files in the `logs` and `coverage` directories and their subdirectories: ```yaml steps: - label: ":hammer: Tests" command: - "npm install" - "tests.sh" artifact_paths: - "logs/**/*" - "coverage/**/*" ``` ##### Upload artifacts with the Buildkite agent Within a build, run the `buildkite-agent artifact upload` command to upload artifacts. The agent's `upload` command arguments are one or more file paths and [glob patterns](/docs/agent/cli/reference/artifact#uploading-artifacts-artifact-upload-glob-syntax). The following example uploads a `build.tar.gz` file from the `pkg` directory: ```shell buildkite-agent artifact upload pkg/build.tar.gz ``` The `buildkite-agent artifact upload` command supports several options and environment variables. For complete usage instructions, read the [`buildkite-agent artifact upload`](/docs/agent/cli/reference/artifact#uploading-artifacts) documentation. ##### Download artifacts with the Buildkite agent Within a build, run the `buildkite-agent artifact download` command to download artifacts from a script. The agent's `download` command arguments are a file path or [glob pattern](/docs/agent/cli/reference/artifact#uploading-artifacts-artifact-upload-glob-syntax) and a destination path. The `buildkite-agent artifact download` command supports several options and environment variables. For complete usage instructions, read the [`buildkite-agent artifact download`](/docs/agent/cli/reference/artifact#downloading-artifacts) documentation. > 📘 Pipeline artifact access > Pipelines associated with one [cluster](/docs/pipelines/glossary#cluster) cannot access artifacts from pipelines associated with another cluster, unless a [rule](/docs/pipelines/security/clusters/rules) has been created to explicitly allow artifact access between pipelines in different clusters. 
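Putting upload and download together, a pipeline can pass a build output from one step to a later one. The following is a minimal sketch, where the build command and file name are illustrative:

```yaml
steps:
  - label: ":hammer: Build"
    command: "make package"   # illustrative build command
    artifact_paths: "pkg/build.tar.gz"
  - wait: ~
  - label: ":test_tube: Smoke test"
    command: |
      # Downloads pkg/build.tar.gz into the current job's working directory
      buildkite-agent artifact download pkg/build.tar.gz .
      tar -xzf pkg/build.tar.gz
```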
###### Example: download one artifact The agent's `download` command can fetch another job's artifact and save it to a destination path. The following example downloads an artifact from a previous job — a file named `build.tar.gz` that was in the job's `pkg` directory — to the destination `archives` directory in the working directory of the current job: ```shell buildkite-agent artifact download pkg/build.tar.gz archives ``` ###### Example: download many artifacts The agent's `download` command can download many artifacts using a glob pattern. If needed, the agent can mirror the artifact's directory structure in the destination directory. The following example downloads all of the files uploaded from the `logs` directory to the `local-logs` directory: ```shell buildkite-agent artifact download 'logs/**' local-logs/ ``` ###### Example: download an artifact from a specific step By default, the agent downloads the most recent matching artifact, no matter which build step uploaded it. If you want to get an artifact from a specific build step, use the `--step` option. The following example downloads `build.zip` from the `build` step: ```shell buildkite-agent artifact download build.zip tmp/ --step build ``` ###### Example: download an artifact from a triggering build To download artifacts from the build that [triggered](/docs/pipelines/configure/step-types/trigger-step) the current build, pass the `$BUILDKITE_TRIGGERED_FROM_BUILD_ID` [environment variable](/docs/pipelines/configure/environment-variables) to the download command: ```shell buildkite-agent artifact download "*.jpg" images/ --build $BUILDKITE_TRIGGERED_FROM_BUILD_ID ``` ##### Download artifacts with the Buildkite REST API If you want to download an artifact from outside the context of a running build or without the use of the Buildkite agent, then use the [artifacts REST API](/docs/apis/rest-api/artifacts) to list and download artifacts. 
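As a sketch, you can list a build's artifacts with an API access token that has the `read_artifacts` scope, then follow each artifact's `download_url`. The organization, pipeline, and build number below are placeholders:

```shell
# List artifacts for build 42 (placeholder org/pipeline/build values)
curl -H "Authorization: Bearer $BUILDKITE_API_TOKEN" \
  "https://api.buildkite.com/v2/organizations/my-org/pipelines/my-pipeline/builds/42/artifacts"
```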
##### Storage providers, encryption, and retention Buildkite agents upload artifacts directly to artifact storage, where they're encrypted by the storage platform. If you're using Buildkite-managed artifact storage, then your artifacts are stored in Amazon S3. At rest, artifacts are AES-256 encrypted with keys managed by AWS Key Management Service. Buildkite retains artifacts for six months before deletion. Alternatively, you can use a self-managed storage provider. Read these guides for details: - [Amazon S3](/docs/agent/cli/reference/artifact#using-your-private-aws-s3-bucket) - [Google Cloud Storage](/docs/agent/cli/reference/artifact#using-your-private-google-cloud-bucket) - [Azure Blob Storage](/docs/agent/cli/reference/artifact#using-your-private-azure-blob-container) - [Artifactory](/docs/agent/cli/reference/artifact#using-your-artifactory-instance) If you manage your own artifact storage, then you are responsible for encryption and retention planning. To track the actions of users with access to your artifacts, use the [API Access Audit](https://buildkite.com/organizations/~/api-access-audit). ##### Troubleshooting artifacts The following suggestions resolve common issues with using artifacts. ###### Multiple artifacts were found for query The `buildkite-agent artifact download` command can fail with the following error message: ``` Failed to download artifacts: GET https://agent.buildkite.com/v3/builds/776402f5-90a8-458f-9a2c-57e67c50a888/artifacts/search?query=ambiguous-file-name.txt&state=finished: 400 Multiple artifacts were found for query: `ambiguous-file-name.txt`. Try scoping by the job ID or name. ``` The error occurs when the agent tries to download a specific file by name, but cannot find a unique match. In other words, the file path was ambiguous and did not identify a single artifact with that name in the current build. For example, two previous steps uploaded a file with the same name.
To fix this error, specify the step or build that uploaded the artifact. Use the `--step` or `--build` options to narrow the search for artifacts. For an example, read [download an artifact from a specific step](#download-artifacts-with-the-buildkite-agent-example-download-an-artifact-from-a-specific-step). Alternatively, download the most recent matching file by using a glob pattern. For an example, read [download many artifacts](#download-artifacts-with-the-buildkite-agent-example-download-many-artifacts). ###### Artifacts are missing from retried jobs Artifacts from retried jobs are excluded by default, so the `buildkite-agent artifact download` command won't find them. To include artifacts from retried jobs in your search results, use `--include-retried-jobs` in the command. --- ### Build timeouts URL: https://buildkite.com/docs/pipelines/configure/build-timeouts #### Build timeouts Build timeouts limit how long a job can run before being canceled, or how long a job can wait before being picked up by an agent. If a job exceeds the time limit, the job will automatically be canceled and the build will fail. You can set timeouts on your builds through: - Command step timeouts for running jobs - Scheduled job expiration for jobs yet to be picked up Organization-level timeouts can be set in your organization's [**Pipeline Settings**](https://buildkite.com/organizations/~/pipeline-settings): ##### Command timeouts There is no separate pipeline-level timeout in Buildkite Pipelines as all timeouts are applied per [command step](/docs/pipelines/configure/step-types/command-step), not to the build as a whole. You can specify timeouts for individual command steps using the [`timeout_in_minutes`](/docs/pipelines/configure/step-types/command-step#timeout_in_minutes) attribute, or set the default and maximum timeouts at the organization or pipeline level. The **Default Command Step Timeout** sets the default timeout in minutes for all command steps in a pipeline. 
This timeout can still be overridden in a command step. The **Maximum Command Step Timeout** sets the maximum timeout in minutes for all command steps in a pipeline. Any command step without a timeout or with a timeout greater than this value will be set to this value. Timeouts take precedence in the following order: step-level timeout → pipeline default → organization default. This behavior is distinct from [scheduled job expiration](#scheduled-job-expiration). Timeouts apply to the whole job lifecycle, including hooks and artifact uploads. If a timeout is triggered while a command or hook is running, there's a 10-second grace period by default. You can change the grace period by setting the [`cancel-grace-period`](/docs/agent/self-hosted/configure#cancel-grace-period) flag. Note that command step timeouts don't apply to [trigger steps](/docs/pipelines/configure/step-types/trigger-step) or [block steps](/docs/pipelines/configure/step-types/block-step). ###### Updating timeouts during a job You can dynamically update a command job's timeout before it is finished, using the [`buildkite-agent job update` command](/docs/agent/cli/reference/job). This is useful when a job learns more about how long it should take during execution, for example, after completing a setup phase. > 📘 Minimum Buildkite agent version requirement > To update a job's timeout, version 3.118.0 or later of the Buildkite agent is required. Using earlier versions of the Buildkite agent will result in pipeline failures. To update the timeout for the current job to 20 minutes: ```bash buildkite-agent job update timeout 20 ``` The value (20 minutes) is relative to the job's start time, not the current time.
To extend the timeout of the current job: ```yml steps: - label: ":timer_clock:" command: | echo "Extending job timeout by 10 minutes" buildkite-agent job update timeout "$$(( BUILDKITE_TIMEOUT + 10 ))" timeout_in_minutes: 1 ``` To reduce the timeout of the current job: ```yml steps: - label: ":timer_clock:" command: | echo "Reducing job timeout by 10 minutes" buildkite-agent job update timeout "$$(( BUILDKITE_TIMEOUT - 10 ))" timeout_in_minutes: 30 ``` To set the timeout of a job: ```yml steps: - label: ":timer_clock:" command: | echo "Setting job's timeout to 50 minutes" buildkite-agent job update timeout 50 ``` This command can be used to reduce an existing timeout, extend it (by up to an hour), or set a timeout on a job that doesn't have one. Updated timeouts are enforced on the server and can take up to two minutes to take effect. Existing timeouts cannot be removed. Timeout updates are recorded in the job's activity timeline, showing the previous and new timeout values. The following limits apply to updating a job's timeout: - The timeout value must be a positive integer, specified in minutes. The timeout is relative to the job's start time. - Only command jobs can be updated. - Jobs can only be updated before they finish. Once a job reaches a terminal state, the timeout can no longer be changed. - Timeouts cannot be removed. - The updated timeout can't exceed the pipeline's **Maximum Command Step Timeout**, the organization's **Maximum Command Step Timeout**, or four hours on the Personal plan (Pro and Enterprise plans have no plan-level limit). If the updated timeout exceeds any of these limits, the update is rejected. - Jobs that have an initial timeout (`$BUILDKITE_TIMEOUT`) can extend their timeout by up to 60 minutes beyond that initial value. The initial timeout can come from a step-level `timeout_in_minutes`, a pipeline or organization default, a maximum timeout, or a plan-level limit.
For example, a job with an initial timeout of 90 minutes can be extended to a maximum of 150 minutes. Repeated updates can't exceed this limit. Jobs without an initial timeout are not subject to this limit. ##### Scheduled job expiration Scheduled job expiration helps you avoid having lingering jobs that are never assigned to an agent or run. This expiration time is calculated from when a job is created, not scheduled. By default, jobs expire (are canceled) when not picked up for 30 days. This will cause the corresponding build to fail. You can override the default by setting a shorter value in your organization's [**Pipeline Settings**](https://buildkite.com/organizations/~/pipeline-settings) page. Scheduled job expiration limits should not be confused with [scheduled builds](/docs/pipelines/configure/workflows/scheduled-builds). A scheduled build's jobs will still go through the [build states](/docs/pipelines/configure/defining-steps#build-states), and the timeout will apply once its individual jobs are in the scheduled state waiting for agents. > 📘 Delays in job expiration > The job expiration process runs hourly at 5 minutes past the hour. If a job's scheduled expiration time hasn't been reached when the process runs, the job will only expire when the process runs again in the next hour. --- ### Pipeline tags URL: https://buildkite.com/docs/pipelines/configure/tags #### Pipeline tags Pipeline tags allow you to tag and search for your pipelines using the search bar. Tags are beneficial when you have many pipelines and would like to group and filter them quickly. ##### Using tags You can assign each pipeline up to 10 unique tags. A tag can comprise emoji and text, up to 128 characters. It's recommended to use an emoji to make the tag stand out, and to keep the tag text short and clear. You can tag a pipeline by navigating to the pipeline's **Settings** or using the API.
In REST, use the `tags` property on the [Pipeline REST API](/docs/apis/rest-api/pipelines). In GraphQL, use the `tag` field on the [`pipelineUpdate` mutation](/docs/apis/graphql/schemas/mutation/pipelineupdate). To use the same tag across multiple pipelines, you must create the same tag on each pipeline. --- ### Build retention URL: https://buildkite.com/docs/pipelines/configure/build-retention #### Build retention Each [Buildkite plan](https://buildkite.com/pricing) has a maximum build retention period. Once builds reach the end of the retention period, their data is removed from Buildkite. The following diagram shows the lifecycle of build data by plan. ##### Retention periods

| Plan | Retention period | Supports build exports |
| --- | --- | --- |
| Personal plan | 90 days | No |
| Pro plan | 1 year | No |
| Enterprise plan | 1 year | Yes |

Retention periods are set according to an organization's plan, as shown in the previous table. Per-pipeline retention settings are not supported. ##### Exporting build data > 📘 Enterprise plan feature > Exporting build data is only available on an [Enterprise](https://buildkite.com/pricing) plan. If you need to retain build data beyond the retention period in your [Buildkite plan](https://buildkite.com/pricing), you can have Buildkite export the data to a private Amazon S3 bucket or Google Cloud Storage (GCS) bucket. As build data is removed, Buildkite exports JSON representations of the builds to the bucket you provide. To learn more, see [Build exports](/docs/pipelines/governance/build-exports). --- ### Public pipelines URL: https://buildkite.com/docs/pipelines/configure/public-pipelines #### Public pipelines If you're working on an open source project and want anyone to be able to see your builds, you can make your pipeline public. > 📘 Prerequisites > Before a pipeline can be made public, a Buildkite organization administrator must enable public pipeline creation in the **Organization Settings**.
To do so, go to the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page, select **Pipelines** > **Settings**, scroll down to the **Public Pipelines** section, and select **Enable Public Pipeline Creation**. Making a pipeline public provides read-only public/anonymous access to: - Pipeline build pages - Pipeline build logs - Pipeline build artifacts - Pipeline build environment config - Agent version and name ##### Make a pipeline public using the UI Make a pipeline public in the pipeline's **Settings** > **General** page. ##### Create a public pipeline using the GraphQL API Use the following mutation in the [GraphQL API](/docs/apis/graphql-api) to create a new public pipeline: ```graphql mutation { pipelineCreate(input: { organizationId: $organizationID, name: $pipelineName, visibility: PUBLIC, repository: { url: "git@github.com:blerp/goober.git" }, steps: { yaml: "steps:\n- command: true" } }) { pipeline { public # true visibility # PUBLIC organization { public # true } } } } ``` --- ### Using build meta-data URL: https://buildkite.com/docs/pipelines/configure/build-meta-data #### Using build meta-data In this guide, we'll walk through using the Buildkite agent's [meta-data command](/docs/agent/cli/reference/meta-data) to store and retrieve data between different steps in a build pipeline. Meta-data is intended to store data to be used across steps. For example, you can tag a build with the software version it deploys so that you can later identify which build deployed a particular version. Meta-data is stored at the build level, not the job level. When a job sets a meta-data key, that key-value pair is shared across the entire build. If multiple jobs set the same key, the build retains whichever value was written last. Meta-data values are each restricted to a maximum of 100 kilobytes (KB). However, meta-data values larger than 1 KB are discouraged.
For any such values over 1 KB, use an [artifact](/docs/pipelines/configure/artifacts) instead. > 🚧 > You should not store secrets or other sensitive information in build meta-data, as it is not a secure medium and its contents can be viewed through the Buildkite interface. Instead, please follow the guidance in [Managing pipeline secrets](/docs/pipelines/security/secrets/managing) for best practices on storing and using secrets in your pipelines. ##### Setting data The agent's `meta-data` command is the only method for setting meta-data. You can run the command from the command line or in a script. To set meta-data in the meta-data store, use the `set` command with a key/value pair: ```bash buildkite-agent meta-data set "release-version" "1.1" ``` This command results in the value "1.1" being associated with the key "release-version" in the meta-data store. Once meta-data is set for a build, it cannot be deleted. It can only be updated using the `set` command. ##### Getting data You can retrieve data from the meta-data store either using the command line or in a script. As when setting data, both of these methods use the `buildkite-agent` CLI with the `meta-data` command. A value can only be retrieved from the store after it has been set, so ensure that any step that gets data runs only after the step that sets the data has completed. One way to enforce this ordering is to use a [wait step](/docs/pipelines/configure/step-types/wait-step). To retrieve meta-data, use the `get` command with the previously set key: ```bash buildkite-agent meta-data get "release-version" ``` Assuming that the "release-version" key was set with the value from the earlier setting data example, this command will return "1.1". If there are no keys matching the name "release-version", it will return an error. > 📘 Default values > The `get` command has a `default` flag. You can use this to return a value in the case that the key has not been set.
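Putting this together, the following pipeline sketch sets a key in one step, waits, then reads it back with a fallback via the `--default` flag. The key and values are illustrative:

```yaml
steps:
  - label: "Set version"
    command: buildkite-agent meta-data set "release-version" "1.1"
  - wait: ~
  - label: "Read version"
    command: |
      # Falls back to "unknown" if the key was never set
      VERSION=$(buildkite-agent meta-data get "release-version" --default "unknown")
      echo "Deploying version $VERSION"
```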
##### Using meta-data on the dashboard You can add `meta_data[…]=…` query parameters to a pipeline's builds URL to filter the list of builds shown down to only those with matching meta-data. For example, to list builds in a pipeline that have a "release-version" of "1.1", you can use: ```url https://buildkite.com/{my-organization}/{my-pipeline}/builds?meta_data[release-version]=1.1 ``` You can also append `/meta-data` to the URL of a build to access a page that lists all the meta-data associated with that build: ```url https://buildkite.com/{my-organization}/{my-pipeline}/builds/{build-number}/meta-data ``` ##### Using meta-data in the REST API You can use meta-data to identify builds when searching for them in the REST API. For more information, see the [Builds API in the Buildkite REST API documentation](/docs/apis/rest-api/builds). ##### Using build input parameters When a pipeline's steps begin with a `block` or `input` step, any fields will be rendered in the **New Build** dialog. For example, a pipeline with the slug `activities` in an organization whose slug is `demo` has the following definition: ```yaml steps: - block: What would you like to see? fields: - text: Which city? key: city - select: What activities? key: activities multiple: true options: - label: Restaurants value: restaurants - label: Museums value: museums - label: Sports value: sports ``` The **New Build** dialog will include the `block` or `input` step fields, and will set the meta-data fields on the new build. Meta-data fields can also be pre-populated using query string parameters.
``` https://buildkite.com/organizations/{organization-slug}/pipelines/{pipeline-slug}/builds/new?meta_data[{key}]={value} ``` You can pre-populate the input fields through such URLs, which you can bookmark for subsequent use: ``` https://buildkite.com/organizations/demo/pipelines/activities/builds/new?meta_data[city]=Melbourne&meta_data[activities]=restaurants,sports ``` Using meta-data to pre-populate fields in this way carries some considerations regarding how the input step behaves. Learn more about this in the [Input step](/docs/pipelines/configure/step-types/input-step) page. ##### Special meta-data Meta-data keys starting with `buildkite:` are reserved for special values provided by Buildkite. These may be generated on request. ###### buildkite:webhook The special `buildkite:webhook` meta-data key can be used to get the body of the webhook that triggered the current build. For example, you can access the [GitHub](/docs/pipelines/source-control/github) push webhook payload in a command step: ```yaml steps: - command: | WEBHOOK="$(buildkite-agent meta-data get buildkite:webhook)" STARGAZERS="$(jq .repository.stargazers_count <<< "$WEBHOOK")" echo "The current repository has $STARGAZERS stargazers 💫" ``` This value will only be available for builds triggered by a webhook, and only as long as the full webhook body remains cached — typically for 7 days. ##### Further documentation See the [Buildkite agent build meta-data documentation](/docs/agent/cli/reference/meta-data) for a full list of options and details of Buildkite's meta-data support.
---

### Managing log output

URL: https://buildkite.com/docs/pipelines/configure/managing-log-output

#### Managing log output

Buildkite uses our open-source [terminal-to-html](https://github.com/buildkite/terminal-to-html) tool to provide you with the best possible terminal rendering experience for your build logs, including ANSI terminal emulation to ensure spinners, progress bars, colors and emojis are rendered beautifully.

##### Grouping log output

You can organize your build output into collapsible sections using different grouping methods, each providing a distinct visual presentation and default behavior. Build output appears under the most recently defined heading until you define a new heading.

###### Collapsed groups

Use `---` to create collapsed groups that users can expand to view details:

```bash
echo "--- A section of the build"
```

###### De-emphasized groups

Use `~~~` to create groups that by default are collapsed and visually de-emphasized through the use of non-bold text (can be useful for less important output):

```bash
echo "~~~ An unimportant section of the build"
```

###### Expanded groups

Use `+++` to create groups that are open by default:

```bash
echo "+++ A section of the build"
```

If no group is explicitly expanded (`+++`), then the last collapsed regular group (`---`) gets expanded instead. If you _really_ want all groups to be collapsed, add an empty expanded group (using a single space character) at the end of your build:

```bash
echo -e "+++ \0040" # The \0040 escape sequence is the octal code for a single space character
```

###### Advanced grouping techniques

This section covers build log output grouping methods that go beyond formatting, collapsing, or expanding, and can be used for better visual filtering of information, especially when it comes to long logs.

###### Opening previous groups

If you'd like to open the previously defined group, use `^^^ +++`.
This is useful if a command within a group fails, and you'd like to have the group already open when you view the log.

```bash
echo "--- Bundling"
bundle
if [[ $? -ne 0 ]]; then
  echo "^^^ +++"
  echo "Bundler failed, oh no!!"
fi
```

###### Creating section boundaries

Different group types can be combined to create defined start and end markers for your log output. This is useful for creating distinct sections with clear boundaries:

```bash
echo "--- Starting deployment..."
./scripts/deployment.sh
echo "~~~ Deployment complete!"

echo "--- Running tests..."
./scripts/tests.sh
echo "~~~ Tests succeeded!"
```

You can even include colors and emojis!

```bash
echo -e "--- Running \033[33mspecs\033[0m :cow::bell:"
```

##### ANSI timestamps and disabling them

By default, each line of log output begins with an ANSI timestamp. If you are running [self-hosted agents](/docs/pipelines/architecture#self-hosted-hybrid-architecture), you can prevent them from generating ANSI timestamps at the start of each line of log output by starting these agents with the [`--no-ansi-timestamps` option](/docs/agent/cli/reference/start#no-ansi-timestamps).

##### Log output limits

If your build output exceeds 2MB then we'll only show the last 2MB of it in the rendered terminal output on your build page. In addition, your log file must not exceed 100MB or it may fail to upload. If your log exceeds 2MB then we highly recommend reconfiguring your build tools to filter out unnecessary lines. This isn't always possible, so you can use the below techniques to store and filter your log.

##### Storing the original log

One method for storing the original log is the Unix `tee` command. It allows you to store the output stream of a command to a file while passing it straight through unchanged to the next command.

```bash
#!/bin/bash
set -euo pipefail

your_build_command | tee build.log
```

When this script is run it will store the original output of `your_build_command` to the file `build.log`.
To store this file alongside your build, add the `artifact_paths` attribute to the command step running your script:

```yaml
steps:
  - command: build.sh
    artifact_paths: "build.log"
```

When your build is finished the agent will upload `build.log` as a build artifact, which will be downloadable from the "Artifacts" tab on your build page.

> 📘
> The `tee` command almost always exits with a code of `0`, so the pipeline's exit status won't reflect a failure in the preceding command. Capturing the status of the preceding command with `"${PIPESTATUS[0]}"` may help with error debugging.

##### Filtering with grep

Grep is a Unix tool to help you filter lines of text that match a pattern. For example, the following script only sends Buildkite the matching lines as your log output, whilst storing the original log for artifact uploading.

```bash
#!/bin/bash
set -euo pipefail

your_build_command | tee build.log | grep 'some pattern'
```

##### Truncating with tail

Tail is a Unix tool that returns the last portion of a file. This is useful if your log output is exceeding our hard limit of 100MB. For example, the following script only sends Buildkite the last 90MB as your log output, whilst storing the original log for artifact uploading.

```bash
#!/bin/bash
set -euo pipefail

your_build_command | tee build.log | tail -c90000000
```

##### Improving Xcode logs with xcpretty

[xcpretty](https://github.com/supermarin/xcpretty) is an open-source tool that helps to reduce, format and color-code your [Xcode](http://developer.apple.com/xcode) build output. Once you've installed xcpretty you can pipe the output of xcodebuild into it:

```bash
#!/bin/bash
set -euo pipefail

xcodebuild | tee -a build.log | xcpretty -c
```

Make sure to set the `-o pipefail` option in your build script as above, otherwise the build failure status might not be passed through correctly.
##### Encryption and security

Buildkite has zero access to your source code in the pipelines and only receives and stores the log output of the builds and build artifacts in encrypted form. Logs are AES-encrypted, and the build artifacts are encrypted in transit and at rest using AWS encryption (KMS or S3 SSE). As a result, the keys cannot be extracted on Buildkite's side, and the AWS solutions help mitigate zero-day attacks and other security issues. Beyond this, security measures within your own infrastructure are under your control. If you choose to [host your build artifacts](/docs/agent/cli/reference/artifact#using-your-private-aws-s3-bucket) yourself, they end up in your private AWS bucket. If you are a Buildkite customer on the [Enterprise](https://buildkite.com/pricing) plan, you can also set up a private AWS S3 build log archive location and store the logs in your private bucket.

To further tighten the security in a Buildkite organization, you can use the [API Access Audit](https://buildkite.com/organizations/~/api-access-audit) to track the actions of the users who have API access tokens that can access your organization's data using the REST and GraphQL APIs.

##### Redacted environment variables

Agents can redact the values of environment variables whose names match common patterns for passwords and other secure information before the build log is uploaded to Buildkite. If the environment variable's value is shorter than the minimum length of 6 bytes, then this value will not be redacted.

The default environment variable name patterns are:

- `*_PASSWORD`
- `*_SECRET`
- `*_TOKEN`
- `*_PRIVATE_KEY`
- `*_ACCESS_KEY`
- `*_SECRET_KEY`
- `*_CONNECTION_STRING` (added in Agent v3.53.0)

With these defaults, if you have an environment variable `MY_SECRET="topsecret"` and run a command that outputs `This is topsecret info`, the log output will be `This is [REDACTED] info`.
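As a local illustration of how these glob patterns match variable names, here is a sketch using a shell `case` pattern. This is only an approximation for experimenting with names locally; the actual redaction is performed by the agent process, not by your build script:

```bash
#!/bin/bash
# Approximate the agent's default redacted-vars globs with a case pattern.
matches_default_redaction() {
  case "$1" in
    *_PASSWORD|*_SECRET|*_TOKEN|*_PRIVATE_KEY|*_ACCESS_KEY|*_SECRET_KEY|*_CONNECTION_STRING)
      return 0 ;;
    *)
      return 1 ;;
  esac
}

matches_default_redaction "MY_SECRET"   && echo "MY_SECRET: would be redacted"
matches_default_redaction "MY_HOSTNAME" || echo "MY_HOSTNAME: would not be redacted"
```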
You can append additional patterns or replace the default patterns entirely by [setting redacted-vars](/docs/agent/self-hosted/configure#redacted-vars) on your agent. For example, if you wanted to redact the value of `FOO` in your log output and keep the existing default patterns, the configuration setting should look like the following:

```sh
redacted-vars="*_PASSWORD, *_SECRET, *_TOKEN, *_PRIVATE_KEY, *_ACCESS_KEY, *_SECRET_KEY, *_CONNECTION_STRING, *_SOME_VALUE, FOO"
```

> 📘 Setting environment variables
> Note that if you _set_ or _interpolate_ a secret environment variable in your `pipeline.yml` it is not redacted, but doing that is [not recommended](/docs/pipelines/security/secrets/risk-considerations#storing-secrets-in-your-pipeline-dot-yml).

##### Private build log archive storage

By default, build logs are stored in encrypted form in Buildkite's managed Amazon S3 buckets, but you can instead store the archived build logs in your private AWS S3 bucket. If you decide to store the logs in your S3 bucket, they're encrypted using SSE-S3. SSE-KMS encryption is not supported. After storing the logs in your S3 bucket, Buildkite does not retain a copy of the logs.

> 📘 Enterprise plan feature
> This feature is only available to customers on the [Enterprise](https://buildkite.com/pricing) plan and is applied at the Buildkite organization level. If you have multiple organizations, send support a list of the organizations where this feature should be enabled.

The folder structure and file format are as follows and are not customizable:

```text
{ORGANIZATION_UUID}/{BUILDKITE_PIPELINE_ID}/{BUILDKITE_BUILD_ID}/{BUILDKITE_JOB_ID}.log
```

To set up private build log archive storage:

1. Create an Amazon S3 bucket in the *us-east-1* region (the only region that is currently supported).
2. Provide a *read* and *write* access permission policy for Buildkite's AWS account `032379705303`.
Here's an example policy that contains an Amazon S3 bucket configuration with Buildkite's account number in it. Replace the `my-bucket` and `my-prefix` placeholders with your Amazon S3 bucket information:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBuildkiteToWriteObjectsInLogsPrefix",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::032379705303:root" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/my-prefix/*",
      "Condition": {
        "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
      }
    },
    {
      "Sid": "AllowBuildkiteToReadObjectsInLogsPrefix",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::032379705303:root" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/my-prefix/*"
    },
    {
      "Sid": "AllowBuildkiteToDeleteObjectsInLogsPrefix",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::032379705303:root" },
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::my-bucket/my-prefix/*"
    },
    {
      "Sid": "AllowBuildkiteToListBucketInLogsPrefix",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::032379705303:root" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {
        "StringLike": { "s3:prefix": "my-prefix/*" }
      }
    }
  ]
}
```

3. Reach out to [support@buildkite.com](mailto:support@buildkite.com) and provide the address of your Amazon S3 bucket. The Buildkite engineering team will continue the configuration to complete the setup.

---

### Links and images in log output

URL: https://buildkite.com/docs/pipelines/configure/links-and-images-in-log-output

#### Links and images in log output

You can embed links, external images, and images generated by your builds directly into your logs using special ANSI escape codes. ANSI escape codes are used for many tasks in UNIX terminals; setting text color, for example, uses an ANSI escape code.
Buildkite supports most standard ANSI escape codes and a few special ones, thanks to our terminal output processor, [`terminal-to-html`](https://github.com/buildkite/terminal-to-html).

##### Links

You can embed clickable links to Buildkite or other web pages by using ANSI escape code `1339`. The following Bash function takes a URL and optional link text and will output the correct escape code sequence:

```bash
function inline_link {
  LINK=$(printf "url='%s'" "$1")

  if [ $# -gt 1 ]; then
    LINK=$(printf "%s;content='%s'" "$LINK" "$2")
  fi

  printf '\033]1339;%s\a\n' "$LINK"
}
```

You can use it like so:

```bash
inline_link 'https://buildkite.com/'
```

Or, to use a custom label:

```bash
inline_link 'https://buildkite.com/' 'Buildkite'
```

You can also link to an uploaded artifact by using the `artifact://` URL syntax:

```bash
inline_link 'artifact://tmp/images/omg.gif'
```

##### Images

###### Syntax for inlining images

The syntax for inlining images uses ANSI escape code `1338`. A `url` is required, and you can optionally specify an `alt` attribute to describe what the image is. The following Bash function takes an image URL and alternative text and will output the correct escape code sequence:

```bash
function inline_image {
  printf '\033]1338;url='"$1"';alt='"$2"'\a\n'
}
```

You can use it like so:

```bash
inline_image 'https://media0.giphy.com/media/8Ry7iAVwKBQpG/giphy.gif' 'Rainbows'
```

When rendered in Buildkite (using our open-source [Terminal tool](http://buildkite.github.io/terminal-to-html/)), the image appears inline in the log. When you run the script locally you won't see any output because your terminal will ignore the escape code. If you pipe your build script to `more` you can see the raw escape codes.

###### Inlining build artifact images

You can inline artifact images by using the `artifact://` URL syntax.
For example, you can inline an uploaded artifact image by using the following URL:

```bash
inline_image 'artifact://tmp/images/omg.gif' 'OMG'
```

Be careful to ensure the part of the URL after `artifact://` exactly matches the path you see in the Artifacts tab (for example, it can't have a preceding `./`). The image artifact does not have to be uploaded at the time it's written to the build log. If the artifact has not been uploaded you'll see a loading placeholder, and as soon as it's ready the image will automatically appear.

> 📘
> If you are using private artifacts, your images need to be [base64-encoded](#images-base64-encoded-images) so that Buildkite can access and inline them.

###### Base64-encoded images

If you want to embed an image encoded in base64, you can use [iTerm's image format](https://iterm2.com/documentation-images.html), but be mindful of the [log output limits](/docs/pipelines/configure/managing-log-output#log-output-limits). Unless you're embedding images for a specific reason, it's better to upload the image as a [build artifact](/docs/pipelines/configure/artifacts) and reference it using the `artifact://` URL.

###### Library support

The [capybara-inline-screenshot](https://github.com/buildkite/capybara-inline-screenshot) Ruby gem will automatically inline screenshots of your integration test failures and also supports the iTerm image format for viewing failures directly in your terminal. When run under CI it automatically uses the `artifact://` URL format.

---

### Notify

URL: https://buildkite.com/docs/pipelines/configure/notify

#### Notify

The `notify` attribute allows you to trigger build notifications to different services. You can also choose to conditionally send notifications based on pipeline events like build state.

Add notifications to your pipeline with the `notify` attribute. This sits at the same level as `steps` in your pipeline YAML.
For example, to send a notification email every time a build is created:

```yaml
steps:
  - command: "tests.sh"

notify:
  - email: "dev@acmeinc.com"
```

Available notification types:

- [Basecamp](#basecamp-campfire-message): Post a message to a Basecamp Campfire. Requires a Basecamp Chatbot to be configured in your Basecamp organization.
- [Email](#email): Send an email to the specified email address.
- [GitHub commit status](#github-commit-status): Create a GitHub commit status.
- [GitHub check](#github-check): Create a GitHub check status.
- [PagerDuty](#pagerduty-change-events): Send a change event to PagerDuty. Requires a PagerDuty integration to be configured.
- [Slack](#slack-channel-and-direct-messages): Post a message to the specified Slack Channel. Requires a Slack Workspace or individual Slack notification services to be enabled for each channel.
- [Webhooks](#webhooks): Send a notification to the specified webhook URL.

These types of notifications are available at the following levels.

| Build | Step |
| --- | --- |
| Basecamp | Basecamp |
| Email | |
| GitHub commit status | GitHub commit status |
| GitHub check | GitHub check |
| PagerDuty | |
| Slack | Slack |
| Webhook | |

##### Conditional notifications

To only trigger notifications under certain conditions, add the `if` attribute. For example, the following email notification will only be triggered if the build passes:

```yaml
steps:
  - command: "tests.sh"

notify:
  - email: "dev@acmeinc.com"
    if: build.state == "passed"
```

> 📘
> `build.state` conditionals cannot be used on step-level notifications as a step cannot know the state of the entire build.

See [Supported variables](/docs/pipelines/configure/conditionals#variable-and-syntax-reference-variables) for more conditional variables that can be used in the `if` attribute.

###### Step-level conditional notifications

You can use conditional notifications at the step level to send notifications only when specific step outcomes occur.
This is useful for immediate notifications when individual steps complete:

```yaml
steps:
  - command: "important-validation.sh"
    notify:
      - slack:
          channels: ["#engineering"]
          message: "Critical validation failed, please fix."
        if: step.outcome == "hard_failed"
```

See [Supported variables](/docs/pipelines/configure/conditionals#variable-and-syntax-reference-variables) for more conditional variables that can be used in the `if` attribute.

> 🚧
> To trigger conditional notifications to a Slack channel, you will first need to configure [Conditional notifications for Slack](/docs/pipelines/integrations/notifications/slack#conditional-notifications).

##### Basecamp Campfire message

To send notifications to a Basecamp Campfire, you'll need to set up a chatbot in Basecamp as well as adding the notification to your `pipeline.yml` file. Basecamp admin permission is required to set up your chatbot.

> 🚧
> Campfire messages can only be sent using Basecamp 3.

1. Add a [chatbot](https://m.signalvnoise.com/new-in-basecamp-3-chatbots/) to the Basecamp project or team that you'll be sending notifications to.
1. Set up your chatbot with a name and an optional URL. If you'd like to include an image, you can find the Buildkite logo in our [Brand assets](https://buildkite.com/brand-assets).
1. On the next page of the chatbot setup, copy the URL that Basecamp provides in the `curl` code snippet.
1.
Add a Basecamp notification to your pipeline using the `basecamp_campfire` attribute of the `notify` YAML block and the URL copied from your Basecamp chatbot: ```yaml steps: - command: "tests.sh" notify: - basecamp_campfire: "https://3.basecamp.com/1234567/integrations/qwertyuiop/buckets/1234567/chats/1234567/lines" ``` You can also add Basecamp notifications at the step level: ```yaml steps: - label: "Example Test" command: "tests.sh" notify: - basecamp_campfire: "https://3.basecamp.com/1234567/integrations/qwertyuiop/buckets/1234567/chats/1234567/lines" ``` The `basecamp_campfire` attribute accepts a single URL as a string. Build-level Basecamp notifications happen at the following [events](/docs/apis/webhooks/pipelines#events), unless you restrict them using [conditionals](/docs/pipelines/configure/notify#conditional-notifications): - `build created` - `build started` - `build blocked` - `build finished` - `build skipped` Step-level Basecamp notifications happen at the following [events](/docs/apis/webhooks/pipelines#events): - `step.finished` - `step.failing` ##### Email Add an email notification to your pipeline using the `email` attribute of the `notify` YAML block: ```yaml notify: - email: "dev@acmeinc.com" ``` You can only send email notifications on entire pipeline [events](/docs/apis/webhooks/pipelines#events), specifically upon `build.failing` and `build.finished`. Restrict notifications to finished builds by adding a [conditional](#conditional-notifications): ```yaml notify: - email: "dev@acmeinc.com" if: build.state != "failing" ``` The `email` attribute accepts a single email address as a string. 
To send notifications to more than one address, add each address as a separate email notification attribute: ```yaml steps: - command: "tests.sh" notify: - email: "dev@acmeinc.com" - email: "sre@acmeinc.com" - email: "qa@acmeinc.com" ``` ##### GitHub commit status Pipelines using [a GitHub repository](/docs/pipelines/source-control/github) have built-in [GitHub commit status](https://docs.github.com/en/rest/commits/statuses) integration. However, you can add custom commit statuses using notifications. GitHub commit statuses appear as simple pass/fail indicators on commits and pull requests. For more advanced features like detailed output and annotations, consider using a [GitHub check](#github-check) instead. > 📘 Requirements > GitHub notifications require a full 40-character commit SHA. Builds with short commit SHA values or `HEAD` references will not trigger notifications until the commit SHA is resolved. > For more information on customizing commit statuses, see [Customizing commit statuses](/docs/pipelines/source-control/github#customizing-commit-statuses) in the GitHub integration documentation. Add a GitHub commit status notification to your pipeline using the `github_commit_status` attribute of the `notify` YAML block: ```yaml steps: - command: "tests.sh" notify: - github_commit_status: context: "buildkite/test" ``` You can also add GitHub commit status notifications at the step level: ```yaml steps: - label: "Tests" command: "tests.sh" notify: - github_commit_status: context: "buildkite/tests" ``` ###### GitHub commit status attributes The `github_commit_status` attribute supports the following options: - `context`: A string label to differentiate this status from other statuses. Defaults to `buildkite/[pipeline-slug]` for build-level notifications. For step-level notifications, the context is automatically generated based on the step. - `blocked_builds_as_pending`: A boolean value that determines how blocked builds are reported. 
When `true`, blocked builds are reported as "pending". When `false`, blocked builds are reported as "success". Defaults to `false`. This option is only available for build-level notifications. To report blocked builds as pending: ```yaml notify: - github_commit_status: context: "buildkite/deploy" blocked_builds_as_pending: true steps: - command: "tests.sh" - block: "Deploy to production" - command: "deploy.sh" ``` Build-level GitHub commit status notifications happen at the following [events](/docs/apis/webhooks/pipelines#events), unless you restrict them using [conditionals](/docs/pipelines/configure/notify#conditional-notifications): - `build.failing` - `build.finished` Step-level GitHub commit status notifications happen at the following [events](/docs/apis/webhooks/pipelines#events): - `step.failing` - `step.finished` ##### GitHub check Create a [GitHub check](https://docs.github.com/en/rest/checks) to provide detailed feedback on builds and steps with rich formatting, annotations, and summaries. This requires the pipeline is configured to use [a GitHub repository](/docs/pipelines/source-control/github) with the GitHub App integration. GitHub checks provide richer status information than commit statuses, including the ability to display detailed output, annotations, and custom formatting. Unlike commit statuses, GitHub checks can show step-by-step progress, include formatted text and links, and provide inline code annotations. > 📘 Requirements > GitHub checks require the GitHub App integration. If you're using OAuth-based GitHub integration, use [GitHub commit status](#github-commit-status) notifications instead. > GitHub notifications require a full 40-character commit SHA. Builds with short commit SHA values or `HEAD` references will not trigger notifications until the commit SHA is resolved. 
Add a GitHub check notification to your pipeline using the `github_check` attribute of the `notify` YAML block: ```yaml steps: - command: "tests.sh" notify: - github_check: name: "Test Suite" ``` You can also add GitHub check notifications at the step level: ```yaml steps: - label: "Tests" command: "tests.sh" notify: - github_check: name: "Unit Tests" output: title: "Test Results" summary: "Detailed test execution summary" ``` ###### GitHub check attributes The `github_check` attribute supports the following options: - `name`: The name of the check. Defaults to the pipeline name for build-level notifications, or auto-generated based on the step label/key for step-level notifications. - `output`: An object containing detailed output information: `title` (a short title for the check output), `summary` (a summary of the check results), `text` (detailed information about the check results, supports Markdown), and `annotations` (an array of annotation objects for inline code comments). ###### GitHub check annotations For step-level notifications, you can include annotations that appear as inline comments on specific lines of code in pull requests: ```yaml steps: - label: "Lint" command: "lint.sh" notify: - github_check: name: "Code Linting" output: annotations: - path: "src/main.js" start_line: 15 end_line: 15 annotation_level: "warning" message: "Missing semicolon" ``` Each annotation object supports: - `path`: The file path relative to the repository root - `start_line`: The line number where the annotation starts - `end_line`: The line number where the annotation ends - `annotation_level`: The level of the annotation (`notice`, `warning`, or `failure`) - `message`: The annotation message - `start_column` (optional): The column number where the annotation starts - `end_column` (optional): The column number where the annotation ends ###### Dynamic GitHub check updates For step-level GitHub check notifications, you can dynamically update the check output during step 
execution using the `buildkite-agent step update` command:

```bash
# Update the check title
buildkite-agent step update "notify.github_check.output.title" "Updated Title"

# Update the check summary
buildkite-agent step update "notify.github_check.output.summary" "Build completed successfully"

# Update the check text with detailed results
buildkite-agent step update "notify.github_check.output.text" "## Test Results\n\n✅ All tests passed"

# Add annotations (append mode)
buildkite-agent step update "notify.github_check.output.annotations" '[{"path":"src/main.js","start_line":10,"end_line":10,"annotation_level":"warning","message":"Consider refactoring this function"}]' --append
```

This is particularly useful for displaying test results, code analysis findings, or other dynamic content that becomes available during the build process.

Build-level GitHub check notifications happen at the following [events](/docs/apis/webhooks/pipelines#events), unless you restrict them using [conditionals](/docs/pipelines/configure/notify#conditional-notifications):

- `build.finished`
- `build.failing`

Step-level GitHub check notifications happen at the following [events](/docs/apis/webhooks/pipelines#events):

- `step.failing`
- `step.finished`

##### PagerDuty change events

If you've set up a [PagerDuty integration](/docs/pipelines/integrations/notifications/pagerduty) you can send change events from your pipeline using the `pagerduty_change_event` attribute of the `notify` YAML block:

```yaml
steps:
  - command: "tests.sh"

notify:
  - pagerduty_change_event: "636d22Yourc0418Key3b49eee3e8"
```

PagerDuty change event notifications happen at the following [event](/docs/apis/webhooks/pipelines#events):

- `build finished`

Restrict notifications to passed builds by adding a [conditional](#conditional-notifications):

```yaml
steps:
  - command: "tests.sh"

notify:
  - pagerduty_change_event: "636d22Yourc0418Key3b49eee3e8"
    if: "build.state == 'passed'"
```

##### Slack channel and direct messages

You can
set notifications:

- On step status and other non-build events, by extending your Slack or Slack Workspace notification service with the `notify` attribute in your `pipeline.yml`.
- On build status events in the Buildkite interface, by using your Slack notification service's **Build state filtering** settings.

Before adding a `notify` attribute to your `pipeline.yml`, ensure a Buildkite organization admin has set up either the [Slack Workspace](/docs/pipelines/integrations/notifications/slack-workspace) notification service (a once-off configuration for each workspace), or the required [Slack](/docs/pipelines/integrations/notifications/slack) notification services, to send notifications to a channel or a user. Buildkite customers on the [Enterprise](https://buildkite.com/pricing) plan can also select the [**Manage Notifications Services**](https://buildkite.com/organizations/~/security/pipelines) checkbox to allow their users to create, edit, or delete notification services.

- The _Slack Workspace_ notification service requires a once-off configuration (only one per Slack workspace) in Buildkite, and then allows you to notify specific Slack channels or users, or both, directly within relevant pipeline steps.
- The _Slack_ notification service requires you to first configure one or more of these services for a channel or user, along with the pipelines, branches and build states that these channels or users receive notifications for. Once configured, your pipelines will generate automated notifications whenever the conditions in these notification services are met. You can also use the `notify` attribute in your `pipeline.yml` file for more fine-grained control, by mentioning specific channels and users in these attributes, as long as Slack notification services have been created for these channels and users.
If you mention any channels or users in a pipeline `notify` attribute for whom a Slack notification service has not yet been configured, the notification will not be sent. For a simplified configuration experience, use the [Slack Workspace](/docs/pipelines/integrations/notifications/slack-workspace) notification service instead. Learn more about these different [Slack Workspace](/docs/pipelines/integrations/notifications/slack-workspace) and [Slack](/docs/pipelines/integrations/notifications/slack) notification services within [Other integrations](/docs/pipelines/integrations). Once a Slack channel or workspace has been configured in your organization, add a Slack notification to your pipeline using the `slack` attribute of the `notify` YAML block. > 🚧 > When using only a channel name, you must specify this name in quotes. Otherwise, the `#` will cause the channel name to be treated as a comment. > If you have a Slack notification service configured for a given Slack channel and you either rename this channel, or change the channel's visibility from public to private, then you will need to set up a new Slack notification service to accommodate this modification. This issue does not affect the Slack Workspace notification service, since only one service needs to be configured for a given Slack workspace. ###### Notify a channel in all workspaces You can notify a channel in all workspaces by providing the channel name in the `pipeline.yml`. Build-level notifications to the `#general` channel of all configured workspaces: ```yaml steps: - command: "tests.sh" notify: - slack: "#general" ``` Step-level notifications to the `#general` channel of all configured workspaces: ```yaml steps: - label: "Example Test - pass" command: echo "Hello!" notify: - slack: "#general" ``` > 📘 Step-level vs build-level notifications > A step-level notify step will ignore the requirements of a build-level notification. 
If a build-level notification condition is that it runs only on `main`, a step-level notification without branch conditionals will run on all branches. ###### Notify a user in all workspaces You can notify a user in all workspaces configured through your Slack or Slack Workspace notification services by providing their username or user ID, respectively, in the `pipeline.yml`. > 📘 > Unlike Slack notification service notifications, which are sent directly to the user's Slack account, the Slack Workspace notification service sends notifications to the user's **Buildkite Builds** app in Slack. ###### Build-level notifications When using [Slack notification services](/docs/pipelines/integrations/notifications/slack), specify the user's handle (for example, `@someuser`) to notify this user about a build. The user will receive a notification in all Slack workspaces they have been configured for with this service type. For example: ```yaml notify: - slack: "@someuser" ``` or: ```yaml notify: - slack: channels: ["@someuser"] ``` or: ```yaml notify: - slack: channels: - "@someuser" ``` When using the [Slack Workspace notification service](/docs/pipelines/integrations/notifications/slack-workspace), specify the user's user ID (for example, `U12AB3C456D`) instead of their user handle (`@someuser`), to notify this user about a build in the configured Slack workspace. For example: ```yaml notify: - slack: "U12AB3C456D" ``` or: ```yaml notify: - slack: channels: ["U12AB3C456D"] ``` or: ```yaml notify: - slack: channels: - "U12AB3C456D" ``` ###### Step-level notifications When using the [Slack notification services](/docs/pipelines/integrations/notifications/slack), specify the user's handle (for example, `@someuser`) to notify this user about this step's job. The user will receive a notification in all Slack workspaces they have been configured for with this service type. For example: ```yaml steps: - label: "Example Test - pass" command: echo "Hello!" 
notify: - slack: "@someuser" ``` When using the [Slack Workspace notification service](/docs/pipelines/integrations/notifications/slack-workspace), specify the user's user ID (for example, `U12AB3C456D`) instead of their user handle (`@someuser`), to notify this user about this step's job in the configured Slack workspace. For example: ```yaml steps: - label: "Example Test - pass" command: echo "Hello!" notify: - slack: "U12AB3C456D" ``` ###### Notify a channel in one workspace You can notify one particular workspace and channel by specifying the workspace name. Build-level notifications: ```yaml steps: - command: "tests.sh" notify: # Notify channel - slack: "buildkite-community#general" ``` Step-level notifications: ```yaml steps: - label: "Example Test - pass" command: echo "Hello!" notify: # Notify channel - slack: "buildkite-community#general" ``` ###### Notify multiple teams and channels You can specify multiple teams and channels by listing them in the `channels` attribute. Build-level notifications: ```yaml notify: - slack: channels: - "buildkite-community#sre" - "buildkite-community#announcements" - "buildkite-team#monitoring" - "#general" ``` Step-level notifications: ```yaml steps: - label: "Example Test - pass" command: echo "Hello!" notify: - slack: channels: - "buildkite-community#sre" - "buildkite-community#announcements" - "buildkite-team#monitoring" - "#general" ``` ###### Custom messages You can define a custom message to send in the notification using the `message` attribute. Build-level notifications: ```yaml notify: - slack: channels: - "buildkite-community#sre" message: "SRE related information here..." - slack: channels: - "buildkite-community#announcements" message: "General announcement for the team here..." ``` Step-level notifications: ```yaml steps: - label: "Example Test - pass" command: echo "Hello!" notify: - slack: channels: - "buildkite-community#sre" message: "SRE related information here..." 
- slack: channels: - "buildkite-community#announcements" message: "General announcement for the team here..." ``` > 📘 > You can also send notifications with custom messages to specific users with the relevant syntax mentioned in [Notify a user in all workspaces](#slack-channel-and-direct-messages-notify-a-user-in-all-workspaces). Use the appropriate user notification syntax for the Slack or Slack Workspace notification service(s) you have configured. ###### Custom messages with user mentions To mention a specific user in a custom message within a notification, use the `<@userid>` annotation, substituting `userid` with the Slack user ID of the person to mention. See the [Slack documentation on mentioning users](https://api.slack.com/reference/surfaces/formatting#mentioning-users) for more details, including how to find a particular user's user ID. You can even mention user groups using the `<!subteam^subteamid>` annotation (where the first `subteam` is literal text)! See the [Slack documentation on mentioning user groups](https://api.slack.com/reference/surfaces/formatting#mentioning-groups) for more information. Build-level notifications: ```yaml notify: - slack: channels: - "#general" message: "This message will ping the user with ID U024BE7LH <@U024BE7LH>!" ``` Step-level notifications: ```yaml steps: - label: "Slack mention" command: echo "Sending a notification with a mention" notify: - slack: channels: - "#general" message: "This message will ping the group with ID SAZ94GDB8 <!subteam^SAZ94GDB8>!" ``` > 🚧 Build creator environment variable > You cannot substitute `userid` with the build creator environment variable value.
###### Conditional Slack notifications You can also add [conditionals](/docs/pipelines/configure/notify#conditional-notifications) to restrict the events on which notifications are sent: ```yaml notify: - slack: "#general" if: build.state == "passed" ``` See [Supported variables](/docs/pipelines/configure/conditionals#variable-and-syntax-reference-variables) for more conditional variables that can be used in the `if` attribute. If you are using the [Slack Workspace](/docs/pipelines/integrations/notifications/slack-workspace) integration, you can also use `pipeline.started_passing` and `pipeline.started_failing` in your `if` statements. Build-level Slack notifications happen at the following [events](/docs/apis/webhooks/pipelines#events): - `build.finished` - `build.failing` Step-level Slack notifications happen at the following [events](/docs/apis/webhooks/pipelines#events): - `step.finished` - `step.failing` An example that delivers a Slack notification when a step is [soft-failed](/docs/pipelines/configure/soft-fail): ```yaml steps: - command: exit -1 soft_fail: true notify: - slack: channels: ["#general"] message: "Step has soft failed." if: step.outcome == "soft_failed" ``` ###### Notify only on first failure The `pipeline.started_failing` conditional is designed to only send notifications when a pipeline transitions from a passing state to a failing state - not for every failed build. This prevents excessive notifications, while ensuring teams are immediately alerted when something goes wrong. ```yaml notify: - slack: "#builds" if: build.branch == "main" && pipeline.started_failing ``` ###### When to use The `pipeline.started_failing` conditional might be valuable for teams that: - Want immediate alerts when something breaks but don't want repeated notifications for consecutive failures. - Have flaky tests or environments where builds might fail multiple times in a row.
- Implement workflows where quick feedback on state changes is more important than being notified about every individual failure. ###### Notify only on first pass The `pipeline.started_passing` conditional is designed to only send notifications when a pipeline transitions from a failing state to a passing state - not for every successful build. This prevents excessive notifications, while ensuring teams are immediately alerted when issues are resolved. ```yaml notify: - slack: "#builds" if: build.branch == "main" && pipeline.started_passing ``` ###### When to use The `pipeline.started_passing` conditional might be valuable for teams that: - Need to track when build issues are resolved after failures. - Prefer to avoid notifications for builds that were already passing. ###### Notify on all failures and first successful pass This combined pattern sends notifications for all failed builds and the first successful build after failures. It provides comprehensive failure coverage, while avoiding excessive notifications for consecutive successful builds. ```yaml notify: - slack: "#builds" if: build.state == "failed" || pipeline.started_passing ``` You can add a branch filter to this conditional pattern to target specific branches: ```yaml notify: - slack: "#critical-alerts" if: (build.state == "failed" || pipeline.started_passing) && build.branch == "main" ``` Different messages can also be used to differentiate between failures and recoveries: ```yaml notify: - slack: channels: ["#team-alerts"] message: "🔴 Build failed on ${BUILDKITE_BRANCH}" if: build.state == "failed" - slack: channels: ["#team-alerts"] message: "✅ Build recovered on ${BUILDKITE_BRANCH}" if: pipeline.started_passing ``` ###### When to use These conditionals might be valuable for teams that want to be notified about each build failure but avoid notifications for consecutive successful builds. 
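These patterns compose with the custom messages and mentions described earlier. For example, a sketch of a first-failure alert on `main` that mentions an on-call user group (the channel name and group ID `SAZ94GDB8` are placeholders, and a configured Slack notification service is assumed):

```yaml
notify:
  - slack:
      channels: ["#builds"]
      # <!subteam^SAZ94GDB8> mentions a Slack user group by its ID.
      message: "Main branch started failing, <!subteam^SAZ94GDB8> please investigate."
    if: build.branch == "main" && pipeline.started_failing
```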
##### Webhooks Send a notification to a webhook URL from your pipeline using the `webhook` attribute of the `notify` YAML block: ```yaml steps: - command: "tests.sh" notify: - webhook: "https://webhook.site/32raf257-168b-5aca-9067-3b410g78c23a" ``` The `webhook` attribute accepts a single webhook URL as a string. To send notifications to more than one endpoint, add each URL as a separate webhook attribute: ```yaml steps: - command: "tests.sh" notify: - webhook: "https://webhook.site/82n740x6-168b-5aca-9067-3b410g78c23a" - webhook: "https://webhook.site/32raf257-81b6-9067-5aca-78s09m6102b4" - webhook: "https://webhook.site/27f518bw-9067-5aca-b681-102c847j917z" ``` Webhook notifications happen at the following [events](/docs/apis/webhooks/pipelines#events), unless you restrict them using [conditionals](/docs/pipelines/configure/notify#conditional-notifications): - `build created` - `build started` - `build blocked` - `build finished` ##### Build states A build state can be one of the following values: `creating`, `scheduled`, `running`, `passed`, `failing`, `failed`, `blocked`, `canceling`, `canceled`, `skipped`, `not_run`. You can query for `finished` builds to return builds in any of the following states: `passed`, `failed`, `blocked`, or `canceled`. > 🚧 > When a [triggered build](/docs/pipelines/configure/step-types/trigger-step) fails, the step that triggered it will be stuck in the `running` state forever. > When all the steps in a build are skipped (either by using the `skip` attribute or by using the `if` condition), the build state will be marked as `not_run`. > By default, all steps depend on the step that uploads them. They will not run until the uploading step is finished. > Unlike the [`notify` attribute](/docs/pipelines/configure/notify), the build state value for a [`steps` attribute](/docs/pipelines/configure/defining-steps) may differ depending on the state of a pipeline.
For example, when a build is blocked within a `steps` section, the `state` value in the [API response for getting a build](/docs/apis/rest-api/builds#get-a-build) retains its last value (for example, `passed`), rather than having the value `blocked`, and instead, the response also returns a `blocked` field with a value of `true`. See the full [build states diagram](/docs/pipelines/configure/defining-steps#build-states) for more information on how builds transition between states. --- ### Glob pattern syntax URL: https://buildkite.com/docs/pipelines/configure/glob-pattern-syntax #### Glob pattern syntax A glob pattern is a representation of a file name and optionally its path, and is a compact way of specifying multiple files with a single pattern. You can use a glob pattern to find all files in paths that match that pattern. This syntax is used for glob patterns supported in pipelines for artifact uploads (using either [`artifact_paths`](/docs/pipelines/configure/step-types/command-step#command-step-attributes) in a pipeline or [`buildkite-agent artifact upload`](/docs/agent/cli/reference/pipeline)), and `if_changed` conditions on [command](/docs/pipelines/configure/step-types/command-step#agent-applied-attributes), [trigger](/docs/pipelines/configure/step-types/trigger-step#agent-applied-attributes) or [group](/docs/pipelines/configure/step-types/group-step#agent-applied-attributes) pipeline steps. > 📘 Full path matching > Glob patterns must match whole path strings, and cannot be used to represent substrings. However, glob patterns are evaluated relative to the current directory. ##### Syntax elements Characters match themselves only, with the following syntax elements having special meaning. | Syntax element | Meaning | ###### On Windows The path separator on Windows is `\`, and therefore, `/` is the escape character when the agent performing the action is running on Windows.
On other operating system platforms, `/` is the standard path separator and `\` is the standard escape character for the agent. ###### Character classes Character classes (`[abc]`) and negated character classes (`[^abc]`) currently do _not_ support ranges, and `-` is treated literally. For example, `[c-g]` only matches one of `c`, `g`, or `-`. ##### Examples | Pattern | Explanation | --- ### Example pipelines URL: https://buildkite.com/docs/pipelines/configure/example-pipelines #### Example pipelines This page lists core example pipelines used throughout this documentation, which can help you improve your understanding of Buildkite Pipelines for different use cases. You can also [browse the full example pipeline gallery](https://buildkite.com/resources/examples), which covers a much wider range of technologies and use cases. ##### Languages and frameworks ##### Build systems and package managers ##### Pipeline step-types and techniques ##### Hooks and permissions ##### Packages ##### Third-party integrations ##### template.yml files All of the examples contain a `buildkite/template.yml` file so that you can add the project to your Buildkite account using the 'Add to Buildkite' button in the readme. You don't need this file in your own projects. --- ### Job priority URL: https://buildkite.com/docs/pipelines/configure/workflows/job-priority #### Job priority By default, jobs are dispatched (taken from the queue and assigned to an agent) on a first-in-first-out basis. However, job priority and pipeline upload time can affect that order. This is not the case for [Buildkite hosted agents](/docs/agent/buildkite-hosted), where jobs are assigned and dispatched at the time they are run. ##### Prioritizing specific jobs Job `priority` is 0 by default; you can prioritize or deprioritize jobs by assigning them a higher or lower integer value.
For example: ```yml steps: - command: "will-run-last.sh" priority: -1 - command: "will-run-first.sh" priority: 1 ``` Job priority is considered before jobs are dispatched to [agent queues](/docs/agent/queues), so jobs with higher priority are assigned before jobs with lower priority, regardless of which has been longest in the queue. Priority only applies to command jobs, including plugin commands. ##### Prioritizing whole builds The `priority` key can be set as a top-level value, which applies it to all steps in the pipeline that do not have their own `priority` key set. This is useful when an entire pipeline requires a higher priority than others. For example: ```yml priority: 100 steps: - label: "emergency fix" command: "run_this_now.sh" - wait: ~ - label: "this can wait" command: "tests.sh" priority: 1 ``` The `emergency fix` step runs before _any step of any other running pipeline_ within your organization, unless one of these other pipeline steps has a priority greater than 100. If all available agents are running jobs, an appropriate agent will run the `emergency fix` step _only_ after its current job completes running. Prioritizing whole builds comes in handy when you need to reduce the number of agents (for example, to reduce costs over a weekend due to fewer available team members) but want to ensure any builds created on a critical pipeline are not left waiting for agents to run their jobs. ##### Job dispatch precedence Jobs are dispatched in the following order: 1. Job priority in descending order, highest number to lowest (`priority`) 1. Date and time scheduled in ascending order, oldest to most recent (`scheduled_at`). Note that jobs inherit `scheduled_at` from pipeline upload jobs, meaning jobs that are uploaded by a pipeline in an older build will be dispatched before builds created after that, and the value of `scheduled_at` cannot be modified. 1. Upload order in pipeline, first to last. 1. 
Internal id in ascending order, used as a tie breaker if all other values are the same, meaning older jobs will be dispatched first. ##### Example Here's an example of prioritizing jobs running on a default branch before pull request jobs:

```yaml
steps:
  - label: "\:pipeline\:"
    agents: {queue: uploaders}
    command: |
      if [[ "$${BUILDKITE_BRANCH}" == "$${BUILDKITE_PIPELINE_DEFAULT_BRANCH}" ]]; then
        export PRIORITY=1
      else
        export PRIORITY=0
      fi
      buildkite-agent pipeline upload <<YAML
      steps:
        - label: "tests"
          command: "tests.sh"
          priority: $$PRIORITY
      YAML
```

---

### Controlling concurrency

URL: https://buildkite.com/docs/pipelines/configure/workflows/controlling-concurrency

#### Controlling concurrency

> 🚧 I'm seeing an error about a missing `concurrency_group_id` when I run my pipeline upload > This error is caused by a missing `concurrency_group` attribute. Add this attribute to the same step where you defined the `concurrency` attribute. ##### Concurrency groups Concurrency groups are labels that group together Buildkite jobs when applying concurrency limits. When you add a group label to a step, the label becomes available to all pipelines in that organization. These group labels are checked at job runtime to determine which jobs are allowed to run in parallel. Although concurrency groups are created on individual steps, they represent concurrent access to shared resources and can be used by other pipelines. A concurrency group works like a queue; it returns jobs in the order they entered the queue (oldest to newest). The concurrency group only cares about jobs in "active" states, and the group becomes "locked" when the concurrency limit for jobs in these states is reached. Once a job moves from an active state to a terminal state (`finished` or `canceled`), the job is removed from the queue, opening up a spot for another job to enter. If a job's state is `limited`, it is waiting for another job ahead of it in the same concurrency group to finish. The full list of "active" [job states](/docs/pipelines/configure/defining-steps#job-states) is `limiting`, `limited`, `scheduled`, `waiting`, `assigned`, `accepted`, `running`, `canceling`, `timing out`.
The following is an example [command step](/docs/pipelines/configure/step-types/command-step) that ensures deployments run one at a time. If multiple builds are created with this step, each deployment job will be queued up and run one after the other in the order they were created. ```yaml - command: 'deploy.sh' label: '\:rocket\: Deploy production' branches: 'main' agents: deploy: true concurrency: 1 concurrency_group: 'our-payment-gateway/deploy' ``` Make sure your `concurrency_group` names are unique, unless they're accessing a shared resource like a deployment target. For example, if you have two pipelines that each deploy to a different target but you give them both the `concurrency_group` label `deploy`, they will be part of the same concurrency group and will not be able to run at the same time, even though they're accessing separate deployment targets. Unique concurrency group names such as `our-payment-gateway/deployment`, `terraform/update-state`, or `my-mobile-app/app-store-release`, will ensure that each one is part of its own concurrency group. Concurrency groups guarantee that jobs will be run in the order that they were created in. Jobs inherit the creation time of their parent. Parents of jobs can be either a build or a pipeline upload job. As pipeline uploads add more jobs to the build after it has started, the jobs that they add will inherit the creation time of the pipeline upload rather than the build. > 🚧 Troubleshooting and using `concurrency_group` with `block` / `input` steps > When a build is blocked by a concurrency group, you can check which jobs are in the queue and their state using the [`getConcurrency` GraphQL query](/docs/apis/graphql/cookbooks/jobs#get-all-jobs-in-a-particular-concurrency-group). 
> > Be aware that both the [`block`](/docs/pipelines/configure/step-types/block-step) and [`input`](/docs/pipelines/configure/step-types/input-step) steps cause the steps that follow them to be uploaded and scheduled at the same time, which breaks concurrency groups. These two steps prevent subsequent jobs from being added to the concurrency group, although they do not affect the jobs' ordering once they are allowed to continue. Jobs won't be added to the concurrency group's queue until the `block` or `input` step is allowed to continue, and once this happens, the timestamp used will be that of the pipeline upload step. ##### Concurrency and parallelism Sometimes you need strict concurrency while also having jobs that would benefit from parallelism. In these situations, you can use _concurrency gates_ to control which jobs run in parallel and which jobs run one at a time. Concurrency gates come in pairs, so when you open a gate, you have to close it. > 🚧 > Since [`block`](/docs/pipelines/configure/step-types/block-step) and [`input`](/docs/pipelines/configure/step-types/input-step) steps [prevent jobs being added to concurrency groups](#troubleshooting-and-using-concurrency-group-with-block-slash-input-steps), you cannot use these two steps inside concurrency gates. In the following setup, only one build at a time can _enter the concurrency gate_, but within that gate up to three e2e tests can run in parallel, subject to agent availability.
Putting the `stage-deploy` section in the gate as well ensures that every time there is a deployment made to the staging environment, the e2e tests are carried out on that deployment: ```yaml steps: - command: echo "Running unit tests" key: unit-tests - command: echo "--> Start of concurrency gate" concurrency_group: gate concurrency: 1 key: start-gate depends_on: unit-tests - wait - command: echo "Running deployment to staging environment" key: stage-deploy depends_on: start-gate - command: echo "Running e2e tests after the deployment" parallelism: 3 depends_on: [stage-deploy] key: e2e - wait - command: echo "End of concurrency gate <--" concurrency_group: gate concurrency: 1 key: end-gate - command: echo "This and subsequent steps run independently" depends_on: end-gate ``` ###### Controlling command order By default, steps that belong to the same concurrency group are run in the order that they are added to the pipeline. For example, if you have two steps: * Step `A` in concurrency group `X` with a concurrency of `1` at time 0 * Step `B` with the same concurrency group `X` and also a concurrency of `1` at time 1 Step `A` will always run before step `B`. This is the default behavior (`ordered`), and most helpful for deployments. However, in some cases concurrency groups are used to restrict access to a limited resource, such as a SaaS service like Sauce Labs. In that case, the default ordering of the jobs can work against you, as each step waits for the one before it to finish before taking up another concurrency slot. If your resource usage time is very different, for example if tests in pipeline A take 1 minute to run and tests in pipeline B take 10 minutes to run, the default ordering is not helpful because it means that the limited resource you're controlling concurrency for is not fully utilized. In that case, setting the concurrency method to `eager` removes the ordering condition for that resource.
```yaml steps: - command: echo "Using a limited resource, only 10 at a time, but we don't care about order" concurrency_group: saucelabs concurrency: 10 concurrency_method: eager ``` ###### Concurrency and prioritization If you're using `eager` concurrency and [job prioritization](/docs/pipelines/configure/workflows/job-priority), higher priority jobs will always take precedence when a concurrency slot becomes available. --- ### Build matrix URL: https://buildkite.com/docs/pipelines/configure/workflows/build-matrix #### Build matrix Build matrices help you simplify complex build configurations by expanding a step template and array of matrix elements into multiple jobs. The following [command step](/docs/pipelines/configure/step-types/command-step) attributes can contain matrix values for interpolation: * [environment variables](/docs/pipelines/configure/environment-variables) * [labels](/docs/pipelines/configure/step-types/command-step#label) * [commands](/docs/pipelines/configure/step-types/command-step#command-step-attributes) * [plugins](/docs/pipelines/configure/step-types/command-step#plugins) * [agents](/docs/pipelines/configure/step-types/command-step#agents) You can't use matrix values in other attributes, including step keys and [concurrency groups](/docs/pipelines/configure/workflows/controlling-concurrency#concurrency-groups). 
For example, instead of writing three separate jobs for builds on macOS, Linux, and Windows, like the following build configuration (which does not use a build matrix): ```yaml steps: - label: "macOS build" command: "GOOS=darwin go build" - label: "Linux build" command: "GOOS=linux go build" - label: "Windows build" command: "GOOS=windows go build" ``` Use a build matrix to expand a single step template into three steps by interpolating the matrix values into the following build configuration: ```yaml steps: - label: "{{matrix}} build" command: "GOOS={{matrix}} go build" env: os: "{{matrix}}" matrix: - "darwin" - "linux" - "windows" ``` All jobs created by a build matrix are marked with the **Matrix** badge in the Buildkite interface. > 📘 Matrix and Parallel steps > Matrix builds are not compatible with explicit [parallelism in steps](/docs/pipelines/tutorials/parallel-builds#parallel-jobs). You can use a `matrix` and `parallelism` in the same build, as long as they are on separate steps. For more complex builds, add multiple dimensions to `matrix.setup` instead of the `matrix` array: ```yaml steps: - label: "💥 Matrix Build" command: "echo {{matrix.os}} {{matrix.arch}} {{matrix.test}}" agents: queue: "builder-{{matrix.arch}}" matrix: setup: arch: - "amd64" - "arm64" os: - "windows" - "linux" test: - "A" - "B" ``` Each dimension you add is multiplied by the other dimensions, so two architectures (`matrix.setup.arch`), two operating systems (`matrix.setup.os`), and two tests (`matrix.setup.test`) create an eight-job build (`2 * 2 * 2 = 8`). If you're using `matrix.setup`, you can also use the `adjustments` key to change specific entries in the build matrix, or add new combinations. You can set the `skip` attribute to exclude them from the matrix, or `soft_fail` attributes to allow them to fail without breaking the build.
```yaml steps: - label: "💥 Matrix build with adjustments" command: "echo {{matrix.os}} {{matrix.arch}} {{matrix.test}}" matrix: setup: arch: - "amd64" - "arm64" os: - "windows" - "linux" test: - "A" - "B" adjustments: - with: os: "windows" arch: "arm64" test: "B" soft_fail: true - with: os: "linux" arch: "arm64" test: "B" skip: true ``` ##### Adding combinations to the build matrix To add an extra combination that isn't present in the `matrix.setup`, use the `adjustments` key and make sure to define all of the elements in the matrix. For example, to add a build for [Plan 9](https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs) (on `arm64`, and test suite `B`) to the previous example, use: ```yaml adjustments: - with: os: "Plan 9" arch: "arm64" test: "B" ``` This results in nine jobs (`2 * 2 * 2 + 1 = 9`). ##### Excluding combinations from the build matrix To exclude a combination from the matrix, add it to the `adjustments` key and set `skip: true`: ```yaml adjustments: - with: os: "linux" arch: "arm64" test: "B" skip: true ``` ##### Matrix limits Each build matrix has the following limits: * **6 dimensions** maximum * **25 elements** per dimension * **128 bytes** maximum size for each individual matrix element (both keys and values) * **12 adjustments** total * **50 jobs** created per `matrix` configuration on a `command` step ##### Grouping matrix elements If you're using the [new build page experience](/docs/pipelines/build-page), matrix jobs are automatically grouped under the matrix step you define in your pipeline. This makes them easier to use and work with. However, if you're using the classic build page with many matrix jobs, then you may want to consider [grouping](/docs/pipelines/configure/step-types/group-step) them together manually with a group step, for a tidier view.
To do that, indent the matrix steps inside a [group step](/docs/pipelines/configure/step-types/group-step): ```yaml steps: - group: "📦 Build" steps: - label: "💥 Matrix build with adjustments" command: "echo {{matrix.os}} {{matrix.arch}} {{matrix.test}}" matrix: setup: arch: - "amd64" - "arm64" os: - "windows" - "linux" test: - "A" - "B" ``` --- ### Branch configuration URL: https://buildkite.com/docs/pipelines/configure/workflows/branch-configuration #### Branch configuration You can use branch patterns to ensure pipelines are only built when necessary. This guide shows you how to set up branch patterns for whole pipelines and individual build steps. In step-level and pipeline-level branch filtering, you can use `*` as a wildcard, and `!` for not, as shown in the [examples](#branch-pattern-examples). If you want a full range of regular expressions that operate on more than branch names, take a look at the [conditionals](/docs/pipelines/configure/conditionals) page. ##### Pipeline-level branch filtering By default, a pipeline triggers builds for all branches (`*` or blank). In your pipeline settings, you can set specific branch patterns for the entire pipeline. If a commit doesn't match the branch pattern, no build is created. ##### Additional branch filtering for pull request builds Builds created for pull requests ignore any pipeline-level branch filters. If you want to limit the branches that can build pull requests, add an additional branch filter in your pipeline's source control settings. Find this filter under 'Build pull requests' if you have chosen the 'Trigger builds after pushing code' option. ##### Step-level branch filtering As with pipeline-level branch filtering, you can set branch patterns on individual steps. Steps that have branch filters will only be added to builds on branches matching the pattern. 
For example, this `pipeline.yml` file demonstrates the use of different branch filters on its steps: ```yaml steps: - label: ":hammer: Build" command: - "npm install" - "tests.sh" branches: "main feature/* !feature/beta release/*" - block: "Release notes" prompt: "Please add notes for this release" fields: - text: "Notes" key: "notes" branches: "release/*" - label: "Deploy Preparation" command: "deploy-prep.sh" branches: "main" - wait - trigger: "app-deploy" label: "\:shipit\:" branches: "main" ``` The `branches` attribute cannot be used at the same time as the `if` attribute. See more in [Conditionals in steps](/docs/pipelines/configure/conditionals#conditionals-in-steps). > 📘 > Step-level branch filters will only affect the step that they are added to. Subsequent steps without branch filters will still be added to the pipeline. ##### Branch pattern examples When combining positive and negative patterns, at least one positive pattern must match, and every negative pattern must not match. The following are examples of patterns, and the branches that they will match: * `main` will match `main` only * `'!production'` will match any branch that's not `production` * `'main features/*'` will match `main` and any branch that starts with `features/` * `'*-test'` will match any branch ending with `-test`, such as `rails-update-test` * `'stages/* !stages/production'` will match any branch starting with `stages/` except `stages/production`, such as `stages/demo` * `'v*.0'` will match any branch that begins with a `v` and ends with a `.0`, such as `v1.0` * `'v* !v1.*'` will match any branch that begins with a `v` unless it also begins with `v1.`, such as `v2.3`, but not `v1.1` If your branch pattern contains any special characters like `!` or `*`, then enclose the entire pattern in a pair of quotation marks (either `''` or `""`) to ensure the pattern is treated as a string, and mitigate any YAML parsing issues.
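As a concrete illustration of the quoting advice above, a step that should only run for staging branches (but never production) might look like this sketch (the label and script name are placeholders):

```yaml
steps:
  - label: "Staging deploy"
    command: "deploy-staging.sh"
    # Quoting prevents `*` and `!` from causing YAML parsing issues.
    branches: "stages/* !stages/production"
```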
For more advanced step filtering, see the [Using conditionals](/docs/pipelines/configure/conditionals) guide. ##### Alternative methods [Queues](/docs/agent/queues) are another way to control what work is done. You can use queues to determine which pipelines and steps run on particular agents. --- ### Scheduled builds URL: https://buildkite.com/docs/pipelines/configure/workflows/scheduled-builds #### Scheduled builds Build schedules automatically create builds at specified intervals. For example, you can use scheduled builds to run nightly builds, hourly integration tests, or daily ops tasks. You can create and manage schedules in the **Schedules** section of your pipeline's **Settings**. You can also create and manage schedules using the [pipeline schedules REST API](/docs/apis/rest-api/pipeline-schedules) or the [Buildkite GraphQL API](/docs/apis/graphql-api). ##### Cron job permission consideration When setting up a cron job in your parent pipeline, it's important to ensure that the same team has been assigned to the corresponding child pipeline. Failure to match the team between the parent and child pipelines may result in an error with the following message: **Error:** **Could not find a matching team that includes both pipelines, each having a minimum "Build" access level.** This error is indicative of a mismatch in team assignments and highlights the importance of maintaining consistent team configurations across interconnected pipelines to avoid permission-related issues. ##### Schedule intervals The interval defines when the schedule will create builds. Schedules run in UTC time by default, and can be defined using either predefined intervals or standard crontab time syntax. > 🚧 Interval granularity > Buildkite only guarantees that scheduled builds run within 10 minutes of the scheduled time, and therefore does not support intervals less than 10 minutes. 
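Alongside the UI, a schedule can also be created from the command line. The following is a sketch using the [pipeline schedules REST API](/docs/apis/rest-api/pipeline-schedules); the organization slug, pipeline slug, and API token are placeholders for your own values:

```bash
# Create a nightly schedule on the pipeline's main branch.
curl -X POST "https://api.buildkite.com/v2/organizations/my-org/pipelines/my-pipeline/schedules" \
  -H "Authorization: Bearer $BUILDKITE_API_TOKEN" \
  -d '{
    "label": "Nightly build",
    "cronline": "0 0 * * *",
    "branch": "main",
    "message": "Nightly scheduled build"
  }'
```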
###### Predefined intervals

Buildkite supports 6 predefined intervals:

| Interval | Description | Crontab Equivalent |
| --- | --- | --- |
| `@hourly` | At the start of every hour | `0 * * * *` |
| `@daily` or `@midnight` | Every day at midnight UTC | `0 0 * * *` |
| `@weekly` | Every week at midnight Sunday UTC | `0 0 * * 0` |
| `@monthly` | Every month, at midnight UTC on the first day | `0 0 1 * *` |
| `@yearly` | Every year, at midnight UTC on the first day | `0 0 1 1 *` |

###### Crontab time syntax

Intervals can be defined using a variant of the crontab time syntax:

```
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6) (Sunday to Saturday)
│ │ │ │ │ ┌─── time zone name or offset (optional)
│ │ │ │ │ │
* * * * * Australia/Melbourne
```

A time zone can optionally be specified as the last segment, either as an [IANA Time Zone name](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) like `Australia/Melbourne` or `Europe/Berlin`, or as an offset from UTC like `+09:00` or `-05:00`. If no time zone is given, the schedule will run in UTC.

###### Supported extensions

Buildkite supports several extensions to the standard POSIX cron syntax.

###### The / operator

The slash operator allows you to specify step values within ranges. For example, `*/10 * * * *` would run every ten minutes.

###### L or last token

Using `L` or `last` in the "day of month" field represents the last day. For example, `0 0 L * *` represents midnight on the last day of the month, and `0 0 -2-L * *` represents the last two days of the month.

###### Modulo

Using the modulo extension allows you to create schedules for less common sets of weekdays. Modulo can only be used with the "day of week" field. For example, `0 0 * * 0` represents midnight on every Sunday. Adding a modulo of 3 creates a schedule that runs at midnight on every third Sunday: `0 0 * * 0%3`.
You can also use the offset `+` operator alongside a modulo value. For instance, adding an offset of 1 to the previous example, `0 0 * * 0%3+1`, creates a schedule that still runs every third Sunday, but shifted by one week. Modulo is calculated based on the time since 2019-01-01. For more information on how modulo works, see the official documentation of [Fugit](https://github.com/floraison/fugit?tab=readme-ov-file#the-modulo-extension), which is used for extending the POSIX cron syntax in Buildkite.

###### Examples

| Interval | Description |
| --- | --- |
| `*/10 * * * *` | Every 10 minutes |
| `*/30 * * * *` | Every 30 minutes |
| `30 * * * *` | Every 30th minute of every hour |
| `0 */4 * * *` | Every 4 hours |
| `0 */12 * * *` | Every 12 hours |
| `0 0 */2 * * +01:00` | Every other day at midnight UTC+1 |
| `0 8 * * *` | Every day at 8am UTC |
| `0 8 * * * America/Vancouver` | Every day at 8am in Vancouver |
| `0 16 * * SUN` | Every Sunday at 4pm UTC |
| `0 0 * * 1-5` | Every weekday at midnight UTC |
| `0 0 L * *` | Midnight UTC on the last day of the month |
| `0 0 1 */2 *` | Every other month, at midnight UTC on the first day |
| `0 16 L * *` | The last day of the month at 4pm UTC |
| `0 0 * * 2%2+1` | The start of every odd Tuesday |

---

### Archive and delete

URL: https://buildkite.com/docs/pipelines/configure/workflows/archiving-and-deleting-pipelines

#### Archiving and deleting pipelines

You can archive and delete pipelines from the dashboard.

##### Archiving pipelines

You can archive or unarchive a pipeline if you're an administrator of the Buildkite organization or in a team that has Full Access to the pipeline. Archiving a pipeline preserves all builds, job logs, artifacts, and history for the pipeline. Archived pipelines are hidden on the Pipelines page and won't run new builds.

To archive or unarchive a pipeline:

1. Navigate to the pipeline.
1. Select the pipeline's **Settings** > **General** page.
1. In the **Pipeline Management** section, select **Archive Pipeline**/**Unarchive Pipeline**.
1. Read the warnings.
1. Type in the slug of the pipeline.
1. Select **Archive Pipeline**/**Unarchive Pipeline**.

You can view archived pipelines using the team selector on the Pipelines dashboard.

##### Deleting pipelines

You can delete a pipeline if you're an administrator of the Buildkite organization or in a team that has Full Access to the pipeline. Deleting a pipeline deletes all associated builds, job logs, artifacts, and history for this pipeline.

To delete a pipeline:

1. Navigate to the pipeline.
1. Select the pipeline's **Settings** > **General** page.
1. In the **Pipeline Management** section, select **Delete Pipeline**.
1. Read the warnings.
1. Type in the slug of the pipeline.
1. Select **Delete Pipeline**.

> 🚧 Builds from deleted pipelines are not exported
> When a pipeline is deleted, all of its associated builds are also deleted and will _not_ be exported as part of the [build export](/docs/pipelines/governance/build-exports) process.
> If you need to [retain builds](/docs/pipelines/configure/build-retention) to preserve their data and be able to export them, [archive the pipeline](/docs/pipelines/configure/workflows/archiving-and-deleting-pipelines#archiving-pipelines) instead.

---

### Overview

URL: https://buildkite.com/docs/pipelines/security

#### Security overview

Customer security is paramount to Buildkite. By design, sensitive data, such as source code and secrets, remains within your own environment and is not seen by Buildkite. The hybrid-SaaS model used by Buildkite allows you to maintain tight control over build agents without compromising on scalability. Buildkite implements a number of measures and mechanisms, both on the control plane and agent, to ensure that customer data remains safe.

##### Data flow

Data flows through different systems when a build triggers, both in Buildkite and in environments you manage. The following diagram shows the typical flow of data when a build triggers.

The diagram shows that:

1. Buildkite receives a webhook from your SCM when code changes.
1. An agent running on your infrastructure polls Buildkite and detects a job to run.
1. An agent accepts the job and reports that to Buildkite.
1. The agent checks out your source code to run the job.
1. The agent sends the job logs to Buildkite.
1. Any artifacts are managed by the agent and your artifact store.
1. The agent reports that the job finished to Buildkite.
1. Buildkite posts the status update to your SCM.

##### Infrastructure

All of Buildkite's services run in the cloud. Buildkite does not run its own routers, load balancers, DNS servers, or physical servers.

##### Data encryption

All data transferred in and out of Buildkite is encrypted using hardened TLS. Buildkite is also protected by HTTP Strict Transport Security and is preloaded in major browsers. Additionally, data transferred to and from Buildkite's backend database is encrypted using TLS. Finally, all data is encrypted at rest.

##### User logins

We protect against brute force attacks with rate-limiting technology. All sensitive data such as passwords and API tokens are filtered out of logs and exception trackers. User passwords are never stored in Buildkite's database - only their salted cryptographic hash.

##### Software dependencies

Buildkite keeps up to date with software dependencies and has automated tools scanning for common security issues, including Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), and SQL Injection.

##### Code review and testing process

All pull requests are reviewed by senior engineers with security best practice training before being deployed to production systems. [Two-factor authentication (2FA)](/docs/platform/tutorials/2fa) is enabled across GitHub and Buildkite organizations for added security. An extensive set of automated testing procedures is run for every code change.
##### Development and QA environments

Development and QA environments are physically separated from Buildkite's production environment. No customer data is ever used in the development or QA environments.

##### Penetration testing

Buildkite performs regular penetration test audits with a contracted third party.

---

### Overview

URL: https://buildkite.com/docs/pipelines/security/secrets

#### Secrets overview

Buildkite supports a number of mechanisms by which you can manage the secrets your pipelines must use to interact with 3rd party systems during the build process or for deployment. Some of these mechanisms emphasize security over setup convenience, while others emphasize setup convenience over security. This section of the Buildkite Docs provides guidelines on how to manage and configure secrets to suit your particular requirements.

- [Managing pipeline secrets](/docs/pipelines/security/secrets/managing) provides guidance and best practices for managing your secrets in either a [hybrid Buildkite architecture](/docs/pipelines/architecture#self-hosted-hybrid-architecture) with self-hosted agents, or with [Buildkite hosted agents](/docs/agent/buildkite-hosted).
- [Risk considerations](/docs/pipelines/security/secrets/risk-considerations) describes practices to avoid exposing your secrets, which could compromise the security of your 3rd party systems.
- [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) is an encrypted key-value store secrets management service offered by Buildkite for use with either Buildkite hosted or self-hosted agents.
- [Buildkite secrets policies](/docs/pipelines/security/secrets/buildkite-secrets/access-policies) provide agent access control for your secrets, ensuring that only authorized agents can access them during builds.
---

### Managing

URL: https://buildkite.com/docs/pipelines/security/secrets/managing

#### Managing pipeline secrets

This page provides guidance on best practices for managing your secrets in a [hybrid Buildkite architecture](/docs/pipelines/architecture#self-hosted-hybrid-architecture) with [self-hosted agents](/docs/agent/self-hosted) in your own infrastructure, or using [Buildkite hosted agents](/docs/agent/buildkite-hosted). These secrets may be required by your Buildkite pipelines to access 3rd party systems as part of your build or deployment processes. These best practice guidelines help ensure that your secrets stay safely within your infrastructure and are never stored in, or sent to, Buildkite.

##### Using a secrets storage service

The best practice for managing secrets with Buildkite is to house your secrets within your own _secrets storage service_, such as [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/) or [HashiCorp Vault](https://www.vaultproject.io). Buildkite provides various [plugins](/docs/pipelines/integrations/plugins) that integrate reading and exposing secrets to your build steps using secrets storage services, such as the following. If a plugin for the service you use is not listed below or in the [Buildkite plugins directory](https://buildkite.com/resources/plugins), please contact support.

A secrets storage service can be used with either self-hosted agents in your own infrastructure, as part of a [hybrid Buildkite architecture](/docs/pipelines/architecture#self-hosted-hybrid-architecture), or with [Buildkite hosted agents](/docs/agent/buildkite-hosted).
| Service | Plugin |
| --- | --- |
| AWS SSM | [aws-assume-role-with-web-identity-buildkite-plugin](https://github.com/buildkite-plugins/aws-assume-role-with-web-identity-buildkite-plugin) |
| GC Secrets | [gcp-workload-identity-federation-buildkite-plugin](https://github.com/buildkite-plugins/gcp-workload-identity-federation-buildkite-plugin) |
| HashiCorp Vault | [vault-secrets-buildkite-plugin](https://github.com/buildkite-plugins/vault-secrets-buildkite-plugin) |

##### Without a secrets storage service

While using a [secrets storage service](#using-a-secrets-storage-service) is the best practice for managing your secrets, if you don't or cannot use such a service and you use self-hosted agents in your own infrastructure, as part of a [hybrid Buildkite architecture](/docs/pipelines/architecture#self-hosted-hybrid-architecture), this section provides alternative approaches to managing your pipeline secrets.

###### Exporting secrets with environment hooks

You can use the Buildkite agent's `environment` hook to export secrets to a job. The `environment` hook is a shell script that is sourced at the beginning of a job. It runs within the job's shell, so you can use it to conditionally run commands and export secrets within the job.

By default, the `environment` hook file is stored in the agent's `hooks` directory. The path to this directory varies by platform; read the [installation instructions](/docs/agent/self-hosted/install) for the path on your platform. The path can also be overridden by the [`hooks-path`](/docs/agent/hooks#hook-locations-agent-hooks) setting.
For example, to expose a Test Engine API token to a specific pipeline, create an `environment` script in your agent's `hooks` directory that checks for the pipeline slug before exporting the secret:

```bash
#!/bin/bash
set -euo pipefail

if [[ "$BUILDKITE_PIPELINE_SLUG" == "pipeline-one" ]]; then
  export BUILDKITE_ANALYTICS_TOKEN="oS3AG0eBuUJMWRgkRvek"
fi
```

Adding conditional checks, such as the pipeline slug and step identifier, helps to limit accidental disclosure of secrets. For example, suppose you have a step that runs a script expecting a `SECRET_DEPLOYMENT_ACCESS_TOKEN` environment variable, like this one:

```yml
steps:
  - command: scripts/trigger-deploy
    key: trigger-deploy
```

In your `environment` hook, you can export the deployment token only when the job is the deployment step in a specific pipeline:

```bash
#!/bin/bash
set -euo pipefail

if [[ "$BUILDKITE_PIPELINE_SLUG" == "my-app" && "$BUILDKITE_STEP_KEY" == "trigger-deploy" ]]; then
  export SECRET_DEPLOYMENT_ACCESS_TOKEN="bd0fa963610b..."
fi
```

The script exports `SECRET_DEPLOYMENT_ACCESS_TOKEN` only for the named pipeline and step. Since this script runs for every job, you can extend it to selectively export all of the secrets used on that agent.

###### Storing secrets with the Elastic CI Stack for AWS

When using the [Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws) with your own AWS account and environment, you can store your secrets inside your stack's encrypted S3 bucket. Unlike hooks defined in [agent `hooks-path`](/docs/agent/hooks#hook-locations-agent-hooks), the Elastic CI Stack for AWS's `env` hooks are defined per-pipeline.
For example, to expose a `GITHUB_MY_APP_DEPLOYMENT_ACCESS_TOKEN` environment variable to a step with identifier `trigger-github-deploy`, you would create the following `env` file on your local development machine:

```bash
#!/bin/bash
set -euo pipefail

if [[ "$BUILDKITE_STEP_KEY" == "trigger-github-deploy" ]]; then
  export GITHUB_MY_APP_DEPLOYMENT_ACCESS_TOKEN="bd0fa963610b..."
fi
```

You then upload the `env` file, encrypted, into the secrets S3 bucket with the following command:

```bash
# Upload the env
aws s3 cp --acl private --sse aws:kms env "s3://elastic-ci-stack-my-stack-secrets-bucket/env"

# Remove the original file
rm env
```

See the [Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws) readme for more information and examples.

---

### Risk considerations

URL: https://buildkite.com/docs/pipelines/security/secrets/risk-considerations

#### Risk considerations

This page covers some of the risks associated with managing secrets with Buildkite Pipelines, and _practices you should avoid_ to mitigate these risks. When appropriate, some guidance is provided on alternative approaches to mitigate these risks.

##### Storing secrets in your pipeline settings

You should never store secrets on your Buildkite Pipeline Settings page. Not only does this expose the secret value to Buildkite, but pipeline settings are often returned in REST and GraphQL API payloads.

> 📘
> Never store secret values in your Buildkite pipeline settings.

##### Storing secrets in your pipeline.yml

You should never store secrets in the `env` block at the top of your pipeline steps, whether it's in a `pipeline.yml` file or the YAML steps editor.

```yml
env:
  # Security risk! The secret will be sent to and stored by Buildkite, and
  # be available in the "Uploaded Pipelines" list in the job's Timeline tab.
  GITHUB_MY_APP_DEPLOYMENT_ACCESS_TOKEN: "bd0fa963610b..."

steps:
  - command: scripts/trigger-github-deploy
```

> 📘
> Never store secrets in the `env` section of your pipeline.

##### Referencing secrets in your pipeline YAML

You should never refer to secrets directly in your `pipeline.yml` file, as they may be interpolated during the [pipeline upload](/docs/agent/cli/reference/pipeline#uploading-pipelines) and sent to Buildkite. For example:

```yaml
steps:
  # Security risk! The environment variable containing the secret will be
  # interpolated into the YAML file and then sent to Buildkite.
  - command: |
      curl \
        --header "Authorization: token $GITHUB_MY_APP_DEPLOYMENT_ACCESS_TOKEN" \
        --header "Content-Type: application/json" \
        --request POST \
        --data "{\"ref\": \"$BUILDKITE_COMMIT\"}" \
        https://api.github.com/repos/my-org/my-app/deployments
```

Referencing secrets in your steps risks them being interpolated, uploaded to Buildkite, and shown in plain text in the "Uploaded Pipelines" list in the job's Timeline tab. The Buildkite agent does [redact strings](/docs/pipelines/configure/managing-log-output#redacted-environment-variables) that match the values of environment variables whose names match common password patterns such as `*_PASSWORD`, `*_SECRET`, `*_TOKEN`, `*_PRIVATE_KEY`, `*_ACCESS_KEY`, `*_SECRET_KEY`, and `*_CONNECTION_STRING`.

To prevent the risk of interpolation, it is recommended that you replace the command block with a script in your repository, for example:

```yml
steps:
  - command: scripts/trigger-github-deploy
```

> 📘
> Use [build scripts](/docs/pipelines/configure/writing-build-scripts) instead of `command` blocks for steps that use secrets.

If you must define your script in your steps, you can prevent interpolation by using the `$$` syntax:

```yml
steps:
  # By using $$ the value of the secret is never sent to Buildkite. This is
  # still not best practice, as it's easy to forget the additional $ character
  # and expose the secret.
  - command: |
      curl \
        --header "Authorization: token $$GITHUB_MY_APP_DEPLOYMENT_ACCESS_TOKEN" \
        --header "Content-Type: application/json" \
        --request POST \
        --data "{\"ref\": \"$$BUILDKITE_COMMIT\"}" \
        https://api.github.com/repos/my-org/my-app/deployments
```

---

### Overview

URL: https://buildkite.com/docs/pipelines/security/secrets/buildkite-secrets

#### Buildkite secrets

_Buildkite secrets_ is an encrypted key-value store secrets management service offered by Buildkite for use by the Buildkite agent. These secrets can be accessed using the [`buildkite-agent secret get` command](/docs/agent/cli/reference/secret) or within a job's environment variables by defining `secrets` on relevant steps within a pipeline YAML configuration. The secrets are encrypted both at rest and in transit, and are decrypted on Buildkite's application servers when accessed by the agent.

Buildkite secrets:

- Are scoped within a given [cluster](/docs/pipelines/security/clusters), and are accessible to all agents within that cluster only, since each cluster has its own unique secrets encryption key. The secrets are decrypted by the Buildkite control plane and then sent to the agent.
- Are available to both [Buildkite hosted](/docs/agent/buildkite-hosted) and self-hosted agents.

##### Access control

In addition to being scoped within a cluster, access to Buildkite secrets is managed through agent access policies. These policies restrict which agents can access secrets during builds. For detailed information about policy structure and examples, see [Access policies for Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets/access-policies).

##### Create a secret

Buildkite secrets can only be created by [cluster maintainers](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) and [Buildkite organization administrators](/docs/pipelines/security/permissions#manage-teams-and-permissions-organization-level-permissions).
###### Using the Buildkite interface

To create a new Buildkite secret using the Buildkite interface:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster in which to create the new Buildkite secret.
1. Select **Secrets** to access the **Secrets** page, then select **New Secret**.
1. Enter a **Key** for the secret, which can only contain letters, numbers, and underscores. The secret's _key_ is what you use to reference the secret, typically from your pipeline configurations.

    **Notes:**
    * The maximum permitted length for a key is 255 characters.
    * If you attempt to use any other characters for the key, or you begin your key with `buildkite` or `bk` (regardless of case), your secret will not be created when selecting **Create Secret**.

1. Enter an optional **Description** for the secret, which appears just under the secret's key value on the main **Secrets** page.
1. Enter the **Value** for the secret. This value can be any number of valid UTF-8 characters, up to a maximum of 32 kilobytes. Be aware that once the secret is created, this value will no longer be visible through the Buildkite interface and will be redacted when output in build logs.
1. Select **Create Secret** to create your new secret, which can now be accessed within jobs through the `buildkite-agent secret get` command.

##### Update a secret's value

Buildkite secrets can only be updated by [cluster maintainers](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) and [Buildkite organization administrators](/docs/pipelines/security/permissions#manage-teams-and-permissions-organization-level-permissions).

###### Using the Buildkite interface

To update an existing Buildkite secret's value using the Buildkite interface:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster where the secret you wish to update is located.
1. Select **Secrets** to access the **Secrets** page, then select **Edit** in the row of the secret you wish to update.
1. Modify the optional **Description** for the secret, which appears just under the secret's key value on the main **Secrets** page.
1. Enter a new **Value** for your secret. This value can be any number of valid UTF-8 characters, up to a maximum of 32 kilobytes. Be aware that once the secret's value is updated, it will no longer be visible through the Buildkite interface and will be redacted when output in build logs.
1. Select **Update Secret** to update the secret's value.

> 📘
> While a secret's **Value** can be modified, the **Key** value cannot be changed.

##### Use a Buildkite secret in a job

###### From within a pipeline YAML configuration

> 📘 Minimum Buildkite agent version requirement
> To use Buildkite secrets in a job, defined by its pipeline YAML configuration, version 3.106.0 or later of the Buildkite agent is required. Using earlier versions of the Buildkite agent will result in pipeline failures.

Once you've [created a secret](#create-a-secret), you can specify secrets in your pipeline YAML configuration, which will be injected into your job environment. Secrets can be specified for all steps in a build and per command step. For example, to load the `API_ACCESS_TOKEN` secret in all jobs for your build, define `secrets` at the top level of the pipeline:

```yaml
secrets:
  - API_ACCESS_TOKEN

steps:
  - command: do_something.sh
  - command: api_call.sh
```

Or to load it for only the jobs that need it:

```yaml
steps:
  - command: do_something.sh
  - command: api_call.sh
    secrets:
      - API_ACCESS_TOKEN
```

The value of the secret `API_ACCESS_TOKEN` is retrieved when the job starts up, and is injected into the job's environment variables as the value of the environment variable `API_ACCESS_TOKEN`. The environment variable is available to all of the job's hooks, plugins, and commands.
If you need to limit the scope of secret exposure to a specific part of a job, you can use `buildkite-agent secret get` to retrieve the secret's value within the phase of the job the secret is required for.

###### Custom environment variable names for secrets

To use a custom environment variable name, you can specify `secrets` as a hash with an environment variable name as the key and the secret's key as the value:

```yaml
steps:
  - command: do_something.sh
  - command: api_call.sh
    secrets:
      MY_APP_ACCESS_TOKEN: API_ACCESS_TOKEN
```

This will inject the value of the secret `API_ACCESS_TOKEN` into the environment variable `MY_APP_ACCESS_TOKEN`. Custom environment variable names for secrets cannot start with `BUILDKITE` or `BK` (with the exception of `BUILDKITE_API_TOKEN`).

###### From a build script or hook

Once you've [created a secret](#create-a-secret), the [`buildkite-agent secret get` command](/docs/agent/cli/reference/secret) can be used within the Buildkite agent to print the secret's value to standard out (stdout).
You can use this command within standard bash-like commands to redirect the secret's output into an environment variable, a file, or your own tool that uses the Buildkite secret's value directly. For example:

- Setting a Buildkite secret with the key `secret_name` into an environment variable called `SECRET_VAR`: `SECRET_VAR=$(buildkite-agent secret get secret_name)`
- Redirecting the value of a Buildkite secret with the key `secret_name` into a file called `secret.txt`: `buildkite-agent secret get secret_name > secret.txt`
- Passing the output of your Buildkite secret (using the `buildkite-agent secret get` command) to your own tool named `cli-tool` that accepts a secret via its `-token` option: `cli-tool -token $(buildkite-agent secret get secret_name)`

Here's a simple example of how one of these commands might appear in a Buildkite pipeline step:

```yaml
steps:
  - agents: { queue: "hosted" }
    command:
      - buildkite-agent secret get secret_name > secret.txt
```

##### Redaction

If any pipeline, script, or tool accidentally prints the value of a Buildkite secret to standard out, this value is automatically redacted from the build logs. If for any reason you detect a secret value that isn't redacted, rotate your secrets and contact security@buildkite.com.

##### Security controls

Buildkite secrets are designed with the following controls in place:

- Secrets are encrypted in transit using TLS.
- Secrets are always stored encrypted at rest.
- All access to the secrets is logged.
- Employee access to secrets is strictly limited and audited.

##### Manage secrets using the REST API

You can manage Buildkite secrets programmatically using the [Buildkite REST API](/docs/apis/rest-api/clusters/secrets).
These API endpoints allow you to:

- List all secrets in a cluster
- Get details for a specific secret
- Create new secrets
- Update secret details (description and access policy)
- Update secret values
- Delete secrets

For detailed information about available endpoints, authentication, and examples, see the [cluster's secrets endpoint of the REST API documentation](/docs/apis/rest-api/clusters/secrets).

##### Best practices

Buildkite secrets are stored by Buildkite, and Buildkite manages the keys used to encrypt and decrypt these secrets stored in its secrets management service, both at rest and in transit. You should implement additional controls to manage the lifecycle of secrets stored within Buildkite secrets, in addition to any monitoring capability you may require. For example:

- Rotate all credentials regularly.
- Track the secrets stored in Buildkite secrets within your own asset management processes.
- Enable logging for services that are accessed using the secrets stored in Buildkite secrets.
- Should you detect a compromise or are concerned about the security of your secrets, contact security@buildkite.com immediately.

---

### Access policies

URL: https://buildkite.com/docs/pipelines/security/secrets/buildkite-secrets/access-policies

#### Access policies for Buildkite secrets

Access policies for Buildkite secrets:

- Control access to [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) based on build attributes. Policies are written in YAML and configured through the Buildkite interface.
- Restrict access to Buildkite secrets based on build context. You can specify conditions such as the branch, pipeline, or user who triggered the build.

During a build, the policy is evaluated against the build's context. If any of the rules of the policy match, access to the secret is granted. If none of the rules of the policy match, access to the secret is denied.
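The evaluation described above (rules are OR'd together, claims within a rule are AND'd, and condition lists within a claim are OR'd) can be sketched as follows. This is a hypothetical illustration using exact string matching, not Buildkite's actual implementation:

```python
def policy_grants_access(policy, build_context):
    """Return True if any rule's claims are all satisfied by the build."""
    for rule in policy:  # rules are OR'd: any matching rule grants access
        for claim, conditions in rule.items():  # claims are AND'd
            if isinstance(conditions, str):
                conditions = [conditions]  # a single condition becomes a one-item list
            # conditions are OR'd: at least one value must match the build
            if build_context.get(claim) not in conditions:
                break  # this claim failed, so try the next rule
        else:
            return True  # every claim in this rule was met
    return False  # no rule matched: access denied

# A policy with two rules, mirroring the YAML examples below.
policy = [
    {"pipeline_slug": ["frontend-pipeline", "backend-pipeline"],
     "build_branch": ["main", "develop"]},
    {"build_branch": "main", "cluster_queue_key": "deploy"},
]

print(policy_grants_access(policy, {"pipeline_slug": "backend-pipeline",
                                    "build_branch": "develop"}))  # True
```

Note that the real evaluator also supports wildcard conditions such as `gh-readonly-queue/*`, which this exact-match sketch does not model.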
##### Policy schema

Policies are defined as a list of policy rules in YAML. Each _policy rule_ (beginning with a `-`) specifies one or more _claims_, all of which must be met for a build to access the Buildkite secret. Each claim must specify one or more _conditions_. Conditions can be a single string or a list of strings. When a list of condition strings is provided on a claim, at least one of these string values must match for the claim to be met.

**A single _condition_ on each _claim_ within a _policy rule_:**

```
# All claims must be met. The build must be from the "frontend-pipeline"
# pipeline and built on the "main" branch.
- pipeline_slug: "frontend-pipeline" # Claim 1 with a single condition
  build_branch: "main" # Claim 2 with a single condition
```

**Multiple _conditions_ on each _claim_ within a _policy rule_:**

This policy will grant access to builds running in the `frontend-pipeline` and the `backend-pipeline` on either the `main` or `develop` branch.

```
# This rule grants access if:
# - The pipeline is EITHER "frontend-pipeline" OR "backend-pipeline"
# AND
# - The branch is EITHER "main" OR "develop"
- pipeline_slug: # Claim 1 with two conditions
    - "frontend-pipeline"
    - "backend-pipeline"
  build_branch: # Claim 2 with two conditions
    - "main"
    - "develop"
```

**Multiple _policy rules_:**

This policy will grant access to builds in the `frontend-pipeline` running on the `main` branch, and to builds in the `backend-pipeline` running on the `develop` branch. It will not grant access to builds running in the `frontend-pipeline` on the `develop` branch.
```
# This rule grants access if:
# - The pipeline is "frontend-pipeline"
# AND
# - The branch is "main"
- pipeline_slug: "frontend-pipeline"
  build_branch: "main"

# This rule grants access if:
# - The pipeline is "backend-pipeline"
# AND
# - The branch is "develop"
- pipeline_slug: "backend-pipeline"
  build_branch: "develop"
```

###### First-party claims

First-party claims are ones whose values are generated by Buildkite. This makes these claims more secure than [third-party claims](#policy-schema-third-party-claims).

| Claim | Description |
| --- | --- |
| `pipeline_id` | The unique identifier of the build pipeline. _Example:_ `pipeline_id: "f47ac10b-58cc-4372-a567-0e02b2c3d479"` |
| `build_source` | The source of the build trigger (e.g., webhook, API). _Example:_ `build_source: "webhook"` |
| `cluster_queue_id` | The unique identifier of the cluster's queue where the job is running. _Example:_ `cluster_queue_id: "01928e5a-1234-5678-9abc-def0123456789"` |

###### Third-party claims

Third-party claims are ones whose values are provided by users or third-party tools. While these claims can be useful for controlling access, they are not as secure as [first-party claims](#policy-schema-first-party-claims), which are generated by the Buildkite platform.

| Claim | Description |
| --- | --- |
| `cluster_queue_key` | The key of the cluster's queue where the job is running. _Example:_ `cluster_queue_key: "default"` |
| `pipeline_slug` | The slug of the build pipeline. _Example:_ `pipeline_slug: "my-pipeline"` |
| `build_branch` | The branch being built. _Example:_ `build_branch: "main"` |
| `build_creator` | The email of the user who triggered the build. _Example:_ `build_creator: "user@example.com"` |
| `build_creator_team` | The UUIDs of the teams the build creator belongs to. _Example:_ `build_creator_team: "123e4567-e89b-12d3-a456-426614174000"` |

###### Example access policy

The following example access policy contains four rules with different levels of access control.
Access to the secret is granted if the build matches all conditions in *any one of these rules*.

```yaml
# This rule grants access if the build matches all five claims and their conditions.
- pipeline_slug: "my-pipeline"
  build_branch: "main"
  build_creator: "user@example.com"
  build_source: "webhook"
  build_creator_team: "123e4567-e89b-12d3-a456-426614174000"

# This rule grants access if:
# - pipeline_slug is "frontend-pipeline" OR "backend-pipeline"
# AND
# - build_branch is either "main" OR "develop"
- pipeline_slug:
    - "frontend-pipeline"
    - "backend-pipeline"
  build_branch:
    - "main"
    - "develop"

# This rule grants access if:
# - pipeline_slug is "public-pipeline"
# AND
# - build_branch is "main" or "release"
# AND
# - build_creator is "admin@example.com" or "deployer@example.com"
- pipeline_slug: "public-pipeline"
  build_branch:
    - "main"
    - "release"
  build_creator:
    - "admin@example.com"
    - "deployer@example.com"

# This rule grants access if:
# - build_branch is "main"
# AND
# - cluster_queue_key is "deploy"
- build_branch: "main"
  cluster_queue_key: "deploy"
```

###### Use case examples

Access policies can be tailored to fit a wide range of security and workflow requirements. Here are some practical examples of rules to help you get started.
###### Restrict secret access to the main branch of a pipeline

```yaml
- pipeline_slug: "my-pipeline"
  build_branch: "main"
```

###### Restrict secret access to only GitHub merge queue builds

```yaml
- pipeline_slug: "my-pipeline"
  build_branch: "gh-readonly-queue/*"
```

###### Only allow a chosen team to deploy from the main branch

```yaml
- build_branch: "main"
  build_creator_team: "e2b7c3f4-1a5d-4e6b-9c8d-2f3a4b5c6d7e"
```

###### Restrict secret access to jobs running on a specific queue of the cluster

```yaml
- cluster_queue_key: "production"
```

###### Restrict secret access to jobs running on specific queues by the queue ID

```yaml
- cluster_queue_id:
    - "01928e5a-1234-5678-9abc-def0123456789"
    - "01928e5a-5678-9abc-1234-def0123456789"
```

##### Add an access policy

A Buildkite secret's access policy can only be added (and modified) by [cluster maintainers](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) and [Buildkite organization administrators](/docs/pipelines/security/permissions#manage-teams-and-permissions-organization-level-permissions).

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster containing the Buildkite secret.
1. Select the secret you want to secure with an access policy.
1. Select the secret's **Access** tab.
1. In the **Agent access** section, select **Restrict access to agents matching a policy**.
1. Add your policy to the **Policy** field in YAML format.
1. Select **Update agent access** to save your changes.

---

### Overview

URL: https://buildkite.com/docs/pipelines/security/clusters

#### Clusters overview

Clusters is a Buildkite Pipelines feature used to manage and organize agents and queues, and provides the following benefits:

- Allows [teams](/docs/platform/team-management/permissions) to self-manage their Buildkite agent pools.
- Allows [cluster maintainers](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) and [Buildkite organization administrators](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions) to create isolated sets of agents and pipelines within the one Buildkite organization.
- Helps make agents and queues more discoverable across your Buildkite organization.
- Provides easily accessible [queue metrics](/docs/pipelines/insights/queue-metrics) and operational [cluster insights](/docs/pipelines/insights/clusters) such as queue wait times (available on [Enterprise](https://buildkite.com/pricing/) plans only).
- Allows easier agent management through [queue pausing](/docs/agent/queues/managing#pause-and-resume-a-queue).
- Allows you to easily [create queues for Buildkite hosted agents](/docs/agent/queues/managing#create-a-buildkite-hosted-queue).
- Allows the management of [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets).

Clusters create logical boundaries between different parts of your build infrastructure, enhancing security, discoverability, and manageability. The following diagram shows the architecture of a Buildkite organization's clusters, along with their pipelines and queues.

Clusters encapsulate groups of agents and pipelines, enabling the following:

- Clusters are viewable to your entire Buildkite organization, allowing engineers to better understand the agents and queues available for their pipelines.
- Individual users or teams can maintain their own clusters. Cluster maintainers can manage queues and agent tokens, and add and remove pipelines.
- Pipelines must be assigned to a cluster, ensuring their builds run only on the agents connected to this cluster. These pipelines can also trigger builds only on other pipelines in the same cluster.
##### Clusters and queues best practices

###### How should I structure my clusters?

In a small to medium organization, a single default cluster will often suffice, and there is no need to create extra clusters. As your organization grows, the most common cluster configurations are based on team or department ownership:

- Product lines: companies with multiple products often configure a cluster for each individual product.
- Type of work: for example, open source, infrastructure, frontend, or backend.

You can create as many clusters as you require for your setup. However, keep in mind that different clusters generally do not share pipelines. Learn more about working with clusters in [Manage clusters](/docs/pipelines/security/clusters/manage).

> 📘 Pipeline triggering and artifact access
> Pipelines associated with one cluster cannot trigger or access artifacts from pipelines associated with another cluster, unless a [rule](/docs/pipelines/security/clusters/rules) has been created to explicitly allow triggering or artifact access between pipelines in different clusters.

Be aware that if you are using the [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s) to run your Buildkite agents in a Kubernetes environment (with Kubernetes clusters), a Kubernetes cluster is unrelated to a Buildkite cluster.

###### How should I structure my queues?

The most common queue attributes are based on infrastructure setups, such as:

- Architecture (x86, arm64, Apple silicon, etc.)
- Size of agents (small, medium, large, extra large)
- Type of machine (macOS, Linux, Windows, GPU, etc.)

Therefore, an example queue would be `small_mac_silicon`. Having individual queues according to these breakdowns allows you to scale a set of similar agents, which Buildkite can then report on. Learn more about working with queues in [Manage queues](/docs/agent/queues/managing).
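As an illustration only, the queue keys implied by these attribute breakdowns can be enumerated programmatically. The `size_machine_architecture` naming scheme below mirrors the `small_mac_silicon` example; it is a convention, not a Buildkite requirement, and the attribute values are assumptions for the sketch:

```python
# Illustration only: enumerating queue keys from the attribute breakdowns
# above (size, machine type, architecture). The values and naming scheme
# are examples, not a Buildkite convention.
from itertools import product

sizes = ["small", "medium", "large"]
machines = ["mac", "linux"]
architectures = ["silicon", "x86"]

queue_keys = [f"{size}_{machine}_{arch}"
              for size, machine, arch in product(sizes, machines, architectures)]
```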
##### Queue metrics

Clusters provides additional, easy-to-access queue metrics that are available only for queues within a cluster. Learn more in [Queue metrics in clusters](/docs/pipelines/insights/queue-metrics).

##### Cluster insights

The cluster insights page provides an overview of the overall health of your cluster and agent setup. Learn more in [Cluster insights](/docs/pipelines/insights/clusters).

##### Accessing your agents and pipelines

If you only have one cluster with one queue, selecting **Agents** in the global navigation takes you directly to your single queue in this cluster. This is typically the case with newly created organizations.

If you have multiple clusters, selecting **Agents** in the global navigation takes you to the **Clusters** page, where you can access your individual clusters and, within each one, the details and configurations of the cluster's individual queues, agent tokens, pipelines, and other settings.

Any agents and pipelines which are not yet associated with a cluster are known as _unclustered agents_ and _unclustered pipelines_, respectively. From the **Clusters** page:

- To access a specific cluster's agents, their associated agent tokens, as well as the cluster's queues and pipelines, select the relevant cluster (or its **queue** or **pipelines** link) from this page.
- To access your unclustered agents, their associated agent tokens, as well as their pipelines, select **Unclustered** (or its **pipelines** link) from this page.

---

### Manage clusters

URL: https://buildkite.com/docs/pipelines/security/clusters/manage

#### Manage clusters

This page provides details on how to manage [clusters](/docs/pipelines/glossary#cluster) within your Buildkite organization. Learn more about how to set up queues within a cluster in [Manage queues](/docs/agent/queues/managing).

##### Setting up clusters

When a new Buildkite organization is created, a single default cluster (initially named **Default cluster**) is also created.
For smaller organizations working on smaller projects, this default cluster may be sufficient. However, it's usually more convenient for large organizations to manage projects in separate clusters, when these projects require different:

- Source code visibility, such as open-source versus closed-source code projects.
- Expertise and ownership, such as Android developers, macOS developers, Windows developers, machine learning experts, etc.
- Projects, for example, different product lines.

Once your clusters are set up, you can set up one or more [queues](/docs/agent/queues/managing) within each cluster.

##### Create a cluster

New clusters can be created by a [_Buildkite organization administrator_](/docs/pipelines/security/permissions#manage-teams-and-permissions-organization-level-permissions) using the [**Clusters** page](#create-a-cluster-using-the-buildkite-interface), as well as Buildkite's [REST API](#create-a-cluster-using-the-rest-api) or [GraphQL API](#create-a-cluster-using-the-graphql-api).

Once the cluster has been created, a Buildkite organization administrator can then make other members of their Buildkite organization a [_maintainer_](#manage-maintainers-on-a-cluster) of the cluster. These people can then administer the cluster on behalf of the Buildkite organization administrator.

###### Using the Buildkite interface

To create a new cluster using the Buildkite interface:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select **Create a Cluster**.
1. On the **New Cluster** page, enter the mandatory **Name** for the new cluster.
1. Enter an optional **Description** for the cluster. This description appears under the name of the cluster in its tile on the **Clusters** page.
1. Enter an optional **Emoji** and **Color** using the recommended syntax. This emoji appears next to the cluster's name and the color (in hex code syntax, for example, `#FFE0F1`) provides the background color for this emoji.
1.
Select **Create Cluster**.

The new cluster's page is displayed, opening on its **Queues** page, which indicates the cluster's name and its default queue, named **queue**. From this page, you can set up one or more additional [queues](/docs/agent/queues/managing) within this cluster.

###### Using the REST API

To [create a new cluster](/docs/apis/rest-api/clusters#clusters-create-a-cluster) using the [REST API](/docs/apis/rest-api), run the following example `curl` command:

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/clusters" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Open Source",
    "description": "A place for safely running our open source builds",
    "emoji": "\:technologist\:",
    "color": "#FFE0F1"
  }'
```

where:

- `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite.
- `{org.slug}` can be obtained:
    * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite.
    * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example:

        ```bash
        curl -H "Authorization: Bearer $TOKEN" \
          -X GET "https://api.buildkite.com/v2/organizations"
        ```

- `name` (required) is the name for the new cluster.
- `description` (optional) is the description that appears under the name of the cluster in its tile on the **Clusters** page.
- `emoji` (optional) is the emoji that appears next to the cluster's name in the Buildkite interface and uses the example syntax above.
- `color` (optional) provides the background color for this emoji and uses hex code syntax (for example, `#FFE0F1`).
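If you prefer scripting this call, the following is a minimal Python sketch that builds the same request using only the standard library. The org slug and token shown are placeholders, and sending the prepared request requires valid credentials:

```python
# Hypothetical Python equivalent of the curl example above, using only the
# standard library. Building the request is separated from sending it so the
# payload can be inspected first.
import json
import urllib.request

def build_create_cluster_request(org_slug, token, name, description=None,
                                 emoji=None, color=None):
    payload = {"name": name}  # "name" is the only required field
    if description:
        payload["description"] = description
    if emoji:
        payload["emoji"] = emoji
    if color:
        payload["color"] = color
    return urllib.request.Request(
        f"https://api.buildkite.com/v2/organizations/{org_slug}/clusters",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder org slug and token for illustration.
req = build_create_cluster_request(
    "my-org", "placeholder-token",
    name="Open Source",
    description="A place for safely running our open source builds",
)
# To actually send it: urllib.request.urlopen(req)
```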
> 📘 A default queue is not automatically created
> Unlike creating a new cluster through the [Buildkite interface](#create-a-cluster-using-the-buildkite-interface), a default queue is not automatically created using this API call. To create a new/default queue for any new cluster created through an API call, you need to manually [create a new queue](/docs/agent/queues/managing#create-a-self-hosted-queue).

###### Using the GraphQL API

To [create a new cluster](/docs/apis/graphql/schemas/mutation/clustercreate) using the [GraphQL API](/docs/apis/graphql-api), run the following example mutation:

```graphql
mutation {
  clusterCreate(
    input: {
      organizationId: "organization-id"
      name: "Open Source"
      description: "A place for safely running our open source builds"
      emoji: "\:technologist\:"
      color: "#FFE0F1"
    }
  ) {
    cluster {
      id
      uuid
      name
      description
      emoji
      color
      defaultQueue {
        id
      }
      createdBy {
        id
        uuid
        name
        email
        avatar {
          url
        }
      }
    }
  }
}
```

where:

- `organizationId` (required) can be obtained:
    * From the **GraphQL API Integration** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite.
    * By running a `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, followed by a [getOrgId](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's `id` using the organization's slug. For example:

        Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations:

        ```graphql
        query getCurrentUsersOrgs {
          viewer {
            organizations {
              edges {
                node {
                  name
                  slug
                }
              }
            }
          }
        }
        ```

        Step 2.
Run `getOrgId` with the appropriate slug value above to obtain this organization's `id` in the response:

```graphql
query getOrgId {
  organization(slug: "organization-slug") {
    id
    uuid
    slug
  }
}
```

**Note:** The `organization-slug` value can also be obtained from the end of your Buildkite URL, by selecting **Pipelines** in the global navigation of your organization in Buildkite.

- `name` (required) is the name for the new cluster.
- `description` (optional) is the description that appears under the name of the cluster in its tile on the **Clusters** page.
- `emoji` (optional) is the emoji that appears next to the cluster's name in the Buildkite interface and uses the example syntax above.
- `color` (optional) provides the background color for this emoji and uses hex code syntax (for example, `#FFE0F1`).

> 📘 A default queue is not automatically created
> Unlike creating a new cluster through the [Buildkite interface](#create-a-cluster-using-the-buildkite-interface), a default queue is not automatically created using this API call. To create a new/default queue for any new cluster created through an API call, you need to manually [create a new queue](/docs/agent/queues/managing#create-a-self-hosted-queue).

##### Connect agents to a cluster

Agents are associated with a cluster through the cluster's agent tokens. Learn more about this in [Agent tokens](/docs/agent/self-hosted/tokens).

Once you have [created your required agent tokens](/docs/agent/self-hosted/tokens#create-a-token), [use them](/docs/agent/self-hosted/tokens#using-and-storing-tokens) with the relevant agents, along with an optional [tag representing the relevant queue in your cluster](/docs/agent/queues#assigning-a-self-hosted-agent-to-a-queue).

You can also create, edit, and revoke other agent tokens from the cluster's **Agent tokens** page.

##### Migrate unclustered agents to a cluster

Unclustered agents are agents associated with the **Unclustered** area of the **Clusters** page in a Buildkite organization.
Learn more about unclustered agents in [Working with unclustered agent tokens](/docs/agent/self-hosted/tokens#working-with-unclustered-agent-tokens).

Migrating unclustered agents to a cluster allows those agents to use [agent tokens](/docs/agent/self-hosted/tokens) that connect to Buildkite via a cluster, which can be managed by users with [cluster maintainer](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) privileges.

> 📘 Buildkite organizations created after February 26, 2024
> Buildkite organizations created after this date will not have an **Unclustered** area. Therefore, this process is not required for these newer Buildkite organizations.

Learn more about this entire process from the detailed [Migrate from unclustered to clustered agents](/docs/pipelines/security/clusters/migrate-from-unclustered-to-clustered-agents) guide, which walks you through:

1. [Assessing your current environment](/docs/pipelines/security/clusters/migrate-from-unclustered-to-clustered-agents#assessing-your-current-environment).
1. Deciding on an [agent migration strategy](/docs/pipelines/security/clusters/migrate-from-unclustered-to-clustered-agents#migration-strategies), noting that an initial [single-cluster migration strategy](/docs/pipelines/security/clusters/migrate-from-unclustered-to-clustered-agents#single-cluster-migration-overview) will likely provide the least friction.
1. Understanding the [technical considerations](/docs/pipelines/security/clusters/migrate-from-unclustered-to-clustered-agents#technical-considerations) of the agent migration process.
1. The [agent migration process](/docs/pipelines/security/clusters/migrate-from-unclustered-to-clustered-agents#agent-migration-process) itself.

##### Restrict an agent token's access by IP address

As a security measure, each agent token has an optional **Allowed IP Addresses** setting that can be used to lock down access to the token.
When this option is set on an agent token, only agents with an IP address that matches one of this setting's values can use the token to connect to your Buildkite organization (through your cluster).

An agent token's **Allowed IP Addresses** setting can be set [when the token is created](/docs/agent/self-hosted/tokens#create-a-token), or this setting can be added to or modified on existing agent tokens by a [cluster maintainer](#manage-maintainers-on-a-cluster) or Buildkite organization administrator, using the [**Agent Tokens** page of a cluster](#restrict-an-agent-tokens-access-by-ip-address-using-the-buildkite-interface), as well as Buildkite's [REST API](#restrict-an-agent-tokens-access-by-ip-address-using-the-rest-api) or [GraphQL API](#restrict-an-agent-tokens-access-by-ip-address-using-the-graphql-api). For these API requests, the _cluster ID_ value submitted in the request is that of the cluster the token is associated with.

> 🚧 Changing the **Allowed IP Addresses** setting
> Modifying an agent token's **Allowed IP Addresses** setting forcefully disconnects any existing agents (using this token) with an IP address that no longer matches one of the values of this updated setting. This will prevent the completion of any jobs in progress on those agents.

To remove this IP address restriction from an agent's token, explicitly set its **Allowed IP Addresses** value to its default value of `0.0.0.0/0`.

Be aware that an agent token's **Allowed IP Addresses** setting also has the following limitations:

- Access to the [Metrics API](/docs/apis/agent-api/metrics) for this agent token is not restricted.
- There is a maximum of 24 CIDR blocks per agent token.
- IPv6 is currently not supported.

###### Using the Buildkite interface

To restrict an existing agent token's access by IP address (via the token's **Allowed IP Addresses** setting) using the Buildkite interface:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1.
Select the cluster associated with the agent token.

1. Select **Agent Tokens** and expand the agent token whose **Allowed IP Addresses** setting is to be added or modified.
1. Select **Edit**.
1. Update the **Allowed IP Addresses** setting, using space-separated [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) to specify the IP addresses through which agents must connect.
1. Select **Save Token**.

###### Using the REST API

To restrict an existing agent token's access by IP address using the REST API, run the following example `curl` command to [update this agent token](/docs/apis/rest-api/clusters/agent-tokens#update-a-token):

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens/{id}" \
  -H "Content-Type: application/json" \
  -d '{ "allowed_ip_addresses": "192.0.2.0/24 198.51.100.12" }'
```

where:

- `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite.
- `{org.slug}` can be obtained:
    * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite.
    * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example:

        ```bash
        curl -H "Authorization: Bearer $TOKEN" \
          -X GET "https://api.buildkite.com/v2/organizations"
        ```

- `{cluster.id}` can be obtained:
    * From the **Cluster Settings** page of your target cluster. To do this:
        1. Select **Agents** (in the global navigation) > the specific cluster > **Settings**.
        1. Once on the **Cluster Settings** page, copy the `id` parameter value from the **GraphQL API Integration** section, which is the `{cluster.id}` value.
    * By running the [List clusters](/docs/apis/rest-api/clusters#clusters-list-clusters) REST API query and obtaining this value from the `id` in the response associated with the name of your target cluster (specified by the `name` value in the response). For example:

        ```bash
        curl -H "Authorization: Bearer $TOKEN" \
          -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters"
        ```

- `{id}` is that of the agent token, whose value can be obtained:
    * From the Buildkite URL path when editing the agent token. To do this:
        - Select **Agents** (in the global navigation) > the specific cluster > **Agent Tokens** > expand the agent token > **Edit**.
        - Copy the ID value between `/tokens/` and `/edit` in the URL.
    * By running the [List tokens](/docs/apis/rest-api/clusters/agent-tokens#list-tokens) REST API query and obtaining this value from the `id` in the response associated with the description of your token (specified by the `description` value in the response). For example:

        ```bash
        curl -H "Authorization: Bearer $TOKEN" \
          -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens"
        ```

- `allowed_ip_addresses` are the IP addresses, in space-separated [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), through which agents must connect to use this agent token and connect to Buildkite via your cluster.
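The matching behavior of the `allowed_ip_addresses` value can be illustrated with Python's standard `ipaddress` module. This is a hypothetical client-side sketch for understanding the setting; Buildkite enforces the restriction server-side:

```python
# Hypothetical sketch of the "Allowed IP Addresses" matching described above,
# using Python's standard ipaddress module. Illustrative only.
import ipaddress

MAX_CIDR_BLOCKS = 24  # documented per-token limit

def ip_allowed(agent_ip, allowed_ip_addresses):
    """True if agent_ip falls within any of the space-separated CIDR blocks."""
    blocks = allowed_ip_addresses.split()
    if len(blocks) > MAX_CIDR_BLOCKS:
        raise ValueError("an agent token supports at most 24 CIDR blocks")
    ip = ipaddress.ip_address(agent_ip)
    return any(ip in ipaddress.ip_network(block) for block in blocks)
```

With the example value above, `ip_allowed("192.0.2.7", "192.0.2.0/24 198.51.100.12")` matches the first block, while the default value `0.0.0.0/0` matches every IPv4 address, which is why setting it removes the restriction.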
###### Using the GraphQL API

To restrict an existing agent token's access by IP address using the [GraphQL API](/docs/apis/graphql-api), run the following example mutation to [update this agent token](/docs/apis/graphql/schemas/mutation/clusteragenttokenupdate):

```graphql
mutation {
  clusterAgentTokenUpdate(
    input: {
      organizationId: "organization-id"
      id: "token-id"
      description: "A description"
      allowedIpAddresses: "202.144.0.0/24 198.51.100.12"
    }
  ) {
    clusterAgentToken {
      id
      uuid
      description
      allowedIpAddresses
      cluster {
        id
        uuid
        organization {
          id
          uuid
        }
      }
      createdBy {
        id
        uuid
        email
      }
    }
  }
}
```

where:

- `organizationId` (required) can be obtained:
    * From the **GraphQL API Integration** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite.
    * By running a `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, followed by a [getOrgId](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's `id` using the organization's slug. For example:

        Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations:

        ```graphql
        query getCurrentUsersOrgs {
          viewer {
            organizations {
              edges {
                node {
                  name
                  slug
                }
              }
            }
          }
        }
        ```

        Step 2. Run `getOrgId` with the appropriate slug value above to obtain this organization's `id` in the response:

        ```graphql
        query getOrgId {
          organization(slug: "organization-slug") {
            id
            uuid
            slug
          }
        }
        ```

**Note:** The `organization-slug` value can also be obtained from the end of your Buildkite URL, by selecting **Pipelines** in the global navigation of your organization in Buildkite.
- `id` (required) is that of the agent token, whose value can only be obtained using the APIs, by running a [getClustersAgentTokenIds](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's clusters and each of their agent tokens' `id` values in the response. For example:

    ```graphql
    query getClustersAgentTokenIds {
      organization(slug: "organization-slug") {
        clusters(first: 10) {
          edges {
            node {
              name
              id
              agentTokens(first: 10) {
                edges {
                  node {
                    description
                    id
                  }
                }
              }
            }
          }
        }
      }
    }
    ```

- `description` (required) should clearly identify the environment the token is intended to be used for (for example, `Read-only token for static site generator`), as it is listed on the **Agent tokens** page of the specific cluster the agent connects to. To access this page, select **Agents** (in the global navigation) > the specific cluster > **Agent Tokens**. If you do not need to change the existing `description` value, specify the existing field value in the request.
- `allowedIpAddresses` are the IP addresses, in space-separated [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), through which agents must connect to use this agent token and connect to Buildkite via your cluster.

##### Manage maintainers on a cluster

A user who is a [_Buildkite organization administrator_](/docs/pipelines/security/permissions#manage-teams-and-permissions-organization-level-permissions) can [create clusters](#create-a-cluster). As a Buildkite organization administrator, you can add and manage other users or teams in your Buildkite organization as _maintainers_ of a cluster in the organization.

A cluster maintainer can:

- Update or delete the cluster.
- Manage [agent tokens](/docs/agent/self-hosted/tokens) associated with the cluster.
- Manage [queues](/docs/agent/queues/managing) within the cluster.
- Add pipelines to or remove them from the cluster.
- Stop, pause, and resume agents belonging to a queue within the cluster.
- Manage [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) associated with the cluster.

> 📘
> Learn more about Buildkite organization administrators and user permissions in Buildkite from [User and team permissions](/docs/platform/team-management/permissions).

To add a maintainer to a cluster:

1. Select **Agents** in the global navigation to access the **Clusters** page.
1. Select the cluster to which the user or team is to be added as a maintainer.
1. Select **Maintainers** > **Add Maintainer**.
1. Select whether the maintainer will be a specific **User** or a **Team** of users.
1. Select the specific user or team from the dropdown list.
1. Select **Add Maintainer** and the user or team is listed on the **Maintainers** page.

To remove a maintainer from a cluster:

1. From the cluster's **Maintainers** page, select **Remove** from the user or team to be removed as a maintainer.
1. Select **OK** to confirm this action.

##### Move a pipeline to a specific cluster

Move a pipeline to a specific cluster to ensure the pipeline's builds run only on agents connected to that cluster.

> 📘 Associating pipelines with clusters
> A pipeline can only be associated with one cluster at a time. It is not possible to associate a pipeline with two or more clusters simultaneously.

A pipeline can be moved to a cluster by a [cluster maintainer](#manage-maintainers-on-a-cluster) or Buildkite organization administrator via the pipeline's [**General** settings page](#move-a-pipeline-to-a-specific-cluster-using-the-buildkite-interface), as well as Buildkite's [REST API](#move-a-pipeline-to-a-specific-cluster-using-the-rest-api) or [GraphQL API](#move-a-pipeline-to-a-specific-cluster-using-the-graphql-api). For these API requests, the _cluster ID_ value submitted in the request is that of the target cluster the pipeline is being moved to.
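Since these API requests identify the target cluster by its ID, the following is a small, hypothetical Python helper for extracting that ID from a List clusters response. The response shape follows the `id`/`name` fields used in the query examples on this page, and the ID strings shown are placeholders:

```python
# Hypothetical helper: the API calls on this page take a cluster ID, which
# the List clusters query returns alongside each cluster's name. The
# response data below is a truncated, made-up illustration of that shape.

def find_cluster_id(clusters, name):
    """Return the `id` of the first cluster with the given name, or None."""
    return next((c["id"] for c in clusters if c["name"] == name), None)

clusters_response = [
    {"id": "cluster-id-1", "name": "Open Source"},  # placeholder IDs
    {"id": "cluster-id-2", "name": "Deploy"},
]
```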
###### Using the Buildkite interface

To move a pipeline to a specific cluster using the Buildkite interface:

1. Select **Pipelines** in the global navigation to access your organization's list of accessible pipelines.
1. Select the pipeline to be moved to a specific cluster.
1. Select **Settings** to open the pipeline's **General** settings page.
1. On this page, select **Change Cluster** in the **Cluster** section.
1. Select the specific target cluster in the dialog and select **Change**.

The pipeline's **General** settings page indicates the current cluster the pipeline is associated with. The pipeline will also be visible and accessible from the cluster's **Pipelines** page.

###### Using the REST API

To [move a pipeline to a specific cluster](/docs/apis/rest-api/pipelines#update-a-pipeline) using the [REST API](/docs/apis/rest-api), run the following `curl` command:

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X PATCH "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{slug}" \
  -H "Content-Type: application/json" \
  -d '{ "cluster_id": "xxx" }'
```

where:

- `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite.
- `{org.slug}` can be obtained:
    * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite.
    * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example:

        ```bash
        curl -H "Authorization: Bearer $TOKEN" \
          -X GET "https://api.buildkite.com/v2/organizations"
        ```

- `{slug}` can be obtained:
    * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite, then accessing the specific pipeline to be moved to the cluster.
    * By running the [List pipelines](/docs/apis/rest-api/pipelines#list-pipelines) REST API query to obtain this value from `slug` in the response for the specific pipeline. For example:

        ```bash
        curl -H "Authorization: Bearer $TOKEN" \
          -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines"
        ```

- `cluster_id` can be obtained:
    * From the **Cluster Settings** page of your target cluster. To do this:
        1. Select **Agents** (in the global navigation) > the specific cluster > **Settings**.
        1. Once on the **Cluster Settings** page, copy the `id` parameter value from the **GraphQL API Integration** section, which is the `cluster_id` value.
    * By running the [List clusters](/docs/apis/rest-api/clusters#clusters-list-clusters) REST API query and obtaining this value from the `id` in the response associated with the name of your target cluster (specified by the `name` value in the response). For example:

        ```bash
        curl -H "Authorization: Bearer $TOKEN" \
          -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters"
        ```

###### Using the GraphQL API

To [move a pipeline to a specific cluster](/docs/apis/graphql/schemas/mutation/pipelineupdate) using the [GraphQL API](/docs/apis/graphql-api), run the following mutation:

```graphql
mutation {
  pipelineUpdate(
    input: {
      id: "pipeline-id"
      clusterId: "cluster-id"
    }
  ) {
    pipeline {
      id
      uuid
      name
      description
      slug
      createdAt
      cluster {
        id
        uuid
        name
        description
      }
    }
  }
}
```

where:

- `id` (required) is that of the pipeline to be moved, whose value can be obtained:
    * From the pipeline's **General** settings page. To do this:
        1. Select **Pipelines** in the global navigation > the specific pipeline to be moved to the cluster > **Settings**.
        1. Copy the **ID** shown in the **GraphQL API Integration** section of this page, which is this `id` value.
* By running the `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, then the [getOrgPipelines](/docs/apis/graphql/schemas/query/organization) query to obtain the pipeline's `id` in the response. For example:

    Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations:

    ```graphql
    query getCurrentUsersOrgs {
      viewer {
        organizations {
          edges {
            node {
              name
              slug
            }
          }
        }
      }
    }
    ```

    Step 2. Run `getOrgPipelines` with the appropriate slug value above to obtain the pipeline's `id` in the response:

    ```graphql
    query getOrgPipelines {
      organization(slug: "organization-slug") {
        pipelines(first: 100) {
          edges {
            node {
              id
              uuid
              name
            }
          }
        }
      }
    }
    ```

- `clusterId` (required) can be obtained:
    * From the **Cluster Settings** page of your target cluster. To do this:
        1. Select **Agents** (in the global navigation) > the specific cluster > **Settings**.
        1. Once on the **Cluster Settings** page, copy the `cluster` parameter value from the **GraphQL API Integration** section, which is the `cluster.id` value.
    * By running the [List clusters](/docs/apis/graphql/cookbooks/clusters#list-clusters) GraphQL API query and obtaining this value from the `id` in the response associated with the name of your target cluster (specified by the `name` value in the response). For example:

        ```graphql
        query getClusters {
          organization(slug: "organization-slug") {
            clusters(first: 10) {
              edges {
                node {
                  id
                  name
                  uuid
                  color
                  description
                }
              }
            }
          }
        }
        ```

##### Cluster insights

The cluster insights page provides an overview of the overall health of your cluster and agent setup. Learn more in [Cluster insights](/docs/pipelines/insights/clusters).
--- ### Migrate from unclustered to clustered agents URL: https://buildkite.com/docs/pipelines/security/clusters/migrate-from-unclustered-to-clustered-agents #### Migrate from unclustered to clustered agents Clusters create logical boundaries between different parts of your build infrastructure, enhancing security, discoverability, and manageability. Learn more about clusters from the [Clusters overview](/docs/pipelines/security/clusters) page. If your Buildkite pipelines are still operating in an unclustered agent environment, you should therefore migrate them to clustered agents. This guide provides details on how to migrate your unclustered agents across to clustered ones. Unclustered agents are agents associated with the **Unclustered** area of the **Clusters** page in a Buildkite organization. Learn more about unclustered agents in [Working with unclustered agent tokens](/docs/agent/self-hosted/tokens#working-with-unclustered-agent-tokens). Migrating unclustered agents to a cluster allows those agents to use [agent tokens](/docs/agent/self-hosted/tokens) that connect to Buildkite via a cluster, which can be managed by users with [cluster maintainer](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) privileges. > 📘 Buildkite organizations created after February 26, 2024 > Buildkite organizations created after this date will not have an **Unclustered** area. Therefore, this process is not required for these newer Buildkite organizations. ##### Single-cluster migration overview Migrating your unclustered agents to a single cluster is the fastest, lowest-friction migration strategy, and is a recommended starting point. To do this: 1. Ensure you are familiar with the [key benefits of clusters](#key-benefits-of-clusters), and [starting the migration process with a single cluster](#key-benefits-of-clusters-starting-with-a-single-cluster). 1. 
Generate a [new agent token](/docs/agent/self-hosted/tokens#create-a-token) for your **Default cluster**. **Note:** This step is only required for clustered agents that you'll be running in a [self-hosted (hybrid)](/docs/pipelines/architecture#self-hosted-hybrid-architecture) environment. 1. Create your required [self-hosted](/docs/agent/queues/managing#create-a-self-hosted-queue) or [Buildkite hosted](/docs/agent/queues/managing#create-a-buildkite-hosted-queue) queues in this cluster—one for each distinct queue tag that your agents were started with in your unclustered agent environment. Ensure you are familiar with the differences in how queues are managed and configured between unclustered and clustered environments in [Agent queue differences](#technical-considerations-agent-queue-differences), as well as the [Create your clusters and queues](#agent-migration-process-create-your-clusters-and-queues) and [Migrate unclustered agents to clusters](#agent-migration-process-migrate-unclustered-agents-to-clusters) sections of the [Agent migration process](#agent-migration-process). **Tip:** If you'll be running your clustered agents in a self-hosted (hybrid) environment, ensure you create _copies_ of your unclustered agents for your new cluster. This gives you unclustered agents to fall back on if you experience any issues in getting your new clustered agents up and running. Once your agents have been successfully migrated over to your new cluster, you can then decommission your unclustered agents. 1. Move the [pipelines associated with your unclustered agents to their new cluster](#agent-migration-process-move-pipelines-to-clusters), and [test and validate](#agent-migration-process-test-and-validate-the-migrated-pipelines) that they build as expected on your new clustered agents. 1. Decommission your [unclustered agents](#agent-migration-process-decommission-your-unclustered-resources). 
You can now unlock [cluster insights](/docs/pipelines/insights/clusters), [queue metrics](/docs/pipelines/insights/queue-metrics), and [secrets management](/docs/pipelines/security/secrets/buildkite-secrets). See [Agent migration process](#agent-migration-process) for the full migration process and detailed migration steps, bearing in mind that you are only working with a single cluster. ##### Key benefits of clusters - **Enhanced security boundaries**: Clusters provide hard security boundaries between different environments. However, you can use [rules](/docs/pipelines/security/clusters/rules) to create exceptions that allow controlled interaction between clusters when needed. - **Improved observability**: Clusters provide access to [cluster insights](/docs/pipelines/insights/clusters) (for customers on [Enterprise](https://buildkite.com/pricing/) plans), providing better metrics and visibility into your build infrastructure such as queue wait times and job pass rates. All plans have access to [queue metrics](/docs/pipelines/insights/queue-metrics). - **Secrets management**: Clusters provide access to [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) and controlled access to sensitive resources. - **Easier agent management**: Clusters make agents and queues more discoverable across your organization and allow teams to self-manage their agent pools. - **Better organization**: You can separate agents and pipelines by team, environment, or use case, making your CI/CD infrastructure easier to understand and maintain. ###### Starting with a single cluster Starting your unclustered to clustered migration process with a single cluster offers several advantages: - **Minimal queue rewiring**: Your existing queue structure requires minimal configuration changes. - **No pipeline edits**: Pipelines continue to work without modification. 
- **Immediate insights**: Access [cluster insights](/docs/pipelines/insights/clusters) and [queue metrics](/docs/pipelines/insights/queue-metrics) instantly. - **Buildkite secrets**: Benefit from immediate access to [secrets management](/docs/pipelines/security/secrets/buildkite-secrets). ##### Assessing your current environment Before planning your migration, assess your current environment to understand the scope and complexity of the transition. ###### Make an inventory of existing resources 1. Document all _unclustered_ agents, including: * Number of agents * Agent queues * Agent tags * Agent environments (for example, operating system, architecture, etc.) 1. Document _all_ pipelines, including: * How pipelines are targeted to specific agents (that is, queue targeting, tag targeting, etc.) * Dependencies between pipelines * Shared resources or configurations 1. Identify cross-pipeline interactions: * Pipeline triggers * Artifact sharing * Other dependencies ###### Evaluate complexity of the agent migration process Consider the following factors that might increase the complexity of moving your unclustered agents to clustered ones: - **Agents assigned to multiple queues (not supported in clusters)**: In unclustered environments, a single agent can be assigned to multiple queues. However, with clusters, each agent can only belong to one queue. This limitation requires restructuring your agent configuration. * You'll need to decide whether to create separate agents for each queue or consolidate queues. * Pipeline configurations may need updating to accommodate the new queue structure. - **Use of agent tags across different queues**: Agent tags in clusters are scoped to the specific cluster they belong to, unlike in unclustered environments where tags can be used across multiple queues for targeting purposes. * Pipeline configurations that target agents using tags across queues will need to be updated. 
* You may need to standardize tagging conventions within each cluster. * Cross-cluster targeting patterns will require redesign using [rules](/docs/pipelines/security/clusters/rules) to allow specific exceptions. - **Pipelines that trigger other pipelines**: Pipelines across different clusters will not be able to trigger each other by default, requiring additional configuration if you split interconnected pipelines into separate clusters. * You'll need to create [rules](/docs/pipelines/security/clusters/rules) to allow cross-cluster pipeline triggering. * Consider grouping pipelines that interact frequently into the same cluster (at least initially, to simplify the agent migration process). * Triggers between clusters may behave differently from triggers within the same cluster (for instance, [rules](/docs/pipelines/security/clusters/rules) allow [conditionals](/docs/pipelines/configure/conditionals)). - **Shared infrastructure or configuration between different teams or environments**: When different environments share infrastructure or configurations, sharing these resources across separate clusters adds complexity to the entire agent migration process. * Shared resources like caches, artifacts, or Docker images may need reconfiguring. * Teams might need to coordinate the timing of their individual agent migrations to avoid disruption. * You may need to rethink how shared infrastructure is accessed across cluster boundaries. - **Custom scripts or automation that interacts with the Buildkite API**: Any custom scripts, integrations, or automations that interact with the Buildkite API might need updates to work with the cluster model. * Scripts that create or manage agents may need updating to handle [agent tokens](/docs/agent/self-hosted/tokens) (which work with clusters). * Reporting tools that query agent or pipeline state might need modification. * CI/CD automation that interacts with Buildkite Pipelines may require updates to handle the clustered structure. 
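To make the last point concrete, the following sketch shows how an automation script might look up a cluster's ID by name and then assign a pipeline to that cluster, using the same REST endpoints shown earlier on this page. This is a minimal sketch, assuming `jq` is installed and `$TOKEN` and `$ORG_SLUG` are set; the cluster name `Production` and pipeline slug `my-pipeline` are hypothetical:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Look up the target cluster's ID by name from the List clusters endpoint.
# "Production" is a hypothetical cluster name.
cluster_id=$(
  curl -sf -H "Authorization: Bearer $TOKEN" \
    "https://api.buildkite.com/v2/organizations/${ORG_SLUG}/clusters" \
    | jq -r '.[] | select(.name == "Production") | .id'
)

# Assign a hypothetical pipeline to that cluster via the Update pipeline endpoint.
curl -sf -H "Authorization: Bearer $TOKEN" \
  -X PATCH "https://api.buildkite.com/v2/organizations/${ORG_SLUG}/pipelines/my-pipeline" \
  -H "Content-Type: application/json" \
  -d "{ \"cluster_id\": \"${cluster_id}\" }"
```

A script like this can be run once per pipeline, or looped over the List pipelines response to move many pipelines in one pass.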
Use this assessment to determine which agent migration approach is best for your Buildkite organization. ##### Migration strategies Choose a migration strategy based on your organization's structure, CI/CD ownership model, and risk tolerance. ###### Single-cluster migration This migration strategy is the fastest and safest for most organizations. Start with a single cluster containing all your agents, then optionally split into multiple clusters later. ###### Advantages - Involves minimal queue and pipeline configuration changes. - Complete migration could be achieved within a matter of hours—not days or weeks. - Instant access to [cluster insights](/docs/pipelines/insights/clusters) and [queue metrics](/docs/pipelines/insights/queue-metrics). - Easiest environment to revert if issues arise, provided you have made copies of your agents as part of the [Migrate unclustered agents to clusters](#agent-migration-process-migrate-unclustered-agents-to-clusters) process (when running the agents in a [self-hosted (hybrid)](/docs/pipelines/architecture#self-hosted-hybrid-architecture) environment). ###### Considerations - All agents initially share the same security boundary. - Once you have completed migrating all your unclustered agents across to a single cluster, you may wish or need to split your agents and pipelines into multiple clusters later, using a [team-by-team](#migration-strategies-team-by-team-migration), [all-at-once](#migration-strategies-all-at-once-migration), or a [hybrid](#migration-strategies-hybrid-strategy) migration strategy, for an improved and more secure build environment. See [Agent migration process](#agent-migration-process) for detailed steps on the full migration process, bearing in mind that you are only working with a single cluster. ###### Team-by-team migration This migration strategy is best for Buildkite organizations that have their CI/CD ownership _distributed_ across multiple teams. 
###### Advantages - Inherently has lower risk, as changes affect only one team at a time. - Teams can migrate to clustered agents at their own pace. - Easier to troubleshoot issues if they arise. ###### Considerations - Requires a longer overall migration timeframe. - May require temporary solutions for cross-team pipeline dependencies. - Requires coordination between teams for shared resources. Learn more about the [technical considerations](#technical-considerations) of migrating agents from unclustered to clustered environments, and the [Agent migration process](#agent-migration-process) for detailed steps on the full migration process. ###### All-at-once migration This migration strategy is best for Buildkite organizations with _centrally_ managed infrastructure, particularly those using infrastructure-as-code tools like Terraform. ###### Advantages - Provides a shorter migration timeframe. - Provides a consistent implementation across all teams. - Avoids a prolonged hybrid state, where your Buildkite organization contains a mix of clustered and unclustered agents. ###### Considerations - Higher risk, as any issues encountered during migration affect all teams at once. - Requires more extensive planning and testing. Learn more about the [technical considerations](#technical-considerations) of migrating agents from unclustered to clustered environments, and the [Agent migration process](#agent-migration-process) for detailed steps on the full migration process. ###### Hybrid strategy Consider a hybrid of the [team-by-team](#migration-strategies-team-by-team-migration) and [all-at-once](#migration-strategies-all-at-once-migration) migration strategies if your Buildkite organization has both _distributed_ and _centralized_ CI/CD components: - Migrate core infrastructure in one operation. - Allow teams to gradually migrate their team-specific agents and pipelines over to clusters. - Create a timeline with clear milestones for the complete agent migration process. 
Learn more about the [technical considerations](#technical-considerations) of migrating agents from unclustered to clustered environments, and the [Agent migration process](#agent-migration-process) for detailed steps on the full migration process.

##### Technical considerations

Understanding these differences is crucial for planning your migration.

###### Agent queue differences

The following table lists the differences in how agents, queues, and tags are handled between unclustered and clustered environments.

| Feature | Unclustered environment | Clustered environment |
|---------|-------------------------|-----------------------|
| Queue assignment | A single agent can be assigned to multiple queues by starting it with multiple `queue` tags. | Each agent can only belong to one queue. |
| Agent tags | Tags can be used across multiple queues for targeting purposes. | Tags are scoped to the specific cluster the agent belongs to. |

**Migration considerations**: since each clustered agent can only belong to one queue, decide whether to create separate agents for each queue tag used in your unclustered environment, or to consolidate queues.

###### Agent token differences

The following table lists the differences between the former unclustered agent tokens and the newer agent tokens associated with clusters.

| Feature | Unclustered agent tokens | Agent tokens for clusters |
|---------|--------------------------|---------------------------|
| Scope | Connect agents to the **Unclustered** area of your Buildkite organization. | Connect agents to a specific cluster. |
| Management | Managed at the Buildkite organization level. | Can be managed by users with [cluster maintainer](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) privileges. |
| Expiration | No built-in expiration. | Can be created with a [limited lifetime](/docs/agent/self-hosted/tokens#agent-token-lifetime) using the REST or GraphQL APIs. |

###### Security considerations

- Switching from unclustered agent tokens to agent tokens for clusters is necessary for migrating your agents to clusters.
- Ensure secure distribution of new agent tokens.
- Plan for token rotation if needed, and if doing so, plan to implement [agent token expiration with a limited lifetime](/docs/agent/self-hosted/tokens#agent-token-lifetime) (available when [creating agent tokens](/docs/agent/self-hosted/tokens#create-a-token) using the [REST](/docs/agent/self-hosted/tokens#create-a-token-using-the-rest-api) or [GraphQL](/docs/agent/self-hosted/tokens#create-a-token-using-the-graphql-api) APIs).

###### Pipeline relationships

- As part of [evaluating the complexity of the agent migration process](#assessing-your-current-environment-evaluate-complexity-of-the-agent-migration-process), be aware of which of your pipelines trigger others.
- You'll need to create [rules](/docs/pipelines/security/clusters/rules) to allow cross-cluster pipeline interactions, such as triggering or reading cross-cluster artifacts.
- Consider how to structure your clusters to minimize the need for cross-cluster triggers, but also maintain meaningful boundaries. ##### Agent migration process This section outlines the complete migration process from unclustered to clustered agents, providing both an overview of each step and detailed implementation guidance. ###### Plan and prepare 1. Identify which clusters you need based on your organization's structure. * Common patterns include creating clusters to separate environments (development, test, production), platforms (Linux, macOS, Windows), or teams. * Create a mapping of existing agents to their future clusters. 1. Document your current unclustered setup including: * Agent configurations and locations. * Pipeline configurations and dependencies. * Cross-pipeline interactions. 1. Create a realistic timeline that accounts for testing and potential rollbacks. 1. Develop a communication plan for all teams affected by the migration. See [best practices on communication planning](#best-practices-and-recommendations-communication-planning) for some high-level guidelines on how to approach this step. ###### Set up your infrastructure 1. Set up infrastructure to support your new cluster configuration. * Update agent installation scripts or configuration management tools. * Prepare for temporary coexistence of clustered and unclustered agents during migration. 1. If using infrastructure as code, create or update templates to support the new cluster model. 1. Establish monitoring for both clustered and unclustered agents during the transition. ###### Create your clusters and queues 1. Create the [appropriate clusters](/docs/pipelines/security/clusters/manage#setting-up-clusters) within your Buildkite organization. 
You can [create clusters](/docs/pipelines/security/clusters/manage#create-a-cluster) using the [Buildkite interface](/docs/pipelines/security/clusters/manage#create-a-cluster-using-the-buildkite-interface), or [REST](/docs/pipelines/security/clusters/manage#create-a-cluster-using-the-rest-api) or [GraphQL](/docs/pipelines/security/clusters/manage#create-a-cluster-using-the-graphql-api) APIs. 1. Define the [appropriate queues](/docs/agent/queues/managing#setting-up-queues) within each cluster. Name queues according to their purpose (for example, `linux-amd64`, `macos-m1`, etc.), bearing in mind that basing the queue name on the [queue tag assigned to an agent when it was started](/docs/agent/queues#assigning-a-self-hosted-agent-to-a-queue) (in its unclustered environment) could reduce the complexity of the agent migration process. If any of your unclustered agents were assigned to multiple queues (that is, if the [agent was started with multiple queue tags in its unclustered environment](/docs/agent/queues#setting-up-queues-for-unclustered-agents)), then create a new queue in the relevant cluster for each of these queue tags, or (based on **Migration considerations** under [Agent queue differences](#technical-considerations-agent-queue-differences) above), perhaps just for the important queue tags you wish to continue using in your clustered environment. If defining multiple queues, select a sensible queue to be the default. Jobs that don't specify a queue will run on the default queue. Lastly, add descriptions to your queues to help users understand the queues' purposes and capabilities. 
You can create either: * [self-hosted queues](/docs/agent/queues/managing#create-a-self-hosted-queue) (using the [Buildkite interface](/docs/agent/queues/managing#create-a-self-hosted-queue-using-the-buildkite-interface), or [REST](/docs/agent/queues/managing#create-a-self-hosted-queue-using-the-rest-api) or [GraphQL](/docs/agent/queues/managing#create-a-self-hosted-queue-using-the-graphql-api) APIs), or * [Buildkite hosted queues](/docs/agent/queues/managing#create-a-buildkite-hosted-queue) (also using the [Buildkite interface](/docs/agent/queues/managing#create-a-buildkite-hosted-queue-using-the-buildkite-interface), or [REST](/docs/agent/queues/managing#create-a-buildkite-hosted-queue-using-the-rest-api) or [GraphQL](/docs/agent/queues/managing#create-a-buildkite-hosted-queue-using-the-graphql-api) APIs). > 📘 Cluster queue limit > By default, you can create up to 50 queues per cluster. If your organization requires more than 50 queues in a cluster, contact [support@buildkite.com](mailto:support@buildkite.com). 1. Configure the necessary permissions for each cluster. As part of this process, consider how you'll set up [cluster maintainers](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) so that infrastructure teams can self-manage agent resources. If you'll be: - Running any of your clustered agents in a [self-hosted (hybrid)](/docs/pipelines/architecture#self-hosted-hybrid-architecture) environment, continue on to the [Configure agent tokens](#agent-migration-process-configure-agent-tokens) and [Migrate unclustered agents to clusters](#agent-migration-process-migrate-unclustered-agents-to-clusters) sections of this process. - Running _all_ of your clustered agents as Buildkite hosted agents, you can skip to the [Move pipelines to clusters](#agent-migration-process-move-pipelines-to-clusters) section of this process. 
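Once your queues exist, pipeline steps select them by name via the `agents` attribute. As a minimal sketch (the step label, command, and the queue name `linux-amd64` are illustrative), a step targeting one of the queues created above might look like:

```yaml
steps:
  - label: "Run tests"
    command: "make test"
    agents:
      queue: "linux-amd64"
```

Steps that omit the `queue` key under `agents` run on the cluster's default queue.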
###### Configure agent tokens This part of the agent migration process is only applicable for clustered agents running in a [self-hosted (hybrid)](/docs/pipelines/architecture#self-hosted-hybrid-architecture) environment. 1. Generate new [agent tokens](/docs/agent/self-hosted/tokens) for each cluster. 1. Securely distribute these agent tokens to the appropriate teams or systems. 1. Document the mapping between these agent tokens and their clusters. You can [create agent tokens](/docs/agent/self-hosted/tokens#create-a-token) using the [Buildkite interface](/docs/agent/self-hosted/tokens#create-a-token-using-the-buildkite-interface), or the [REST](/docs/agent/self-hosted/tokens#create-a-token-using-the-rest-api) or [GraphQL](/docs/agent/self-hosted/tokens#create-a-token-using-the-graphql-api) API. Consider rotating tokens and setting an expiry date as you create them. Learn more about this process in [Agent token lifetime](/docs/agent/self-hosted/tokens#agent-token-lifetime). ###### Migrate unclustered agents to clusters This part of the agent migration process is only applicable for clustered agents running in a [self-hosted (hybrid)](/docs/pipelines/architecture#self-hosted-hybrid-architecture) environment. 1. Update your unclustered agent configurations—preferably by making a new copy of each agent for its new clustered environment. For each new agent, replace its existing unclustered agent token with its new agent token for its cluster. As part of a [best practice](#best-practices-and-recommendations) strategy to [minimize downtime](#best-practices-and-recommendations-minimizing-downtime), creating copies of your agents like this results in two instances of each agent—one running in your original unclustered environment and the other associated with its appropriate cluster. This allows you to fall back on your unclustered agents if you have issues getting any of your clustered agents to operate as expected, thereby minimizing downtime. 
Be aware that this situation is only temporary, since you'll eventually be [decommissioning your unclustered agents](#agent-migration-process-decommission-your-unclustered-resources). 1. For each of your agents, ensure it is configured to start with its appropriate tags for targeting, _and_ with the queue that was [already defined in your cluster](#agent-migration-process-create-your-clusters-and-queues) (or the [default queue](/docs/agent/queues#assigning-a-self-hosted-agent-to-a-queue-the-default-self-hosted-queue)) that the agent should join. For example, the following code snippet shows how to [configure an agent](/docs/agent/self-hosted/configure) from its [former unclustered environment that defined multiple queue tags](/docs/agent/queues#setting-up-queues-for-unclustered-agents), to instead target its single queue ([configured previously](#agent-migration-process-create-your-clusters-and-queues)) for its clustered environment:

```bash
# Before migration (unclustered) - multiple queues
buildkite-agent start \
  --token "unclustered-agent-token-value" \
  --tags "queue=linux,queue=testing,arch=amd64,env=prod"

# After migration (clustered) - single queue
buildkite-agent start \
  --token "agent-token-value-for-cluster" \
  --tags "queue=linux,arch=amd64,env=prod"
```

1. Restart agents to apply the new configuration. 1. Verify that agents appear in the correct cluster in the Buildkite interface. ###### Move pipelines to clusters Move all the pipelines that were associated with your unclustered agents to their appropriate clusters (that is, the clusters associated with the agents that will build these pipelines). Also, see [best practices](#best-practices-and-recommendations) on [minimizing downtime](#best-practices-and-recommendations-minimizing-downtime) and [testing strategies](#best-practices-and-recommendations-testing-strategies) for some high-level guidelines on how to approach moving your pipelines over to clusters. 1. For each such pipeline: 1. 
Navigate to the pipeline's **Settings**. 1. On the **General** settings page, select the **Change Cluster** button, and then select the appropriate cluster from the resulting dialog. 1. Select **Save** to update the pipeline's cluster. 1. Update any queue references in the pipeline's steps if required. Check both the pipeline's **Settings** > **Steps**, as well as the relevant `pipeline.yml` file uploaded from its Git repository. 1. Ensure these queue reference updates are saved. 1. Configure cross-cluster interactions if needed: 1. Navigate to your Buildkite organization's **Settings** > **Rules** to access its [**Rules** page](https://buildkite.com/organizations/~/rules). 1. Create [rules](/docs/pipelines/security/clusters/rules) to allow specific cross-cluster interactions. 1. Test that these new rules function as expected. 1. Update any CI/CD automation that interacts with these pipelines. Alternatively, consider using [Terraform](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs) to assign pipelines to clusters in a single action at once. ###### Test and validate the migrated pipelines 1. Run test builds on migrated pipelines to verify their execution. 1. Verify that agents pick up jobs correctly and use [queue metrics](/docs/pipelines/insights/queue-metrics) graphs to monitor job creation within your clusters. 1. Check that any pipeline triggers that you implemented work as expected. 1. Monitor for any errors or unexpected behavior. 1. Test failure scenarios to ensure proper recovery. ###### Decommission your unclustered resources 1. Once [all tests pass](#agent-migration-process-test-and-validate-the-migrated-pipelines), gradually increase traffic to the pipelines that were migrated to clusters. 1. Monitor for any issues during the transition. If you are on an [Enterprise](https://buildkite.com/pricing/) plan, monitor [cluster insights](/docs/pipelines/insights/clusters). 1. 
After a suitable monitoring period (typically 1-2 weeks): * Decommission your old unclustered agents. * Archive any obsolete pipelines. * Remove temporary configurations used during the agent migration process. 1. Document the final cluster configuration for future reference. ##### Best practices and recommendations ###### Minimizing downtime - Maintain parallel unclustered and clustered agents during migration. - Migrate one pipeline at a time to minimize risk. - Schedule migrations during low-traffic periods. - Have a rollback plan ready in the event that unexpected issues are encountered. ###### Testing strategies - Create a test cluster with a subset of pipelines and agents before full migration. - Test with non-critical pipelines first. - Use feature branches to validate pipeline behavior in clusters. - Simulate failure scenarios to verify recovery processes. ###### Communication planning - Notify all stakeholders well in advance of the migration. - Provide documentation on how the new cluster structure works. - Offer training or support sessions for teams unfamiliar with clusters. - Set up a communication channel for reporting issues during migration. ##### Troubleshooting common issues ###### Agent connection problems **Issue**: Agents fail to connect to Buildkite after the migration. **Solutions**: - Verify the agent token is correct and belongs to the intended cluster. - Check network connectivity between the agent and Buildkite. - Review agent logs for error messages. ###### Pipeline execution failures **Issue**: Pipelines don't execute on clustered agents. **Solutions**: - Verify the pipeline is assigned to the correct cluster. - Check that any queues referenced in the pipeline steps exist in the cluster. - Ensure agent tags match what's expected by the pipeline. - Verify that agents are connected and healthy. ###### Queue configuration issues **Issue**: Jobs target the wrong agents or don't run. **Solutions**: - Review queue names and configurations. 
- Verify agent tags are correctly set and aren't relying on targeting agents across queues. - Check pipeline step queue targeting in your YAML. - Consider using more descriptive queue names to avoid confusion. --- ### Overview URL: https://buildkite.com/docs/pipelines/security/clusters/rules #### Rules overview _Rules_ is a Buildkite feature that can do the following: - Grant access between Buildkite resources that would normally be restricted by [cluster](/docs/pipelines/security/clusters), [visibility](/docs/pipelines/configure/public-pipelines), or [permissions](/docs/platform/team-management/permissions). - Allow an action between a source resource and a target resource across your Buildkite organization. For example, allowing one pipeline's builds to trigger another pipeline's builds. > 📘 > The _rules_ feature is currently in development, and is enabled on an opt-in basis for early access. To enable rules for your organization, please contact Buildkite's Support team at support@buildkite.com. ##### Rule types Buildkite Pipelines supports two types of rules that allow one pipeline build to: - [Trigger another pipeline build](#rule-types-pipeline-dot-trigger-build-dot-pipeline). - [Read the artifacts generated by another pipeline build](#rule-types-pipeline-dot-artifacts-read-dot-pipeline). ###### pipeline.trigger_build.pipeline This rule type allows one pipeline to trigger another, where: - Both pipelines are in the same or different [clusters](/docs/pipelines/security/clusters). - One pipeline is public and the other is private. > 📘 > This rule type overrides the usual [trigger step permissions checks](/docs/pipelines/configure/step-types/trigger-step#permissions) on users and teams. 
**Rule Document** format:

```json
{
  "rule": "pipeline.trigger_build.pipeline",
  "value": {
    "source_pipeline": "pipeline-uuid-or-slug",
    "target_pipeline": "pipeline-uuid-or-slug",
    "conditions": [
      "source.build.branch == 'main'",
      "source.build.commit == target.trigger.commit"
    ]
  }
}
```

where:

- `source_pipeline` is the UUID or slug of the pipeline that's allowed to trigger another pipeline.
- `target_pipeline` is the UUID or slug of the pipeline that can be triggered by the `source_pipeline`.
- `conditions` is an optional array of [conditionals](/docs/pipelines/configure/conditionals) that must be met to allow the `source_pipeline` to trigger the `target_pipeline`. Learn more about this in the following [Conditions](#conditions-trigger) section.

###### Conditions

The optional `conditions` field allows you to specify an array of [conditionals](/docs/pipelines/configure/conditionals) that must be met for the source pipeline (`source_pipeline`) to trigger the target pipeline (`target_pipeline`). In the example above, the rule would only allow triggering if the source pipeline's build branch is `main` and the commit of the source pipeline's build matches that of the target pipeline's trigger build.

If no conditions are specified, triggering is allowed in all cases between the source and target pipelines. If _any_ of the conditions _are not_ met, triggering is not allowed, even if the default permissions would have allowed triggering.

The conditions are evaluated using the [Buildkite conditionals syntax](/docs/pipelines/configure/conditionals#variable-and-syntax-reference). In the `pipeline.trigger_build.pipeline` rule, the available variables for conditions are:

| Variable | Type | Description |
| --- | --- | --- |
| `source.build.*` | `Build` | The triggering build in the source pipeline (contains the trigger step). This includes all the variables available for a [build](/docs/pipelines/configure/conditionals#variable-and-syntax-reference-variables). Example variables available: `source.build.branch` (the branch of the source pipeline's build), `source.build.commit` (the commit of the source pipeline's build), and `source.build.message` (the commit message of the source pipeline's build). |
| `target.trigger.branch` | `String` | The branch of the target pipeline that the trigger step is targeting. |
| `target.trigger.commit` | `String` | The commit of the target pipeline that the trigger step is targeting. |
| `target.trigger.message` | `String` | The commit message of the target pipeline that the trigger step is targeting. |

> 📘
> Conditions are shown in error messages when access is denied.

Learn more about creating rules in [Manage rules](/docs/pipelines/security/clusters/rules/manage).

###### Example use case: cross-cluster pipeline triggering

Clusters may be used to separate the environments necessary for building and deploying an application. For example, a continuous integration (CI) pipeline has been set up in cluster A and likewise, a continuous deployment (CD) pipeline in cluster B. Ordinarily, pipelines in separate clusters are not able to trigger builds between each other due to the strict isolation of clusters. However, a `pipeline.trigger_build.pipeline` rule would allow a trigger step in the CI pipeline of cluster A to target the CD pipeline in cluster B. Such rules would allow deployment to be triggered upon a successful CI build, while still maintaining the separation between the CI and CD agents in their respective clusters.

###### pipeline.artifacts_read.pipeline

This rule type allows one pipeline to access (that is, with read-only permissions) the artifacts built by another, where:

- Both pipelines are in the same or different [clusters](/docs/pipelines/security/clusters).
- One pipeline is public and another is private.
**Rule Document** format:

```json
{
  "rule": "pipeline.artifacts_read.pipeline",
  "value": {
    "source_pipeline": "pipeline-uuid-or-slug",
    "target_pipeline": "pipeline-uuid-or-slug",
    "conditions": [
      "source.build.branch == target.build.branch"
    ]
  }
}
```

where:

- `source_pipeline` is the UUID or slug of the pipeline that's allowed to access the artifacts from another pipeline.
- `target_pipeline` is the UUID or slug of the pipeline whose artifacts can be accessed by jobs in the `source_pipeline` pipeline.
- `conditions` is an optional array of [conditionals](/docs/pipelines/configure/conditionals) that must be met to allow the jobs of the `source_pipeline` to access the artifacts of the `target_pipeline`. Learn more about this in the following [Conditions](#conditions-artifacts) section.

###### Conditions

The optional `conditions` field allows you to specify an array of [conditionals](/docs/pipelines/configure/conditionals) that must be met for jobs of the source pipeline (`source_pipeline`) to access artifacts built by the target pipeline (`target_pipeline`). In the example above, the rule would only allow artifact access if the source pipeline's build branch matches the target pipeline's build branch.

If no conditions are specified, artifact access is allowed in all cases between the source and target pipelines. If _any_ of the conditions _are not_ met, artifact access is not allowed, even if the default permissions would have allowed access.

The conditions are evaluated using the [Buildkite conditionals syntax](/docs/pipelines/configure/conditionals#variable-and-syntax-reference). In the `pipeline.artifacts_read.pipeline` rule, the available variables for conditions are:

| Variable | Type | Description |
| --- | --- | --- |
| `source.build.*` | `Build` | The build in the source pipeline that is accessing the artifacts. This includes all the variables available for a [build](/docs/pipelines/configure/conditionals#variable-and-syntax-reference-variables). Example variables available: `source.build.branch` (the branch of the source pipeline's build), `source.build.commit` (the commit of the source pipeline's build), and `source.build.message` (the commit message of the source pipeline's build). |
| `target.build.*` | `Build` | The build in the target pipeline that the artifacts are being accessed from. This includes all the variables available for a [build](/docs/pipelines/configure/conditionals#variable-and-syntax-reference-variables). Example variables available: `target.build.branch` (the branch of the target pipeline's build), `target.build.commit` (the commit of the target pipeline's build), and `target.build.message` (the commit message of the target pipeline's build). |
| `source.request.query` | `String` | The query used to search for artifacts in the target build. See [Searching artifacts](/docs/agent/cli/reference/artifact#searching-artifacts) for more information on the query syntax. |

> 📘
> Conditions are shown in error messages when access is denied.

Learn more about creating rules in [Manage rules](/docs/pipelines/security/clusters/rules/manage).

###### Example use case: sharing assets between clusters

Artifacts are not accessible between pipelines across different clusters. For example, a deployment pipeline in cluster B cannot ordinarily access artifacts uploaded by a CI pipeline in cluster A. However, a `pipeline.artifacts_read.pipeline` rule can be used to override this restriction. For example, assets uploaded as artifacts by the CI pipeline would now be accessible to the deployment pipeline via the `buildkite-agent artifact download --build xxx` command.
--- ### Manage rules URL: https://buildkite.com/docs/pipelines/security/clusters/rules/manage #### Manage rules This page provides details on how to manage [rules](/docs/pipelines/security/clusters/rules) within your Buildkite organization. ##### Create a rule New rules can be created by [Buildkite organization administrators](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions) using the [**Rules** page](#create-a-rule-using-the-buildkite-interface), as well as Buildkite's [REST API](#create-a-rule-using-the-rest-api) or [GraphQL API](#create-a-rule-using-the-graphql-api). ###### Using the Buildkite interface To create a new rule using the Buildkite interface: 1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. In the **Pipelines** section, select **Rules** > **New Rule** to open its page. 1. For **Rule Type**, select the [type of rule](/docs/pipelines/security/clusters/rules#rule-types) to be created, that is, either **pipeline.trigger_build.pipeline** or **pipeline.artifacts_read.pipeline**. 1. Specify a short **Description** for the rule. 1. In the **Rule Document** field: * Specify the relevant values (either a pipeline UUID or a pipeline slug) for both the `source_pipeline` and `target_pipeline` pipelines, of your [**pipeline.trigger_build.pipeline**](/docs/pipelines/security/clusters/rules#rule-types-pipeline-dot-trigger-build-dot-pipeline) or [**pipeline.artifacts_read.pipeline**](/docs/pipelines/security/clusters/rules#rule-types-pipeline-dot-artifacts-read-dot-pipeline) rule. You can find the UUID values for these pipelines on the pipelines' respective **Settings** page under the **GraphQL API integration** section. 
* Specify any optional conditions that must be met for the source pipeline to [trigger](/docs/pipelines/security/clusters/rules#conditions-trigger) or [access artifacts built by](/docs/pipelines/security/clusters/rules#conditions-artifacts) its target pipeline. 1. Select **Create Rule**. The rule is created and presented on the **Rules** page, with a brief description of the rule type and the relationship between both pipelines. ###### Using the REST API To [create a new rule](/docs/apis/rest-api/rules#rules-create-a-rule) using the [REST API](/docs/apis/rest-api), run the following example `curl` command: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/rules" \ -H "Content-Type: application/json" \ -d '{ "rule": "pipeline.trigger_build.pipeline", "description": "A short description for your rule", "value": { "source_pipeline": "{pipeline-uuid-or-slug}", "target_pipeline": "{pipeline-uuid-or-slug}", "conditions": ["{condition-1}", "{condition-2}"] } }' ``` where: - `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite. - `{org.slug}` can be obtained: * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite. * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations" ``` - `rule` is the [type of rule](/docs/pipelines/security/clusters/rules#rule-types) to be created, that is, either `pipeline.trigger_build.pipeline` or `pipeline.artifacts_read.pipeline`. - `description` (optional) is a short description for the rule. 
- `source_pipeline` and `target_pipeline` accept either a pipeline slug or UUID. Pipeline UUID values for `source_pipeline` and `target_pipeline` can be obtained: * From the **Pipeline Settings** page of the appropriate pipeline. To do this: 1. Select **Pipelines** (in the global navigation) > the specific pipeline > **Settings**. 1. Once on the **Pipeline Settings** page, copy the `UUID` value from the **GraphQL API Integration** section. * By running the [List pipelines](/docs/apis/rest-api/pipelines#list-pipelines) REST API query to obtain this value from `id` in the response from the specific pipeline. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines" ``` - `conditions` (optional) is an array of conditions that must be met for the source pipeline to [trigger](/docs/pipelines/security/clusters/rules#conditions-trigger) or [access artifacts built by](/docs/pipelines/security/clusters/rules#conditions-artifacts) its target pipeline. Some example values could include: * `source.build.creator.teams includes 'core'` * `source.build.branch == 'main'` ###### Using the GraphQL API To [create a new rule](/docs/apis/graphql/cookbooks/rules#create-a-rule) using the [GraphQL API](/docs/apis/graphql-api), use the `ruleCreate` mutation based on the following example, where the contents of the `value` field must be a JSON-encoded string: ```graphql mutation { ruleCreate(input: { organizationId: "organization-id", type: "pipeline.trigger_build.pipeline", description: "A short description for your rule", value: "{\"source_pipeline\":\"pipeline-uuid-or-slug\",\"target_pipeline\":\"pipeline-uuid-or-slug\",\"conditions\":[\"condition-1\",\"condition-2\"]}" }) { rule { id type description targetType sourceType source { ... on Pipeline { uuid } } target { ...
on Pipeline { uuid } } effect action createdBy { id name } } } } ``` where: - `organizationId` (required) can be obtained: * From the **GraphQL API Integration** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite. * By running a `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, followed by a [getOrgId](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's `id` using the organization's slug. For example: Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations: ```graphql query getCurrentUsersOrgs { viewer { organizations { edges { node { name slug } } } } } ``` Step 2. Run `getOrgId` with the appropriate slug value above to obtain this organization's `id` in the response: ```graphql query getOrgId { organization(slug: "organization-slug") { id uuid slug } } ``` **Note:** The `organization-slug` value can also be obtained from the end of your Buildkite URL, by selecting **Pipelines** in the global navigation of your organization in Buildkite. - `type` is the [type of rule](/docs/pipelines/security/clusters/rules#rule-types) to be created, that is, either `pipeline.trigger_build.pipeline` or `pipeline.artifacts_read.pipeline`. - `description` (optional) is a short description for the rule. - `source_pipeline` and `target_pipeline` accept either a pipeline slug or UUID. Pipeline UUID values for `source_pipeline` and `target_pipeline` can be obtained: * From the **Pipeline Settings** page of the appropriate pipeline. To do this: 1. Select **Pipelines** (in the global navigation) > the specific pipeline > **Settings**. 1. 
Once on the **Pipeline Settings** page, copy the `UUID` value from the **GraphQL API Integration** section. * By running the `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, then a [getOrgPipelines](/docs/apis/graphql/schemas/query/organization) query to obtain the pipeline's `uuid` in the response. For example: Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations: ```graphql query getCurrentUsersOrgs { viewer { organizations { edges { node { name slug } } } } } ``` Step 2. Run `getOrgPipelines` with the appropriate slug value above to obtain the pipelines' `uuid` values in the response: ```graphql query getOrgPipelines { organization(slug: "organization-slug") { pipelines(first: 100) { edges { node { id uuid name } } } } } ``` - `conditions` (optional) is an array of conditions that must be met for the source pipeline to [trigger](/docs/pipelines/security/clusters/rules#conditions-trigger) or [access artifacts built by](/docs/pipelines/security/clusters/rules#conditions-artifacts) its target pipeline. Some example values could include: * `source.build.creator.teams includes 'core'` * `source.build.branch == 'main'` ##### Edit a rule Rules can be edited by [Buildkite organization administrators](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions) using the [**Rules** page](#edit-a-rule-using-the-buildkite-interface), as well as Buildkite's [GraphQL API](#edit-a-rule-using-the-graphql-api). When editing a rule, you can modify its **Description** and **Rule Document** details, where the latter is contained within the `value` field of API requests, although a rule's type is fixed once it is [created](#create-a-rule) and cannot be modified. ###### Using the Buildkite interface To edit an existing rule using the Buildkite interface: 1.
Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. In the **Pipelines** section, select **Rules** to access its page. 1. Expand the existing rule to be edited. 1. Select the **Edit** button to open the rule's **Edit Rule** page. 1. If required, modify the rule's short **Description**, or clear this field to remove this value. 1. In the **Rule Document** field: * Modify the relevant values (either a pipeline UUID or a pipeline slug) for both the `source_pipeline` and `target_pipeline` pipelines, of your [**pipeline.trigger_build.pipeline**](/docs/pipelines/security/clusters/rules#rule-types-pipeline-dot-trigger-build-dot-pipeline) or [**pipeline.artifacts_read.pipeline**](/docs/pipelines/security/clusters/rules#rule-types-pipeline-dot-artifacts-read-dot-pipeline) rule. You can find the UUID values for these pipelines on the pipelines' respective **Settings** page under the **GraphQL API integration** section. * Modify any optional `conditions` that must be met for the source pipeline to [trigger](/docs/pipelines/security/clusters/rules#conditions-trigger) or [access artifacts built by](/docs/pipelines/security/clusters/rules#conditions-artifacts) its target pipeline. To remove a condition, remove its specific value from the array, or to remove all conditions, remove the entire `conditions` array. 1. Select **Save Rule**. The rule is updated and you are returned to the **Rules** page. The rule's **Description** and other details can be accessed when the rule is expanded. 
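When editing (or creating) a rule through the GraphQL API, the Rule Document must be passed in the mutation's `value` field as a JSON-encoded string. A minimal Python sketch of producing that string from a plain dictionary; the pipeline slugs and condition below are placeholder values:

```python
import json

# Build the Rule Document as a dictionary, then JSON-encode it for the
# GraphQL `value` field. The slugs and condition here are placeholders.
rule_document = {
    "source_pipeline": "ci-pipeline",
    "target_pipeline": "deploy-pipeline",
    "conditions": ["source.build.branch == 'main'"],
}

value = json.dumps(rule_document)
print(value)
# When this string is embedded in the mutation input, a GraphQL client
# serializing the request will escape the embedded quotes as \" (the
# escaped form shown in the mutation examples on this page).
```

Using `json.dumps` rather than hand-writing the escaped string avoids quoting mistakes in the nested JSON.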
###### Using the GraphQL API To [edit an existing rule](/docs/apis/graphql/cookbooks/rules#edit-a-rule) using the [GraphQL API](/docs/apis/graphql-api), use the `ruleUpdate` mutation based on the following example, where the contents of the `value` field must be a JSON-encoded string: ```graphql mutation { ruleUpdate(input: { organizationId: "organization-id", id: "rule-id", description: "An optional, new short description for your rule", value: "{\"source_pipeline\":\"pipeline-uuid-or-slug\",\"target_pipeline\":\"pipeline-uuid-or-slug\",\"conditions\":[\"condition-1\",\"condition-2\"]}" }) { rule { id type description targetType sourceType source { ... on Pipeline { uuid } } target { ... on Pipeline { uuid } } effect action createdBy { id name } } } } ``` where: - `organizationId` (required) can be obtained: * From the **GraphQL API Integration** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite. * By running a `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, followed by a [getOrgId](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's `id` using the organization's slug. For example: Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations: ```graphql query getCurrentUsersOrgs { viewer { organizations { edges { node { name slug } } } } } ``` Step 2. Run `getOrgId` with the appropriate slug value above to obtain this organization's `id` in the response: ```graphql query getOrgId { organization(slug: "organization-slug") { id uuid slug } } ``` **Note:** The `organization-slug` value can also be obtained from the end of your Buildkite URL, by selecting **Pipelines** in the global navigation of your organization in Buildkite. - `id` is the rule ID value of the existing rule to be edited.
This value can be obtained: * From the **Rules** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite. Then, expand the existing rule and copy its **GraphQL ID** value. * By running a [List rules](/docs/apis/graphql/cookbooks/rules#list-rules) GraphQL API query to obtain the rule's `id` in the response. For example: ```graphql query getRules { organization(slug: "organization-slug") { rules(first: 10) { edges { node { id type source { ... on Pipeline { slug } } target { ... on Pipeline { slug } } } } } } } ``` - `description` (optional) is a short description for the rule. Omitting this value removes the existing description from the rule. - `source_pipeline` and `target_pipeline` accept either a pipeline slug or UUID. Pipeline UUID values for `source_pipeline` and `target_pipeline` can be obtained: * From the **Pipeline Settings** page of the appropriate pipeline. To do this: 1. Select **Pipelines** (in the global navigation) > the specific pipeline > **Settings**. 1. Once on the **Pipeline Settings** page, copy the `UUID` value from the **GraphQL API Integration** section. * By running the `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, then a [getOrgPipelines](/docs/apis/graphql/schemas/query/organization) query to obtain the pipeline's `uuid` in the response. For example: Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations: ```graphql query getCurrentUsersOrgs { viewer { organizations { edges { node { name slug } } } } } ``` Step 2.
Run `getOrgPipelines` with the appropriate slug value above to obtain the pipelines' `uuid` values in the response: ```graphql query getOrgPipelines { organization(slug: "organization-slug") { pipelines(first: 100) { edges { node { id uuid name } } } } } ``` - `conditions` (optional) is an array of conditions that must be met for the source pipeline to [trigger](/docs/pipelines/security/clusters/rules#conditions-trigger) or [access artifacts built by](/docs/pipelines/security/clusters/rules#conditions-artifacts) its target pipeline. Some example values could include: * `source.build.creator.teams includes 'core'` * `source.build.branch == 'main'` **Note:** To remove a condition, remove its specific value from this array, or to remove all conditions, remove the entire `conditions` array. ##### Delete a rule Rules can be deleted by [Buildkite organization administrators](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions) using the [**Rules** page](#delete-a-rule-using-the-buildkite-interface), as well as Buildkite's [REST API](#delete-a-rule-using-the-rest-api) or [GraphQL API](#delete-a-rule-using-the-graphql-api). ###### Using the Buildkite interface To delete an existing rule using the Buildkite interface: 1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. In the **Pipelines** section, select **Rules** to access its page. 1. Expand the existing rule to be deleted. 1. Select the **Delete** button to delete this rule. **Note:** Exercise caution here, as the rule is deleted immediately, without any confirmation prompt.
###### Using the REST API To [delete an existing rule](/docs/apis/rest-api/rules#rules-delete-a-rule) using the [REST API](/docs/apis/rest-api), run the following example `curl` command: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/rules/{rule.uuid}" ``` where: - `$TOKEN` is an [API access token](https://buildkite.com/user/api-access-tokens) scoped to the relevant **Organization** and **REST API Scopes** that your request needs access to in Buildkite. - `{org.slug}` can be obtained: * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite. * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations" ``` - `{rule.uuid}` can be obtained: * From the **Rules** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite. * By running a [List rules](/docs/apis/rest-api/rules#rules-list-rules) REST API query to obtain the rule's `uuid` in the response. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/rules" ``` **Important:** For the rule identified by its `uuid` in the response, ensure the pipeline UUIDs of the source (`source_uuid`) and target (`target_uuid`), as well as the rule type (`type`) match those of this rule to be deleted. 
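The check described in the **Important** note above (matching the rule's `type`, `source_uuid`, and `target_uuid` before deleting) can be sketched as a small filter over the List rules response. This is illustrative Python over a sample response shape; the field names follow those described above, but verify them against the actual REST API output, and the UUID values are placeholders:

```python
# Illustrative: pick the UUID of the rule to delete from a List rules
# response by matching its type and source/target pipeline UUIDs.
# The response shape and values below are a sketch, not real API output.

def find_rule_uuid(rules, rule_type, source_uuid, target_uuid):
    """Return the uuid of the first rule matching all three fields, else None."""
    for rule in rules:
        if (rule["type"] == rule_type
                and rule["source_uuid"] == source_uuid
                and rule["target_uuid"] == target_uuid):
            return rule["uuid"]
    return None

sample_response = [
    {"uuid": "rule-1", "type": "pipeline.trigger_build.pipeline",
     "source_uuid": "pipe-a", "target_uuid": "pipe-b"},
    {"uuid": "rule-2", "type": "pipeline.artifacts_read.pipeline",
     "source_uuid": "pipe-a", "target_uuid": "pipe-c"},
]

print(find_rule_uuid(sample_response,
                     "pipeline.artifacts_read.pipeline",
                     "pipe-a", "pipe-c"))  # rule-2
```

Matching on all three fields rather than the type alone avoids deleting a different rule between the same pair of pipelines.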
###### Using the GraphQL API To [delete an existing rule](/docs/apis/graphql/cookbooks/rules#delete-a-rule) using the [GraphQL API](/docs/apis/graphql-api), use the `ruleDelete` mutation, based on the following example: ```graphql mutation { ruleDelete(input: { organizationId: "organization-id", id: "rule-id" }) { deletedRuleId } } ``` where: - `organizationId` (required) can be obtained: * From the **GraphQL API Integration** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite. * By running a `getCurrentUsersOrgs` GraphQL API query to obtain the organization slugs for the current user's accessible organizations, followed by a [getOrgId](/docs/apis/graphql/schemas/query/organization) query, to obtain the organization's `id` using the organization's slug. For example: Step 1. Run `getCurrentUsersOrgs` to obtain the organization slug values in the response for the current user's accessible organizations: ```graphql query getCurrentUsersOrgs { viewer { organizations { edges { node { name slug } } } } } ``` Step 2. Run `getOrgId` with the appropriate slug value above to obtain this organization's `id` in the response: ```graphql query getOrgId { organization(slug: "organization-slug") { id uuid slug } } ``` **Note:** The `organization-slug` value can also be obtained from the end of your Buildkite URL, by selecting **Pipelines** in the global navigation of your organization in Buildkite. - `id` is the rule ID value of the existing rule to be deleted. This value can be obtained: * From the **Rules** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite. Then, expand the existing rule and copy its **GraphQL ID** value. * By running a [List rules](/docs/apis/graphql/cookbooks/rules#list-rules) GraphQL API query to obtain the rule's `id` in the response. 
For example: ```graphql query getRules { organization(slug: "organization-slug") { rules(first: 10) { edges { node { id type source { ... on Pipeline { slug } } target { ... on Pipeline { slug } } } } } } } ``` **Important:** For the rule identified by its `id` in the response, ensure the slugs of the source and target pipelines, as well as the rule type (`type`), match those of the rule to be deleted. --- ### Incoming webhooks URL: https://buildkite.com/docs/pipelines/security/incoming-webhooks #### Incoming webhooks Incoming webhooks are sent to Buildkite by source control providers ([GitHub](/docs/pipelines/source-control/github), [GitLab](/docs/pipelines/source-control/gitlab), [Bitbucket](/docs/pipelines/source-control/bitbucket), etc.) to trigger builds. This page answers the most frequent questions about the security of incoming webhooks in Buildkite. ##### What kind of information on incoming webhooks is logged by Buildkite? Buildkite only logs and temporarily stores the incoming webhook information as it was received, the relevant HTTP headers, and the remote IP for diagnostics purposes. This information is soon erased and cannot be used for auditing purposes. ##### How long does Buildkite store the information on incoming webhooks? Buildkite stores the information on recent incoming webhooks received per pipeline for approximately one week. It is used for troubleshooting purposes only — for example, to diagnose why a given hook didn't trigger a build. This functionality is not exposed in the UI. ##### What security information is stored in the secret in a Buildkite URL? Secrets are pipeline-specific. A secret can't prove that it hasn't been acquired by somebody else and added as a webhook inside their own GitHub repository (which is equally true of any other secret). However, all incoming webhooks from GitHub-hosted repositories are IP-filtered to provide an additional layer of security.
As a Buildkite user, you can [set up a pipeline and request a URL](/docs/pipelines/source-control/github#set-up-a-new-pipeline-for-a-github-repository) to add it to your GitHub organization as a webhook, proving it comes from your GitHub organization. IP filtering is available for: - GitHub - GitHub Enterprise Server - Bitbucket Server (not Bitbucket Cloud) - GitLab Community - GitLab Enterprise For GitHub's cloud-hosted products, Buildkite manages a [set of IP addresses](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/about-githubs-ip-addresses) and filters them automatically. For self-hosted solutions like GitHub Enterprise Server, you can provide the IP addresses to filter. To enable IP filtering for these repository providers: 1. Navigate to the [Repository providers](https://buildkite.com/organizations/~/repository-providers) in your organization's settings. 1. Enter the allowed IP addresses. ##### How do I access the webhook that triggered a build? There are two ways to do it: - To view the webhook that triggered the build, go to the build page and add "/webhook" at the end of its URL - for example `https://buildkite.com/rails/rails/builds/*****/webhook` - To find the actual payload of a webhook that triggered a build, use the following GraphQL snippet: ```graphql query FindWebhookPayload { build(uuid: "...") { source { ...on BuildSourceWebhook { headers payload } } } } ``` ##### Would adding a separate secret per webhook make it more secure? The webhook URL already contains a secret value, so a second secret field is redundant and doesn't increase security. ##### Is it possible to ensure that Buildkite is only getting incoming webhooks from my GitHub organization? Can this be specifically set up on Buildkite's side? 
The secret verifies that Buildkite receives webhooks from an organization with sufficient Buildkite rights — meaning that it is able to obtain the secret webhook URL, which Buildkite treats as equivalent to coming from your GitHub organization. By definition, anyone with that level of access could change any other security setting in your Buildkite organization. The webhook secret is generated by Buildkite, per pipeline. The only thing a webhook can do is trigger a build of a commit that already exists in the pipeline's configured repository, using the pipeline's configured set of build steps. ##### Can I (re)issue a webhook secret using the Buildkite UI? You can ask Buildkite support (at support@buildkite.com) to create a new webhook secret for you. In case of an emergency, you could modify the existing pipeline to make it unbuildable and replace it with a new one, with a new webhook secret. --- ### Overview URL: https://buildkite.com/docs/pipelines/security/oidc #### OIDC in Buildkite Pipelines [OpenID Connect (OIDC)](https://openid.net/developers/how-connect-works/) is an authentication protocol based on the [OAuth 2.0 framework](https://auth0.com/docs/authenticate/protocols/oauth/). With OIDC, one system or service issues a typically short-lived _OIDC token_, which is a signed [JSON Web Token (JWT)](https://jwt.io/) containing metadata (or _claims_) about a user or object. This token can be consumed by another service (which may be offered by a third-party or by the same organization) to authenticate the user or object. An _OIDC policy_ configured on this other service defines which OIDC tokens, based on their claims (also known as _asserted_ claims), are permitted to perform actions. If the OIDC token's asserted claims comply with those of the OIDC policy configured in the other service, the token is authenticated and the service issuing the token is permitted to perform its actions on the other service.
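Since an OIDC token is a JWT, its claims are readable by base64url-decoding the token's payload segment; the consuming service must still verify the signature against the issuer's keys, which is omitted here. The token below is fabricated for illustration, and the claim names are illustrative stand-ins for the pipeline and organization metadata described above:

```python
import base64
import json

# Decode the claims of a JWT without verifying its signature.
# Real consumers MUST verify the signature; this sketch only shows that
# the claims are plain base64url-encoded JSON.

def decode_claims(jwt: str) -> dict:
    payload_b64 = jwt.split(".")[1]
    # base64url payloads may omit padding; restore it before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Build a fake, unsigned token for illustration (claim names are
# hypothetical examples of pipeline/organization metadata).
claims = {
    "iss": "https://agent.buildkite.com",
    "sub": "organization:example-org:pipeline:example-pipeline",
    "organization_slug": "example-org",
    "pipeline_slug": "example-pipeline",
    "exp": 1700000300,  # short expiry, matching the short-lived design
}

fake_jwt = ".".join([
    b64url(json.dumps({"alg": "none"}).encode()),
    b64url(json.dumps(claims).encode()),
    "",  # empty signature segment in this unsigned example
])

print(decode_claims(fake_jwt)["pipeline_slug"])  # example-pipeline
```

An OIDC policy on the consuming service makes its allow/deny decision by comparing claims decoded this way (after signature verification) against its configured assertions.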
You can configure third-party products and services, such as [AWS](https://aws.amazon.com/), [GCP](https://cloud.google.com/), [Azure](https://azure.microsoft.com/) and many others, as well as Buildkite products, such as [Package Registries](/docs/package-registries/security/oidc), with OIDC policies that only permit Buildkite agent interactions from specific Buildkite organizations, pipelines, and agents, based on the metadata associated with the pipeline's job. A Buildkite OIDC token is issued by a Buildkite agent, asserting claims about the slugs of the pipeline it is building and the organization that contains this pipeline, the ID of the job that created the token, as well as other claims, such as the name of the branch used in the build, the SHA of the commit that triggered the build, and the agent ID. Such a token is:

- Associated with a Buildkite agent interaction to perform one or more actions within your third-party services. If the token's claims do not comply with the service's OIDC policy, the token is rejected, along with any subsequent interactions from the Buildkite agent's pipeline jobs. If the claims do comply, the Buildkite agent and its permitted pipeline's jobs will have access to the allowable actions defined by these services.
- Short-lived, to further mitigate the risk of compromising the security of these services should the token accidentally be leaked.

The [Buildkite agent's `oidc` command](/docs/agent/cli/reference/oidc) allows you to request an OIDC token from Buildkite containing claims about the pipeline's current job. These tokens can then be consumed by federated systems like AWS, and exchanged for authenticated role-based access with specific permissions to interact with your cloud environments. By default, the token's `sub` (subject) claim identifies the pipeline.
You can use the `--subject-claim` flag to set it to a different immutable identifier, such as a cluster UUID or organization UUID, giving you control over how broadly or narrowly trust is scoped. See [Custom subject claims](/docs/agent/cli/reference/oidc#custom-subject-claims) for details. This section of the Buildkite Docs covers Buildkite's OIDC implementation with other federated systems, such as [AWS](/docs/pipelines/security/oidc/aws) and [Azure](/docs/pipelines/security/oidc/azure).

---

### OIDC with AWS

URL: https://buildkite.com/docs/pipelines/security/oidc/aws

#### OIDC with AWS

The [Buildkite agent's `oidc` command](/docs/agent/cli/reference/oidc) allows you to request an [OpenID Connect (OIDC)](https://openid.net/developers/how-connect-works/) token containing _claims_ about the current pipeline and its job. These tokens can be consumed by AWS and exchanged for an Identity and Access Management (IAM) role with AWS-scoped permissions. This process uses the following Buildkite plugins to implement OIDC with AWS and your Buildkite pipelines:

- [AWS assume-role-with-web-identity](https://github.com/buildkite-plugins/aws-assume-role-with-web-identity-buildkite-plugin)
- [AWS SSM Buildkite Plugin](https://github.com/buildkite-plugins/aws-ssm-buildkite-plugin)

Learn more about:

- How OIDC tokens are constructed and how to extract and use claims in the [OpenID Connect Core documentation](https://openid.net/specs/openid-connect-core-1_0.html#IDToken).
- Amazon's implementation of OIDC with their federated system in [Create an OpenID Connect (OIDC) identity provider in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html) of the AWS IAM User Guide.

##### Step 1: Set up an OIDC provider in your AWS account

First, you'll need to set up an IAM OIDC provider in your AWS account.
Learn more about how to do this in the [Create an OpenID Connect (OIDC) identity provider in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html) page of the AWS IAM User Guide. On this page, as part of the [Creating and managing an OIDC provider (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html#manage-oidc-provider-console) process, specify the following values:

- **Provider URL**: `https://agent.buildkite.com`
- **Audience**: `sts.amazonaws.com`

##### Step 2: Create a new (or update an existing) IAM role to use with your pipelines

Creating new or updating existing IAM roles is conducted through your AWS account. Learn more about how to do this in the [Creating a role using custom trust policies (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-custom.html) page of the AWS IAM User Guide. As part of this process:

1. Choose the **Custom trust policy** role type.
1. Copy the example trust policy in the following JSON code block and paste it into a code editor:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::AWS_ACCOUNT_ID:oidc-provider/agent.buildkite.com"
      },
      "Action": [
        "sts:TagSession",
        "sts:AssumeRoleWithWebIdentity"
      ],
      "Condition": {
        "StringLike": {
          "agent.buildkite.com:sub": "organization:ORGANIZATION_SLUG:pipeline:PIPELINE_SLUG:ref:REF:commit:BUILD_COMMIT:step:STEP_KEY"
        },
        "StringEquals": {
          "agent.buildkite.com:aud": "sts.amazonaws.com",
          "aws:RequestTag/organization_slug": "ORGANIZATION_SLUG",
          "aws:RequestTag/organization_id": "ORGANIZATION_ID",
          "aws:RequestTag/pipeline_slug": "PIPELINE_SLUG"
        },
        "IpAddress": {
          "aws:SourceIp": [
            "AGENT_PUBLIC_IP_ONE",
            "AGENT_PUBLIC_IP_TWO"
          ]
        }
      }
    }
  ]
}
```

Learn more about creating custom trust policies in [Creating IAM
policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html#access_policies_create-start) of the AWS IAM User Guide.

1. Modify the `Principal` section of the pasted code snippet accordingly:
    1. Ensure that this is set to `Federated`, and points to the `oidc-provider` Amazon Resource Name (ARN) from the **Provider URL** you [configured above](#step-1-set-up-an-oidc-provider-in-your-aws-account) (that is, `agent.buildkite.com`).
    1. Change `AWS_ACCOUNT_ID` to your actual AWS account ID.
1. Modify the `Condition` section of the code snippet accordingly:
    1. Ensure the `StringLike` subsection's `agent.buildkite.com:sub` field name has at least one value that matches the format: `organization:ORGANIZATION_SLUG:pipeline:PIPELINE_SLUG:ref:REF:commit:BUILD_COMMIT:step:STEP_KEY`. You can wildcard sections of this string to make your trust policy more permissive, for example, `organization:acme-inc:*` will match any pipeline in the Buildkite organization `acme-inc`. Buildkite recommends using the subject claim to narrow the trust policy scope to a Buildkite organization, and `aws:RequestTag`-style claims to narrow the scope further, for example, to a pipeline. `aws:RequestTag`-style claims allow you to specify immutable UUIDs in your trust policy. Note that [AWS requires the `agent.buildkite.com:sub` claim](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc_secure-by-default.html) to be specified in the trust policies associated with IAM roles using a Buildkite OIDC provider federated principal.
    1. Ensure the `StringEquals` subsection's _audience_ field name has a value that matches the **Audience** you [configured above](#step-1-set-up-an-oidc-provider-in-your-aws-account) (that is, `sts.amazonaws.com`). The _audience_ field name is your provider URL with `:aud` appended, that is, `agent.buildkite.com:aud`.
    1.
Ensure the `StringEquals` subsection's `RequestTag` fields have values that match the Buildkite pipeline that will use this role. Buildkite strongly recommends using immutable UUIDs in your trust policy. When formulating such values:

- `ORGANIZATION_SLUG` can be obtained:
  * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite.
  * By running the [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query to obtain this value from `slug` in the response. For example:

    ```bash
    curl -X GET "https://api.buildkite.com/v2/organizations" \
      -H "Authorization: Bearer $TOKEN"
    ```

  * From the `BUILDKITE_ORGANIZATION_SLUG` value displayed on the `Environment` tab of any job that ran in the organization.
- `ORGANIZATION_ID` is a UUID and can be obtained:
  * By running the same [List organizations](/docs/apis/rest-api/organizations#list-organizations) REST API query used to obtain `ORGANIZATION_SLUG`.
  * From the `BUILDKITE_ORGANIZATION_ID` value displayed on the `Environment` tab of any job that ran in the organization.
- `PIPELINE_SLUG` (optional) can be obtained:
  * From the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite, then accessing the specific pipeline to be specified in the custom trust policy.
  * By running the [List pipelines](/docs/apis/rest-api/pipelines#list-pipelines) REST API query to obtain this value from `slug` in the response from the specific pipeline. For example:

    ```bash
    curl -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines" \
      -H "Authorization: Bearer $TOKEN"
    ```

1.
If you have dedicated/static public IP addresses and wish to implement defense in depth against an attacker stealing an OIDC token to access your cloud environment, retain the `Condition` section's `IpAddress` subsection, and replace its values (`AGENT_PUBLIC_IP_ONE` and `AGENT_PUBLIC_IP_TWO`) with a list of your agents' IP addresses or [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) ranges or blocks. Only OIDC token exchange requests (for IAM roles) from Buildkite agents with these IP addresses will be permitted.
1. Verify that your custom trust policy is complete. The following example trust policy (noting that `AWS_ACCOUNT_ID` has not been specified) will only allow the exchange of an agent's OIDC tokens with IAM roles when:
    * The Buildkite organization is `example-org`, with an ID of `ab3883b1-9596-4312-a09c-4527ae997ba7`.
    * The Buildkite pipeline is `example-pipeline`.
    * The build runs on Buildkite agents whose IP addresses are either `192.0.2.0` or `198.51.100.0`.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::AWS_ACCOUNT_ID:oidc-provider/agent.buildkite.com"
      },
      "Action": [
        "sts:TagSession",
        "sts:AssumeRoleWithWebIdentity"
      ],
      "Condition": {
        "StringLike": {
          "agent.buildkite.com:sub": "organization:example-org:*"
        },
        "StringEquals": {
          "agent.buildkite.com:aud": "sts.amazonaws.com",
          "aws:RequestTag/organization_slug": "example-org",
          "aws:RequestTag/organization_id": "ab3883b1-9596-4312-a09c-4527ae997ba7",
          "aws:RequestTag/pipeline_slug": "example-pipeline"
        },
        "IpAddress": {
          "aws:SourceIp": [
            "192.0.2.0",
            "198.51.100.0"
          ]
        }
      }
    }
  ]
}
```

**Note:** AWS requires that the `sub` claim is matched in all trust policies used with OIDC in Buildkite Pipelines. Therefore, it is recommended that you use the `sub` claim to match your Buildkite organization, and then use `aws:RequestTag` conditions for more granular trust policy restrictions, as demonstrated in the example above.
1.
In the **Custom trust policy** section, copy your modified custom trust policy, paste it into your IAM role, and complete the next few steps up to specifying the **Role name**.
1. Specify an appropriate **Role name**, for example, `example-pipeline-oidc-for-ssm`, and complete the remaining steps.

##### Step 3: Configure your IAM role with AWS actions

Add an inline or managed IAM policy (separate from the custom trust policy [configured above](#step-2-create-a-new-or-update-an-existing-iam-role-to-use-with-your-pipelines)) to allow the IAM role to perform any actions your pipeline needs. Learn more about how to do this in [Managed policies and inline policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html) of the AWS IAM User Guide. Common examples are permissions to read secrets from SSM and push images to ECR, although this would depend on the purpose of your pipeline. In the following example, we'll allow access to read an SSM Parameter Store key named `/pipelines/example-pipeline/oidc-for-ssm/example-deploy-key` by attaching the following inline policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameters"
      ],
      "Resource": "arn:aws:ssm:us-east-1:012345678910:parameter/pipelines/example-pipeline/oidc-for-ssm/example-deploy-key"
    }
  ]
}
```

##### Step 4: Configure your pipeline to assume the role

Finally, use the two Buildkite plugins to assume the IAM role and pull in the SSM parameter (added above):

- [AWS assume-role-with-web-identity](https://github.com/buildkite-plugins/aws-assume-role-with-web-identity-buildkite-plugin)
- [AWS SSM Buildkite Plugin](https://github.com/buildkite-plugins/aws-ssm-buildkite-plugin)

Incorporate the following into your pipeline (modifying as required):

```yaml
agents:
  queue: mac-small

steps:
  - label: ":aws: Deploy to Production"
    key: deploy-to-production
    command: echo "Example Deploy Key equals \$EXAMPLE_DEPLOY_KEY"
    env:
      AWS_DEFAULT_REGION:
        us-east-1
      AWS_REGION: us-east-1
    plugins:
      - aws-assume-role-with-web-identity#v1.2.0:
          role-arn: arn:aws:iam::012345678910:role/example-pipeline-oidc-for-ssm
          session-tags:
            - organization_slug
            - organization_id
            - pipeline_slug
      - aws-ssm#v1.0.0:
          parameters:
            EXAMPLE_DEPLOY_KEY: /pipelines/example-pipeline/oidc-for-ssm/example-deploy-key
```

> 📘
> The backslash (`\`) before `$EXAMPLE_DEPLOY_KEY` in the example above prevents this environment variable from being interpolated during the pipeline's upload to Buildkite Pipelines. You could alternatively double the `$` symbol for this purpose (resulting in `$$EXAMPLE_DEPLOY_KEY`).

##### AWS CloudTrail

A Buildkite job that successfully assumes an AWS IAM role using this pattern will leave a record in AWS CloudTrail. That record will include details like the IP address of the agent that ran the job, plus the values of any `session-tags` listed in the `pipeline.yml`. Here is a fragment of an AWS CloudTrail event with the relevant tags:

```json
{
  "eventVersion": "1.08",
  "userIdentity": {
    "type": "WebIdentityUser",
    "principalId": "arn:aws:iam::AWS_ACCOUNT_ID:oidc-provider/agent.buildkite.com:sts.amazonaws.com:organization:example-org:pipeline:example-pipeline:ref:refs/heads/main:commit:1da177e4c3f41524e886b7f1b8a0c1fc7321cac2:step:",
    "userName": "organization:example-org:pipeline:example-pipeline:ref:refs/heads/main:commit:1da177e4c3f41524e886b7f1b8a0c1fc7321cac2:step:",
    "identityProvider": "arn:aws:iam::AWS_ACCOUNT_ID:oidc-provider/agent.buildkite.com"
  },
  "eventTime": "2025-02-18T13:34:48Z",
  "eventSource": "sts.amazonaws.com",
  "eventName": "AssumeRoleWithWebIdentity",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "192.0.2.0",
  "userAgent": "aws-cli/2.13.0 Python/3.11.4 Linux/6.7.12 exe/x86_64.ubuntu.22 prompt/off command/sts.assume-role-with-web-identity",
  "requestParameters": {
    "principalTags": {
      "pipeline_slug": "example-pipeline",
      "organization_id": "ab3883b1-9596-4312-a09c-4527ae997ba7",
"organization_slug": "example-org" }, "roleArn": "arn\:aws\:iam::AWS_ACCOUNT_ID:role/example-pipeline-oidc-for-ssm", "roleSessionName": "buildkite-job-01951944-87df-428f-ad92-90709ee78a59" }, ... } ``` ##### Including the build branch in your custom trust policy When [creating a custom trust policy for your IAM role](#step-2-create-a-new-or-update-an-existing-iam-role-to-use-with-your-pipelines), you can include the build branch within this policy. However, be aware that doing so comes with potential risks, since this doesn't necessarily guarantee that the entire build will be run from the branch defined in the policy. For instance, the policy might allow a build to commence off the `main` branch. However, the next step of the pipeline might check out a different branch and run the remainder of the pipeline's build from that branch. Nevertheless, being aware of these risks, if you do wish to include the build branch in your custom trust policy, you can do so by making the following modifications to the steps above. 1. When [defining your trust policy in the code editor](#step-2-create-a-new-or-update-an-existing-iam-role-to-use-with-your-pipelines), add the `RequestTag/build_branch` entry to your `Condition` section's `StringEquals` subsection: ```json ... "Condition": { "StringEquals": { ... "aws:RequestTag/build_branch": "BRANCH_NAME" } ... ``` where `BRANCH_NAME` is usually replaced with `main` to initially restrict the IAM role's access to the `main` branch. If this `RequestTag` condition is omitted, the role can initially be assumed by a build on any branch. 1. When [configuring your pipeline to use the IAM role](#step-4-configure-your-pipeline-to-assume-the-role), ensure `build_branch` is included in the [AWS assume-role-with-web-identity](https://github.com/buildkite-plugins/aws-assume-role-with-web-identity-buildkite-plugin) `plugins` attribute's `session-tags` value, for example: ```yaml steps: - ... 
    plugins:
      - aws-assume-role-with-web-identity#v1.2.0:
          role-arn: arn:aws:iam::012345678910:role/example-pipeline-oidc-for-ssm
          session-tags:
            - ...
            - build_branch
```

The `build_branch` tag and its value are also included in [AWS CloudTrail events](#aws-cloudtrail):

```json
{
  ...
  "requestParameters": {
    "principalTags": {
      ...
      "build_branch": "main"
    },
    ...
  },
  ...
}
```

---

### OIDC with Azure

URL: https://buildkite.com/docs/pipelines/security/oidc/azure

#### OIDC with Azure

OpenID Connect (OIDC) allows your Buildkite pipelines to authenticate directly with [Microsoft Azure](https://azure.microsoft.com/) without storing long-lived credentials. Instead of managing client secrets, your pipeline requests a short-lived token from the Buildkite agent at runtime, and Azure validates it using a trust relationship you configure in [Microsoft Entra ID (formerly Azure AD)](https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id). This guide walks you through setting up OIDC between Buildkite Pipelines and Azure, including a working example that uses Terraform with an Azure Storage Account backend. Learn more about:

- How OIDC tokens are constructed and how to extract and use claims in the [OpenID Connect Core specification](https://openid.net/specs/openid-connect-core-1_0.html).
- Microsoft's implementation of workload identity federation in [Workload identity federation](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation) on Microsoft Learn.
- Supported scenarios and limitations in [Considerations for workload identity federation](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation-considerations) on Microsoft Learn.

##### Requirements

You will need:

- An Azure subscription with permissions to create App Registrations and assign RBAC roles. Note your **Subscription ID** from the Azure Portal (found on the Subscriptions page).
- A Buildkite pipeline you want to authenticate with Azure. Depending on [which subject claim you use](#custom-subject-claims), you'll need its **Pipeline UUID**, **Cluster UUID**, or another identifier. You can find the Pipeline UUID in Buildkite under **Pipeline Settings** > **General**, listed as **Pipeline ID**. You can also retrieve it using the [REST API](/docs/apis/rest-api/pipelines#get-a-pipeline) (the `id` field) or the [GraphQL API](/docs/apis/graphql/cookbooks/pipelines#get-a-pipelines-uuid). ##### Step 1: Register an application in Microsoft Entra ID The App Registration in Microsoft Entra ID acts as the identity that your Buildkite pipeline will assume when accessing Azure resources. To register an application in Microsoft Entra ID: 1. In the Azure Portal, go to **Microsoft Entra ID** > **App registrations**. 1. Click **New registration**. 1. Enter a name for the application (for example, `buildkite-oidc-example`). 1. Leave the default setting for **Supported account types** (single tenant). 1. Click **Register**. Once created, note the following values from the App Registration's **Overview** page. You'll need them later: - **Application (client) ID**, for example, `00xx0x0-0x00-0x00-xx00-x0x000xxx0x0` - **Directory (tenant) ID**, for example, `00xx0x0-0x00-0x00-xx00-x0x000xxx0x0` > 📘 > When you register an application in the previous step, Azure automatically creates a **service principal** for it. You'll see this term later when assigning RBAC roles. Think of the App Registration as the definition of your app, and the service principal as the identity it uses to access resources. Learn more in [Application and service principal objects in Microsoft Entra ID](https://learn.microsoft.com/en-us/entra/identity-platform/app-objects-and-service-principals). ##### Step 2: Add a federated identity credential The Federated Identity Credential establishes the trust between your Buildkite pipeline and the Azure App Registration. 
Azure uses it to validate the OIDC token that the Buildkite agent presents. 1. In your App Registration, go to **Certificates & secrets**. 1. Select the **Federated credentials** tab. 1. Click **Add credential**. 1. For **Federated credential scenario**, select **Other issuer**. 1. Configure the credential with the following values, then click **Add**. | Field | Value | | --- | --- | | **Issuer** | `https://agent.buildkite.com` | | **Subject identifier** | The value of the subject claim in the OIDC token. By default, this is the pipeline UUID (for example, `000xx00x-000x-0000-00xx-00x0x00x00x0`). If you're using a [custom subject claim](#custom-subject-claims), use the corresponding identifier instead (for example, a cluster UUID). | | **Name** | A descriptive name (for example, `buildkite-pipeline-deploy`) | | **Audience** | Leave as the default `api://AzureADTokenExchange` | > 📘 > The **Subject identifier** must match the `sub` claim in the OIDC token exactly. By default, this is the pipeline UUID. If you use `--subject-claim` to override it (see [Custom subject claims](#custom-subject-claims)), set the **Subject identifier** to the corresponding value, such as a cluster UUID. Each unique subject value that needs Azure access requires its own Federated Identity Credential. ##### Step 3: Assign RBAC roles Your App Registration needs Azure RBAC roles to access resources. The roles you assign depend on what your pipeline needs to do. For this example, the pipeline uses Terraform with an Azure Storage Account backend, so it will need: - **Contributor** on the resource group (to create and manage resources) - **Storage Blob Data Contributor** on the storage account (to read and write Terraform state) To assign a role: 1. Navigate to the resource (resource group, storage account, subscription, and so on). 1. Go to **Access control (IAM)** > **Role assignments**. 1. Click **Add** > **Add role assignment**. 1. 
Select the role, then assign it to your App Registration's service principal.

##### Step 4: Configure Azure credentials in your pipeline

To authenticate, your pipeline needs:

- Azure Client ID
- Tenant ID
- Subscription ID

These values are identifiers, not secrets. Define them as pipeline-level [environment variables](/docs/pipelines/configure/environment-variables) in your `pipeline.yml`:

```yaml
env:
  ARM_CLIENT_ID: "your-application-client-id"
  ARM_TENANT_ID: "your-directory-tenant-id"
  ARM_SUBSCRIPTION_ID: "your-azure-subscription-id"
```

This keeps the values easy to find and change in one place. You can also store these values as [Buildkite Secrets](/docs/pipelines/security/secrets/buildkite-secrets) if your organization prefers to keep all configuration out of version control. The approach is the same either way. The OIDC token itself is the only sensitive value, and it's generated fresh in each step.

> 📘
> Buildkite Secrets requires agent version 3.106.0 or later. The secret key names are up to you; just make sure they match the names used in your pipeline YAML.

##### Step 5: Request an OIDC token in your pipeline

In your pipeline steps, use the `buildkite-agent oidc request-token` command to get an OIDC token. The token is short-lived and scoped to the pipeline by default.

```bash
BUILDKITE_OIDC_TOKEN=$(buildkite-agent oidc request-token --audience "api://AzureADTokenExchange")
```

You can change what the token's `sub` claim contains by using the `--subject-claim` flag. For example, to scope the token to the cluster instead of the pipeline:

```bash
BUILDKITE_OIDC_TOKEN=$(buildkite-agent oidc request-token --audience "api://AzureADTokenExchange" --subject-claim cluster_id)
```

See [Custom subject claims](#custom-subject-claims) for the full list of allowed claims and guidance on when to use each one.
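Because the OIDC token is a standard JWT, you can decode its payload in a step to check the `sub` and `aud` claims before wiring up the Azure side. The following sketch builds a synthetic JWT-shaped value so the decoding steps can be shown without a live agent; a real token would come from `buildkite-agent oidc request-token`, and `example-pipeline-uuid` is a hypothetical placeholder:

```shell
# Synthetic JWT-shaped token for illustration only. In a real pipeline step:
#   TOKEN=$(buildkite-agent oidc request-token --audience "api://AzureADTokenExchange")
b64url() { base64 | tr -d '\n=' | tr '/+' '_-'; }
CLAIMS='{"sub":"example-pipeline-uuid","aud":"api://AzureADTokenExchange"}'
TOKEN="$(printf '{"alg":"RS256","typ":"JWT"}' | b64url).$(printf '%s' "$CLAIMS" | b64url).signature-omitted"

# The payload is the second dot-separated segment, base64url-encoded.
payload=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Restore the padding that base64url strips, then decode.
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
decoded=$(printf '%s' "$payload" | base64 -d)
printf '%s\n' "$decoded"
```

The decoded payload prints the claims JSON, so you can confirm that `sub` matches the **Subject identifier** configured in Azure and that `aud` is `api://AzureADTokenExchange` before attempting the token exchange.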
The `--audience` value must match one of the audiences Azure accepts for federated identity credentials: | Azure environment | Audience value | | --- | --- | | Azure Commercial (public) | `api://AzureADTokenExchange` | | Azure US Government | `api://AzureADTokenExchangeUSGov` | | Azure China (21Vianet) | `api://AzureADTokenExchangeChina` | Do not change the audience to a custom value. If the audience in the OIDC token doesn't match one of these values, Azure will reject the token exchange and authentication will fail. Most users should leave this as the default `api://AzureADTokenExchange`. > 📘 > Each step in a Buildkite pipeline runs independently. If multiple steps need Azure access, each step must request its own OIDC token. Tokens cannot be passed between steps. ##### Step 6: Authenticate with Azure using the token Once you have the OIDC token, use it to authenticate with Azure. The exact method depends on your tooling. ###### Using the Azure CLI ```bash az login --service-principal \ --username "$ARM_CLIENT_ID" \ --tenant "$ARM_TENANT_ID" \ --federated-token "$BUILDKITE_OIDC_TOKEN" ``` ###### Using Terraform with the AzureRM provider Set the following environment variables in your pipeline: ```yaml env: ARM_USE_OIDC: "true" ARM_USE_AZUREAD: "true" ARM_CLIENT_ID: "your-application-client-id" ARM_TENANT_ID: "your-directory-tenant-id" ARM_SUBSCRIPTION_ID: "your-azure-subscription-id" ``` The `ARM_CLIENT_ID`, `ARM_TENANT_ID`, and `ARM_SUBSCRIPTION_ID` values are identifiers, not secrets, so they can be defined directly in your `pipeline.yml`. The OIDC token is the only sensitive value, and it's generated fresh in each step. The AzureRM provider will read these environment variables automatically when `ARM_USE_OIDC` is set to `true`. The `ARM_USE_AZUREAD` variable is needed when using an Azure Storage Account backend for Terraform state. It tells the provider to authenticate to the storage data plane using Entra ID rather than storage account keys. 
You can also store these values as [Buildkite Secrets](/docs/pipelines/security/secrets/buildkite-secrets) if your organization prefers to keep all configuration out of version control. Both approaches work the same way with the [Docker Compose Buildkite plugin's](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) `propagate-environment` option. ##### Example pipeline This example pipeline runs Terraform to deploy Azure resources, authenticating entirely through OIDC with no stored Azure credentials. It uses the [Docker Compose Buildkite plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) to run Terraform in a container. The pipeline defines the Azure identifiers and OIDC flags as pipeline-level environment variables. Each step requests a fresh OIDC token before running Terraform commands. ```yaml env: ARM_USE_OIDC: "true" ARM_USE_AZUREAD: "true" ARM_CLIENT_ID: "your-application-client-id" ARM_TENANT_ID: "your-directory-tenant-id" ARM_SUBSCRIPTION_ID: "your-azure-subscription-id" steps: - label: "\:terraform\: init & plan" key: terraform-plan command: | echo "--- Getting OIDC token" export ARM_OIDC_TOKEN=$$(buildkite-agent oidc request-token --audience api://AzureADTokenExchange) echo "--- Terraform init" terraform init \ -backend-config="resource_group_name=your-resource-group" \ -backend-config="storage_account_name=yourstorageaccount" \ -backend-config="container_name=tfstate" \ -backend-config="key=terraform.tfstate" echo "--- Terraform plan" terraform plan -out=tfplan echo "--- Uploading plan artifact" buildkite-agent artifact upload tfplan plugins: - docker-compose#v5.12.1: run: terraform propagate-environment: true mount-buildkite-agent: true - block: "\:rocket\: Deploy?" 
prompt: "Review the plan output above and approve to apply" - label: "\:terraform\: apply" key: terraform-apply depends_on: terraform-plan command: | echo "--- Getting OIDC token" export ARM_OIDC_TOKEN=$$(buildkite-agent oidc request-token --audience api://AzureADTokenExchange) echo "--- Downloading plan artifact" buildkite-agent artifact download tfplan . echo "--- Terraform init" terraform init \ -backend-config="resource_group_name=your-resource-group" \ -backend-config="storage_account_name=yourstorageaccount" \ -backend-config="container_name=tfstate" \ -backend-config="key=terraform.tfstate" echo "--- Terraform apply" terraform apply tfplan plugins: - docker-compose#v5.12.1: run: terraform propagate-environment: true mount-buildkite-agent: true ``` A few things to note about this pipeline: - Each step requests a fresh OIDC token independently. Tokens are short-lived and can't be shared between steps. - The [Docker Compose Buildkite plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) with `propagate-environment: true` automatically passes all pipeline environment variables (the `ARM_*` values) into the container, removing the need for explicit `-e` flags. - `mount-buildkite-agent: true` makes the `buildkite-agent` binary available inside the container. This is required for requesting OIDC tokens and uploading/downloading artifacts. The [Docker Compose Buildkite plugin](https://buildkite.com/resources/plugins/buildkite-plugins/docker-compose-buildkite-plugin/) defaults this to `false`. - The plan step saves the plan to a file (`-out=tfplan`) and uploads it as a Buildkite artifact. The apply step downloads that exact plan and applies it, so you're always applying exactly what was reviewed. - The `block` step between plan and apply gives you a chance to review the plan before deploying. 
- Backend configuration values are passed using `-backend-config` flags at init time, keeping the Terraform code environment-agnostic. - The `$$` prefix on the `buildkite-agent` command prevents Buildkite from interpolating the command substitution at pipeline upload time. The command runs at step execution time inside the container. ###### Docker Compose configuration The pipeline expects a `docker-compose.yml` in your repository root: ```yaml services: terraform: image: hashicorp/terraform:1.9.1 entrypoint: [] working_dir: /workspace volumes: - ".:/workspace" ``` The `entrypoint: []` line is required. The `hashicorp/terraform` Docker image sets `terraform` as its default entrypoint. Without clearing it, the Docker Compose Buildkite plugin can't execute shell commands inside the container because Docker will try to pass the shell command as arguments to the `terraform` binary. ###### Terraform configuration The Terraform configuration uses the [AzureRM](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs) provider with OIDC enabled. The backend block is intentionally minimal. The resource group, storage account, container, and state key are all passed using `-backend-config` at init time. ```hcl terraform { required_version = ">= 1.5" required_providers { azurerm = { source = "hashicorp/azurerm" version = "~> 3.0" } } backend "azurerm" { use_oidc = true } } provider "azurerm" { features {} use_oidc = true } ``` ##### How OIDC token exchange works When your pipeline step runs: 1. The step calls `buildkite-agent oidc request-token` to get a JSON Web Token (JWT) from the Buildkite agent. 1. The `sub` (subject) claim in the JWT contains the pipeline UUID by default, or a different identifier if `--subject-claim` was used. 1. The `aud` (audience) claim in the JWT contains `api://AzureADTokenExchange`. 1. The step presents this JWT to Microsoft Entra ID. 1. 
Entra ID validates the JWT against the Federated Identity Credential configuration (matching the issuer, subject, and audience). 1. If valid, Entra ID issues an Azure access token for the App Registration's service principal. 1. The step uses this Azure access token to access Azure resources according to its RBAC roles. ##### Monitoring OIDC sign-ins When a Buildkite pipeline authenticates with Azure using OIDC, the sign-in is recorded in Microsoft Entra ID's sign-in logs under **Service principal sign-ins**. To view these sign-ins: 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). 1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**. 1. Select the **Service principal sign-ins** tab. 1. Filter by the App Registration name (for example, `buildkite-oidc-example`) to see sign-in activity from your Buildkite pipelines. These logs show whether each authentication attempt succeeded or failed, along with details like the IP address of the Buildkite agent, the time of the sign-in, and any error codes. This is useful for debugging OIDC configuration issues and auditing which pipelines are accessing your Azure resources. Learn more about sign-in logs in [Sign-in logs in Microsoft Entra ID](https://learn.microsoft.com/en-us/entra/identity/monitoring-health/concept-sign-ins) on Microsoft Learn. ##### Custom subject claims By default, the `sub` claim in a Buildkite OIDC token contains the pipeline UUID. The `--subject-claim` flag lets you change the `sub` claim to a different immutable identifier, giving you control over how broadly or narrowly Azure trust is scoped. ###### Allowed subject claims Only immutable identifiers are allowed as subject claims. Mutable values like slugs and branch names are excluded because renaming them would silently break federated identity credentials. 
| Claim | Description | Scope | | --- | --- | --- | | `pipeline_id` | The pipeline UUID (default) | A single pipeline | | `cluster_id` | The cluster UUID | All pipelines in a cluster | | `queue_id` | The queue UUID | All pipelines targeting a queue | | `organization_id` | The organization UUID | All pipelines in the organization | | `build_id` | The build UUID | A single build (one-time use) | | `job_id` | The job UUID | A single job (one-time use) | | `agent_id` | The agent UUID | A single agent | ###### Choosing a subject claim The right subject claim depends on how you want to balance the number of federated identity credentials against the breadth of access. - **`pipeline_id` (default):** One credential per pipeline. Tight scoping, but requires a new Federated Identity Credential for each pipeline that needs Azure access. - **`cluster_id`:** One credential per cluster. Any pipeline in the cluster can authenticate. Fewer credentials to manage, but broader access. - **`queue_id`:** One credential per queue. Useful when different queues have different trust boundaries (for example, production versus staging queues). - **`organization_id`:** One credential for the entire organization. The broadest scope. Any pipeline in the organization can authenticate with the same Azure App Registration. - **`build_id`, `job_id`, `agent_id`:** Extremely narrow scope, typically used for auditing or one-time access rather than ongoing trust relationships. ###### Using a custom subject claim To use a custom subject claim, pass the `--subject-claim` flag when requesting the OIDC token: ```bash BUILDKITE_OIDC_TOKEN=$(buildkite-agent oidc request-token --audience "api://AzureADTokenExchange" --subject-claim cluster_id) ``` Then set the **Subject identifier** in the Azure Federated Identity Credential to the corresponding UUID. For example, if you use `--subject-claim cluster_id`, set the **Subject identifier** to your cluster's UUID. 
> 📘 > The subject claim value must also be included as an optional claim in the token. If you use `--subject-claim cluster_id`, the `cluster_id` claim is automatically included. You don't need to pass `--claim cluster_id` separately. ##### Known limitations Azure federated identity credentials require an exact match on the OIDC token's subject claim. This creates constraints around access control and trust, regardless of which [subject claim](#custom-subject-claims) you use. ###### Access control is scoped to a single identifier Azure's federated identity credentials match on the `sub` claim only. You can't restrict Azure access by branch, build source, or other build context. Any build that produces a token with a matching subject can authenticate with Azure, whether it was triggered from `main`, a feature branch, or a manual build. ###### Untrusted builds can authenticate to Azure Because OIDC trust is tied to the token's subject claim, it doesn't distinguish between a build triggered from `main` and one triggered by an unreviewed pull request. If your pipeline accepts public pull requests and is configured to build pull requests from forked repositories, anyone who can open a pull request against that repository can add a step that requests an OIDC token and accesses your Azure resources with whatever RBAC roles you've assigned. This applies whether you use the default `pipeline_id` subject or a broader claim like `cluster_id`. > 📘 > This is a common constraint across CI/CD providers that support OIDC. Palo Alto's Unit 42 team [demonstrated real-world attacks using this pattern](https://unit42.paloaltonetworks.com/oidc-misconfigurations-in-ci-cd/) at DEF CON 32, and the [tj-actions/changed-files supply chain attack](https://openssf.org/blog/2025/06/11/maintainers-guide-securing-ci-cd-pipelines-after-the-tj-actions-and-reviewdog-supply-chain-attacks/) in March 2025 showed how compromised tooling inside a pipeline can exfiltrate tokens.
To reduce the risk: - **Separate CI and CD pipelines.** Run tests on one pipeline, deployments on another. Only configure OIDC on the deploy pipeline where you control what triggers builds and what code runs. - **Scope RBAC roles to the minimum required.** Don't assign Contributor at the subscription level when a single resource group will do. See Microsoft's guidance on [best practices for Azure RBAC](https://learn.microsoft.com/en-us/azure/role-based-access-control/best-practices). - **Restrict who can trigger builds.** Use the [pipeline-level permissions](/docs/pipelines/security/permissions) in Buildkite Pipelines to control who can create builds on pipelines with OIDC configured. - **Monitor sign-ins in Entra ID.** Check the Service principal sign-in logs for unexpected activity. See the [Monitoring OIDC sign-ins](#monitoring-oidc-sign-ins) section above. ###### Getting tighter control To limit what your pipelines can do in Azure: - **Use [custom subject claims](#custom-subject-claims) to match your trust boundaries.** Use `pipeline_id` (the default) for tight per-pipeline scoping, `cluster_id` or `queue_id` for shared infrastructure, or `organization_id` when a single credential should cover all pipelines. - **Use separate App Registrations per environment.** Create one for production, one for staging, each with RBAC roles scoped to their own resources and linked to separate Buildkite pipelines or clusters. This gives you isolation between environments. - **Scope RBAC roles tightly.** Assign roles to the smallest resource possible (a single resource group, not the whole subscription). Authentication might succeed, but the pipeline can only touch what it's been granted. - **Apply Conditional Access Policies.** Organizations with Entra ID P1/P2 can use [Conditional Access for workload identities](https://learn.microsoft.com/en-us/entra/identity/conditional-access/workload-identity) to restrict authentication by IP range or other conditions. 
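Before creating a Federated Identity Credential, it can help to inspect the token your pipeline actually produces. The sketch below (which assumes it runs in a Buildkite job where the `buildkite-agent` binary is available) decodes the JWT payload locally so you can confirm the `sub` and `aud` claims match what Azure expects:

```shell
# Decode the payload (second dot-separated segment) of a JWT so its claims can
# be inspected. JWT segments are base64url-encoded, so translate the alphabet
# back to standard base64 and re-pad before decoding.
decode_jwt_payload() {
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
  printf '%s' "$payload" | base64 -d
}

# Inside a Buildkite job, request a token scoped to the cluster and inspect it.
# (Guarded so the sketch is a no-op outside an agent environment.)
if command -v buildkite-agent >/dev/null 2>&1; then
  TOKEN=$(buildkite-agent oidc request-token \
    --audience "api://AzureADTokenExchange" \
    --subject-claim cluster_id)
  decode_jwt_payload "$TOKEN"   # check the "sub" and "aud" values printed here
fi
```

The `sub` value printed by this sketch is what must appear, character for character, in the credential's **Subject identifier** field in Entra ID.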
##### Troubleshooting Common errors when setting up OIDC between Buildkite Pipelines and Azure, and how to resolve them. ###### "AADSTS70021: No matching federated identity record found" The subject in the OIDC token doesn't match the Subject identifier in the Federated Identity Credential. Check that you're using the correct pipeline UUID (just the UUID, not a prefixed string). ###### "AADSTS700016: Application not found in the directory" The Client ID is incorrect or the App Registration doesn't exist in the specified tenant. ###### "AuthorizationFailed" when accessing resources The App Registration's service principal doesn't have the required RBAC roles on the target resource. Check your role assignments in Access control (IAM). ###### "Storage account key access is disabled" If you've disabled shared key access on a storage account (recommended), make sure the service principal has the **Storage Blob Data Contributor** role on the storage account, not just Contributor. ###### Token expired errors OIDC tokens are short-lived. Each pipeline step must request its own token at the start of execution. Tokens cannot be passed between steps. ###### Terraform can't authenticate to the storage backend Make sure `ARM_USE_OIDC` and `ARM_USE_AZUREAD` are both set to `true`. The AzureRM backend needs both to authenticate with OIDC for state storage operations. --- ### Permissions URL: https://buildkite.com/docs/pipelines/security/permissions #### User, team, and pipeline permissions The [_teams_ feature](#manage-teams-and-permissions) allows you to apply access permissions and functionality controls for one or more groups of users (that is, _teams_) on each pipeline throughout your organization. Enterprise plan customers can configure pipeline permissions for all users across their Buildkite organization through the **Security** page. Learn more about this feature in [Manage organization security for pipelines](#manage-organization-security-for-pipelines). 
##### Manage teams and permissions To manage teams across the Buildkite Pipelines application, a _Buildkite organization administrator_ first needs to enable this feature across their organization. Learn more about how to do this in the [Manage teams and permissions in the Platform documentation](/docs/platform/team-management/permissions#manage-teams-and-permissions). Once the _teams_ feature is enabled, you can see the teams that you're a member of on the **Users** page, which you can access: - As a Buildkite organization administrator, by selecting **Settings** in the global navigation > [**Users**](https://buildkite.com/organizations/~/users/). - As any other user, by selecting **Teams** in the global navigation > [**Users**](https://buildkite.com/organizations/~/users/). ###### Organization-level permissions Learn more about what a _Buildkite organization administrator_ can do in the [Organization-level permissions in the Platform documentation](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions). As an organization administrator, you can access the [**Organization Settings** page](https://buildkite.com/organizations/~/settings) by selecting **Settings** in the global navigation, where you can do the following: - Add new teams or edit existing ones in the [**Team** section](https://buildkite.com/organizations/~/teams). - After selecting a team, you can view and administer the member-, [pipeline-](#manage-teams-and-permissions-pipeline-level-permissions), [test suite-](/docs/test-engine/permissions#manage-teams-and-permissions-test-suite-level-permissions), [registry-](/docs/package-registries/security/permissions#manage-teams-and-permissions-registry-level-permissions), and [team-](/docs/platform/team-management/permissions#manage-teams-and-permissions-team-level-permissions)level settings for that team.
**Note:** Registry-level settings are only available once [Buildkite Package Registries has been enabled](/docs/package-registries/security/permissions#enabling-buildkite-packages). ###### Team-level permissions Learn more about what _team members_ are and what _team maintainers_ can do in the [Team-level permissions in the Platform documentation](/docs/platform/team-management/permissions#manage-teams-and-permissions-team-level-permissions). ###### Pipeline-level permissions When the [teams feature is enabled](#manage-teams-and-permissions), any user can create a new pipeline, as long as this user is a member of at least one team within the Buildkite organization, and this team has the **Create pipelines** [team member permission](#manage-teams-and-permissions-team-level-permissions). When you create a new pipeline in Buildkite: - You are automatically granted the **Full Access** (`MANAGE_BUILD_AND_READ`) permission to this pipeline. - Any members of teams to which you provide access to this pipeline are also granted the **Full Access** permission. **Full Access** on a pipeline allows you to: - View and create builds or rebuilds. - Edit pipeline settings, which includes the ability to change the pipeline's visibility. - Archive the pipeline or delete the pipeline. - Provide access to other users, by adding the pipeline to other teams that you are a [team maintainer](#manage-teams-and-permissions-team-level-permissions) on. Any user with the **Full Access** permission on a pipeline can change its permission to either: - **Build & Read** (`BUILD_AND_READ`), which allows you to view and create builds or rebuilds, but _not_: * Edit the pipeline settings. * Archive or delete the pipeline. * Provide access to other users. - **Read Only** (`READ_ONLY`), which allows you to view builds only, but _not_: * Create builds or issue rebuilds. * Edit the pipeline settings. * Archive or delete the pipeline. * Provide access to other users. 
A user who is a member of at least one team with **Full Access** permission to a pipeline can change the permission on this pipeline. However, once this user loses **Full Access** through their last team with this permission on this pipeline, the user then loses the ability to change the pipeline's permissions in any team they are a member of. Another user with **Full Access** to this pipeline or a [Buildkite organization administrator](#manage-teams-and-permissions-organization-level-permissions) is required to change the pipeline's permission back to **Full Access** again. ##### Manage organization security for pipelines Buildkite customers on the [Enterprise plan](https://buildkite.com/pricing/) can configure pipeline action permissions and related security features for all users across their Buildkite organization. These features can be used either with or without the [teams feature enabled](#manage-teams-and-permissions). These user-level permissions and security features are managed by _Buildkite organization administrators_. To access this feature: 1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. Select the [**Security** > **Pipelines** tab](https://buildkite.com/organizations/~/security/pipelines) to access your organization's security for pipelines page. From this page, you can configure the following permissions for all users across your Buildkite organization: - **Create Pipelines**—if the [teams feature](#manage-teams-and-permissions) is enabled, this permission is controlled at the [team level](#manage-teams-and-permissions-team-level-permissions), so this option will be unavailable on this page. - **Delete pipelines** - **Change Pipeline Visibility**—Make private pipelines publicly available. - **Change Notification Services**—Allows notification services to be created, edited, and deleted.
- **Manage Agent Registration Tokens**—Allows [agent tokens](/docs/agent/self-hosted/tokens) to be created, edited, and deleted. - **Stop Agents**—Allows users to disconnect agents from Buildkite. --- ### Overview URL: https://buildkite.com/docs/pipelines/governance #### Governance overview Governance plays a crucial role in CI/CD tools, particularly for anyone operating in regulated industries or handling sensitive data. It encompasses policies, processes, and controls to ensure practices align with the following: - Industry-specific regulations. - Internal policies. - Widely recognized compliance standards. These standards assess the security, availability, processing integrity, confidentiality, and privacy of information systems. Buildkite understands the importance of meeting compliance and auditing requirements. The following features are tailored to meet your governance needs: - [Pipeline templates](/docs/pipelines/governance/templates) - [Build exports](/docs/pipelines/governance/build-exports) With these features, you can maintain your compliance and build software with confidence. --- ### Pipeline templates URL: https://buildkite.com/docs/pipelines/governance/templates #### Pipeline templates > 📘 Enterprise plan feature > Pipeline templates are only available on an [Enterprise](https://buildkite.com/pricing) plan. ##### Overview Pipeline templates allow you to define standard pipeline step configurations to use across all the pipelines in your organization. When a pipeline has a template assigned, the pipeline inherits its step configuration from the template. Before assigning a template to a pipeline, you need to mark that template as available for use in your organization. ##### Creating a pipeline template Only administrators can create or update pipeline templates. You can do this through the Buildkite UI or the REST and GraphQL APIs. To create a template: 1. 
Navigate to your [organization’s pipeline templates](https://buildkite.com/organizations/-/pipeline-templates). 1. If this is your first template, select **Create a Template**. Otherwise, select **New Template**. 1. Enter the name and description for your new template. 1. Update the default step configuration. 1. Select **Available for assignment by non-admins** if you would like everyone in your organization to be able to use this template when creating pipelines or editing pipeline steps. 1. Select **Create Template**. An administrator can add multiple templates to use across the organization. Making changes and saving a template will apply those changes to all pipelines using that template. As an administrator you do not need to mark a template available to see it in the available templates dropdown. You will be able to see all the templates you created while creating a new build, creating a new pipeline or editing steps for an existing pipeline. ##### Testing a pipeline template An administrator can test a pipeline template against a pipeline using the **New Build** button on the pipeline page. If a template exists for the organization, it can be selected from the **Pipeline template** dropdown to create a new build using the step configuration from that template. ##### Requiring pipeline templates The power of pipeline templates comes from how much you require their use. Administrators can select from the following options, listed in increasing strictness: 1. **Do not require pipeline templates:** Pipeline steps remain editable for any user with permission to create or update a pipeline. Templates marked as available can be assigned to pipelines. Use this option if you would like your pipeline templates to act more like starting guides for users in your organization to create pipelines faster. 1. **Require a pipeline template on new pipelines:** A template must be selected when creating a new pipeline. 
The step configuration of existing pipelines will become read-only. Pipelines can be assigned a template individually, making a gradual migration to pipeline templates possible. 1. **Require a pipeline template for everything:** Templates are mandatory on all new and existing pipelines. When choosing this setting, you will select a pipeline template to apply to any pipeline that does not already have a template assigned. To change your organization's requirements for pipeline templates: 1. Navigate to your [organization's pipeline templates](https://buildkite.com/organizations/-/pipeline-templates). 1. Check you have at least one template. If you don't have a template, create one. 1. Select **Settings**. 1. Select the requirement you want to set. If you stop requiring pipeline templates for your organization, any pipelines using templates will continue to do so. You can later change their step settings to remove the template. ##### Assigning a pipeline template to a pipeline After an administrator marks a pipeline template available for use, anyone with permission to create or change a pipeline can assign a template. Assigning a template overrides the pipeline's step configuration with the template. You can use the following methods to assign a template to a pipeline: - On the step settings for the pipeline (your pipeline > **Settings** > **Steps**), select the template to assign. - Using the REST API, [update the pipeline](/docs/apis/rest-api/pipelines#update-a-pipeline) with the appropriate `pipeline_template_uuid`. - Using the GraphQL API, run the [`pipelineUpdate` mutation](/docs/apis/graphql/schemas/mutation/pipelineupdate) with the appropriate `pipelineTemplateId`. You can find the IDs for a pipeline template on its page in the Buildkite dashboard. > 📘 Web steps editor compatibility > Pipelines defined using the web steps editor cannot be assigned templates through the Buildkite dashboard.
These pipelines must be either [migrated to YAML steps first](/docs/pipelines/tutorials/pipeline-upgrade), updated using the APIs, or bulk-assigned a template when selecting the **Require a pipeline template for everything** setting. --- ### Build exports URL: https://buildkite.com/docs/pipelines/governance/build-exports #### Build exports > 📘 Enterprise plan feature > The build exports feature is only available to customers on the [Enterprise](https://buildkite.com/pricing) plan, and this feature has a [build retention](/docs/pipelines/configure/build-retention) period of 12 months. If you need to retain build data beyond the [retention period](/docs/pipelines/configure/build-retention) in your [Buildkite plan](https://buildkite.com/pricing), you can export the data to your own [Amazon S3 bucket](https://aws.amazon.com/s3/) or [Google Cloud Storage (GCS) bucket](https://cloud.google.com/storage). If you don't configure a bucket, Buildkite stores the build data for 18 months in case you need it. You cannot access this build data through the API or Buildkite dashboard, but you can request the data by contacting support. > 🚧 Builds from deleted pipelines are not exported > When [a pipeline is deleted](/docs/pipelines/configure/workflows/archiving-and-deleting-pipelines#deleting-pipelines), all of its associated builds are also deleted and will _not_ be exported. > If you need to [retain builds](/docs/pipelines/configure/build-retention) to preserve their data and be able to export them, [archive the pipeline](/docs/pipelines/configure/workflows/archiving-and-deleting-pipelines#archiving-pipelines) instead. ##### How it works Builds older than the build retention limit are automatically exported as JSON using the build export strategy (S3 or GCS) you have configured. If you haven't configured a bucket for build exports, Buildkite stores that build data as JSON in our own Amazon S3 bucket for a further 18 months in case you need it. 
The following diagram outlines this process. Buildkite exports each build as multiple gzipped JSON files, which include the following data: ``` buildkite/build-exports/org={UUID}/date={YYYY-MM-DD}/pipeline={UUID}/build={UUID}/ ├── annotations.json.gz ├── artifacts.json.gz ├── build.json.gz ├── step-uploads.json.gz └── jobs/ ├── job-{UUID}.json.gz └── job-{UUID}.log ``` The files are stored in the following formats: * [Annotations](/docs/apis/rest-api/annotations#list-annotations-for-a-build) * [Artifacts](/docs/apis/rest-api/artifacts#list-artifacts-for-a-build) (as meta-data) * [Builds](/docs/apis/rest-api/builds#get-a-build) (but without `jobs`, as they are stored in separate files) * Jobs (as would be embedded in a [Build via the REST API](/docs/apis/rest-api/builds#get-a-build)) ##### Configure build exports To configure build exports for your organization, you'll need to prepare an Amazon S3 or GCS bucket before enabling exports in the Buildkite dashboard. ###### Prepare your Amazon S3 bucket * Read and understand [Security best practices for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html). * Your bucket must be located in Amazon's `us-east-1` region. * Your bucket must have a policy allowing cross-account access as described here and demonstrated in the example below¹. - Allow Buildkite's AWS account `032379705303` to `s3:GetBucketLocation`. - Allow Buildkite's AWS account `032379705303` to `s3:PutObject` keys matching `buildkite/build-exports/org=YOUR-BUILDKITE-ORGANIZATION-UUID/*`. - Do *not* allow AWS account `032379705303` to `s3:PutObject` keys outside that prefix. * Your bucket should use modern S3 security features and configurations, for example (but not limited to): - [Block public access](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html) to prevent accidental misconfiguration leading to data exposure. 
- [ACLs disabled with bucket owner enforced](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html) to ensure your AWS account owns the objects written by Buildkite. - [Server-side data encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html) (`SSE-S3` is enabled by default, we do not currently support `SSE-KMS` but let us know if you need it). - [S3 Versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) to help recover objects from accidental deletion or overwrite. * You may want to use [Amazon S3 Lifecycle](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html) to manage storage class and object expiry. * You may want to set up additional safety mechanisms for large data dumps: - We recommend setting up logging and alerts (e.g. using [AWS CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html)) to monitor usage and set thresholds for data upload limits. - Use cost monitoring with [AWS Budgets](https://docs.aws.amazon.com/cost-management/latest/userguide/budgets-managing-costs.html) or [AWS CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) to track large or unexpected uploads that may lead to high costs. Setting budget alerts can help you detect unexpected increases in usage early. 
¹ Your S3 bucket policy should look like this, with `YOUR-BUCKET-NAME-HERE` and `YOUR-BUILDKITE-ORGANIZATION-UUID` substituted with your details: ```json { "Version": "2012-10-17", "Statement": [ { "Sid": "BuildkiteGetBucketLocation", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::032379705303:root" }, "Action": "s3:GetBucketLocation", "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME-HERE" }, { "Sid": "BuildkitePutObject", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::032379705303:root" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME-HERE/buildkite/build-exports/org=YOUR-BUILDKITE-ORGANIZATION-UUID/*" } ] } ``` Your Buildkite Organization ID (UUID) can be found on the settings page described in the next section. ###### Prepare your Google Cloud Storage bucket * Read and understand [Google Cloud Storage security best practices](https://cloud.google.com/security/best-practices) and [Best practices for Cloud Storage](https://cloud.google.com/storage/docs/). * Your bucket must have a policy allowing our Buildkite service-account access as described here. - Assign Buildkite's service-account `buildkite-production-aws@buildkite-pipelines.iam.gserviceaccount.com` the `"Storage Object Creator"` role. - Scope the `"Storage Object Creator"` role using IAM Conditions to limit access to objects matching the prefix `buildkite/build-exports/org=YOUR-BUILDKITE-ORGANIZATION-UUID/*`.
- Your IAM Conditions should look like this, with `YOUR-BUCKET-NAME-HERE` and `YOUR-BUILDKITE-ORGANIZATION-UUID` substituted with your details: ```json { "expression": "resource.name.startsWith('projects/_/buckets/YOUR-BUCKET-NAME-HERE/objects/buildkite/build-exports/org=YOUR-BUILDKITE-ORGANIZATION-UUID/')", "title": "Scope build exports prefix", "description": "Allow Buildkite's service-account to create objects only within the build exports prefix" } ``` Your Buildkite Organization ID (UUID) can be found on the [organization's pipeline settings](https://buildkite.com/organizations/~/pipeline-settings) page. * Your bucket must grant our Buildkite service-account (`buildkite-production-aws@buildkite-pipelines.iam.gserviceaccount.com`) `storage.objects.create` permission. * Your bucket should use modern Google Cloud Storage security features and configurations, for example (but not limited to): - [Public access prevention](https://cloud.google.com/storage/docs/public-access-prevention) to prevent accidental misconfiguration leading to data exposure. - [Access control lists](https://cloud.google.com/storage/docs/access-control/lists) to ensure your GCP (Google Cloud Platform) account owns the objects written by Buildkite. - [Data encryption options](https://cloud.google.com/storage/docs/encryption). - [Object versioning](https://cloud.google.com/storage/docs/object-versioning) to help recover objects from accidental deletion or overwrite. * You may want to use [GCS Object Lifecycle Management](https://cloud.google.com/storage/docs/lifecycle) to manage storage class and object expiry. ###### Enable build exports To enable build exports: 1. Navigate to your [organization's pipeline settings](https://buildkite.com/organizations/~/pipeline-settings). 1. In the **Exporting historical build data** section, select your build export strategy (S3 or GCS). 1. Enter your bucket name. 1. Select **Enable Export**.
Once **Enable Export** is selected, Buildkite validates that it can connect to the bucket you provided. If there are any connectivity issues, the export is not enabled and an error is shown in the UI. As a second validation step, Buildkite uploads a test file named `deliverability-test.txt` to your build export bucket. This test file may not appear in the bucket right away, as an internal process needs to run before the upload happens. --- ### Job log archiving URL: https://buildkite.com/docs/pipelines/governance/job-log-archiving #### Job log archiving By default, Buildkite Pipelines stores job logs in Buildkite's own infrastructure. With private job log archiving, you can configure your own private Amazon S3 bucket to store job logs, giving your Buildkite organization full control over where job log data resides. > 📘 Enterprise plan feature and current limitations > The private job log archiving feature is only available to Buildkite customers on the [Enterprise](https://buildkite.com/pricing) plan. > This feature currently only supports Amazon S3 buckets in the `us-east-1` region. Google Cloud Storage and Azure Blob Storage are currently not supported. ##### How it works When job log archiving is enabled, Buildkite Pipelines writes job logs to your specified S3 bucket instead of the default storage location. Each job's log output is stored as an object in your bucket. Buildkite Pipelines reads from this location when users view job logs in the Buildkite dashboard or through the API. ##### Configure private job log archiving To configure job log archiving for your organization, you need to prepare an Amazon S3 bucket and then enable archiving in Buildkite. ###### Prepare your Amazon S3 bucket - Read and understand [Security best practices for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html).
- Your bucket must meet the following criteria: * Be located in Amazon's `us-east-1` region. * Have a policy allowing cross-account read and write access from Buildkite's AWS account `032379705303`. - Your bucket should also implement modern S3 security features and configurations, such as (but not limited to): * [Block public access](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html) to prevent accidental misconfiguration leading to data exposure. * [ACLs disabled with bucket owner enforced](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html) to ensure your AWS account owns the objects written by Buildkite. * [Server-side data encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/serv-side-encryption.html) (`SSE-S3` is enabled by default). * [S3 Versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) to help recover objects from accidental deletion or overwrite. - You may want to use [Amazon S3 Lifecycle](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html) to manage storage class and object expiry. ###### Enable job log archiving To enable job log archiving, contact Buildkite support at support@buildkite.com with your S3 bucket name and organization details. The support team will configure the archive location for your organization. ##### Related pages - [Build exports](/docs/pipelines/governance/build-exports) for exporting historical build data to your own storage. - [Managing log output](/docs/pipelines/configure/managing-log-output) for controlling how job logs are displayed. --- ### Overview URL: https://buildkite.com/docs/pipelines/deployments #### Deployments with Buildkite There are many ways to set up both manual and continuous deployment workflows using Buildkite. This page covers various ways of architecting deployment pipelines, common workflows, and how to integrate with external deployment systems.
##### Single deployment steps Adding a deployment step that runs after your tests pass is the simplest way to deploy from a Buildkite pipeline. The example `pipeline.yml` below shows how to set up continuous deployment using a single step that runs after the tests pass. ```yml steps: - label: "🔨" command: "scripts/tests" - wait - label: "🚀" command: "scripts/deploy" if: build.branch == 'main' concurrency: 1 concurrency_group: "my-app-deploy" ``` This pipeline uses a [conditional](/docs/pipelines/configure/conditionals) to only run on commits to the main branch, and sets a [concurrency limit](/docs/pipelines/configure/workflows/controlling-concurrency) of 1 to ensure that only one deployment happens at a time. ##### Dedicated deployment pipelines A dedicated deployment pipeline separates your deploy steps from any other testing and building steps. Creating deployment pipelines makes it easier to: - Separate deployment failures from test failures - Separate test and deployment `pipeline.yml` files - Re-run failed deployments - Simplify adding rollback steps - Group other deploy-related tasks with the deployment steps - Use teams for role-based access control - Allowlist deploy pipelines in agent hooks A common pattern is to have two separate pipelines, each with its own `pipeline.yml` file in your project's repository: ``` .buildkite/tests.pipeline.yml .buildkite/deploy.pipeline.yml ``` For example, your app's test pipeline (with slug `my-app`) runs on every git commit, and is configured to upload the following `.buildkite/tests.pipeline.yml` file: ```yml steps: - label: "🔨" command: "scripts/tests" - wait # This makes sure that deploys are triggered in the same order as the # test builds, no matter which test builds finish first. 
- label: "Concurrency gate" command: "exit 0" concurrency: 1 concurrency_group: "my-app-deploy-concurrency-gate" - wait - label: "🚀" trigger: "my-app-deploy" if: build.branch == 'main' build: commit: "$BUILDKITE_COMMIT" ``` Once the tests pass, and if the commit is on the main branch, continuous deployment happens by triggering a build on the deployment pipeline. The deployment pipeline (with slug `my-app-deploy`, matching the `trigger` value above) could be configured to upload the following `.buildkite/deploy.pipeline.yml` file: ```yml steps: - label: "🚀" command: "scripts/deploy" if: build.branch == 'main' concurrency: 1 concurrency_group: "my-app-deploy" ``` This pipeline runs the deployment script, and sets a [concurrency limit](/docs/pipelines/configure/workflows/controlling-concurrency) of 1 to ensure that only one deployment happens at a time. You can add any of the [pipeline step types](/docs/pipelines/configure/defining-steps) to add additional capabilities to your deployment pipelines, such as manual approval steps, teams permission checks, or additional API calls. ##### Manual approval steps Adding a manual approval to your pipeline before your deployment ensures that a deploy never goes out without explicit approval. You can use [block steps](/docs/pipelines/configure/step-types/block-step) to add manual approvals before any deploy scripts or triggers. The example below uses the same pipeline as the [Single deployment steps](#single-deployment-steps) section, but adds a block step before the step that performs the deploy: ```yml steps: - label: "🔨" command: "scripts/tests" - block: "Deploy" prompt: "Deploy to production?" - label: "🚀" command: "scripts/deploy" if: build.branch == 'main' concurrency_group: "my-app-deploy" concurrency: 1 ``` Until the block step is manually unblocked either in Buildkite or using an API call, the build will be paused and the "🚀" deployment step will not run. 
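As a sketch of the API-based approach, a block step's job can be unblocked through Buildkite's REST API. The placeholders in braces below stand in for your organization slug, pipeline slug, build number, and job ID, and the request assumes an API access token with permission to write builds:

```shell
# Unblock a block-step job via the Buildkite REST API.
# {org.slug}, {pipeline.slug}, {build.number}, and {job.id} are placeholders
# to replace with values from your own organization.
curl -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/unblock" \
  -H "Authorization: Bearer $BUILDKITE_API_TOKEN"
```
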
##### Deployment plugins There are [Buildkite plugins](/docs/pipelines/integrations/plugins) available for various systems and tools. For example, the [ECS Deploy plugin](https://github.com/buildkite-plugins/ecs-deploy-buildkite-plugin) and the [AWS Lambda Deploy plugin](https://github.com/envato/lambda-deploy-buildkite-plugin). The following example shows how to use the ECS Deploy plugin to automatically deploy a pre-built Docker image to an [AWS ECS](https://aws.amazon.com/ecs/) service: ```yaml steps: - block: "Deploy" prompt: "Deploy to production?" - label: ":ecs: 🚀" concurrency: 1 concurrency_group: "rails-app-deploy" plugins: - ecs-deploy#v1.3.0: cluster: "production" service: "app" task-definition: "production-deploy/rails-app.json" task-family: "rails-app" image: "my.ecr.repo/rails-app:${BUILDKITE_COMMIT}" task-role-arn: "deployer" deployment-configuration: "100/200" ``` You can find the latest deployment plugins in the [plugins directory](https://buildkite.com/plugins). If there's no plugin for your deployment service of choice, see the [Writing plugins](/docs/pipelines/integrations/plugins/writing) documentation for information on how to write your own. ##### External deployment systems You can deploy applications to services like Kubernetes, Argo CD, Heroku, or ECS from a script in a Buildkite [command step](/docs/pipelines/configure/step-types/command-step), similar to how you'd do so on a command line. 
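As a minimal sketch of this approach, a command step can call the deployment tool directly. The `scripts/deploy-k8s` script path and the `deploy` queue name below are hypothetical placeholders:

```yaml
steps:
  - label: ":kubernetes: Deploy"
    # "scripts/deploy-k8s" and the "deploy" queue are placeholder names —
    # substitute your own deploy script and agent queue.
    command: "scripts/deploy-k8s"
    agents:
      queue: "deploy"
    concurrency: 1
    concurrency_group: "my-app-deploy"
```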
Learn more about these processes from the following relevant pages for walk-throughs with examples: - [Deploying to AWS Lambda](/docs/pipelines/deployments/to-aws-lambda) - [Deploying to Kubernetes](/docs/pipelines/deployments/to-kubernetes) - [Deploying with Argo CD](/docs/pipelines/deployments/with-argo-cd) - [Deploying with Heroku](/docs/pipelines/deployments/with-heroku) In more complex environments you can use external deployment/delivery systems such as [Spinnaker](https://www.spinnaker.io), [Shipit](https://github.com/Shopify/shipit-engine), [Samson](https://github.com/zendesk/samson), or [Octopus](https://octopus.com). You can call the deployment system's CLI tool or API from a script in a Buildkite [command step](/docs/pipelines/configure/step-types/command-step), similar to how you'd do it on a command line. ##### GitHub deployments You can set up your pipelines to create a build whenever there is a deployment created in GitHub. You can trigger these builds using a call to [GitHub's Deployments REST API](https://developer.github.com/v3/guides/delivering-deployments/), or using the [GitHub Slack app](https://slack.github.com)'s `/github deploy my-org/my-repo` command. 
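As a sketch, a GitHub deployment can be created with a single API call. The `my-org/my-repo` repository and the token variable are placeholders here; see GitHub's Deployments API documentation for the full parameter set:

```shell
# Create a deployment for a hypothetical my-org/my-repo repository.
# $GITHUB_TOKEN is assumed to hold a token with access to the repository.
curl -X POST "https://api.github.com/repos/my-org/my-repo/deployments" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  -d '{"ref": "main", "environment": "production"}'
```
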
To enable builds to be created from GitHub deployment events, create a pipeline and select 'Trigger builds on deployment' in your Buildkite pipeline's GitHub settings. To customize the deployment's environment name and URL in GitHub, you can set the following two [build meta-data](/docs/pipelines/configure/build-meta-data) values in the pipeline that performs the deployment: ```shell buildkite-agent meta-data set "github_deployment_status:environment" "staging" buildkite-agent meta-data set "github_deployment_status:environment_url" "https://staging.my-app-dev.com/" ``` --- ### Deploying to AWS Lambda URL: https://buildkite.com/docs/pipelines/deployments/to-aws-lambda #### Deploying to AWS Lambda This tutorial demonstrates how to deploy Lambda functions to [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) using Buildkite Pipelines and the [AWS Lambda Deploy plugin](https://buildkite.com/resources/plugins/buildkite-plugins/aws-lambda-deploy-buildkite-plugin/). The plugin provides alias management, health checks, and automatic rollback capabilities for reliable Lambda deployments. 
##### Before starting Before deploying to AWS Lambda from Buildkite Pipelines, ensure the following requirements are met: - An AWS account with appropriate [Lambda permissions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) (further explained in [Required AWS IAM permissions](/docs/pipelines/deployments/to-aws-lambda#before-starting-required-aws-iam-permissions)) - [AWS CLI v2](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) installed on Buildkite agents - [`jq` command-line tool](https://jqlang.org/) available - A Lambda function already created in AWS (or permission to create one) ###### Required AWS IAM permissions Buildkite agents need the following Lambda permissions: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "lambda:GetFunction", "lambda:UpdateFunctionCode", "lambda:UpdateFunctionConfiguration", "lambda:PublishVersion", "lambda:GetAlias", "lambda:UpdateAlias", "lambda:CreateAlias", "lambda:DeleteFunction", "lambda:InvokeFunction" ], "Resource": "arn:aws:lambda:*:*:function:my-function*" } ] } ``` For S3-based deployments, additional S3 permissions are required: ```json { "Effect": "Allow", "Action": ["s3:GetObject", "s3:GetObjectVersion"], "Resource": "arn:aws:s3:::deployment-bucket/*" } ``` ##### Deploying ZIP-based Lambda functions The most common Lambda deployment pattern uses ZIP files containing the function's code. The following example demonstrates a pipeline that builds and deploys a Python Lambda function: ```yaml steps: - label: ":package: Build function" key: "build" commands: - echo "Building Lambda function..." - zip -r function.zip src/ artifact_paths: - "function.zip" - label: ":rocket: Deploy to Lambda" depends_on: "build" commands: - buildkite-agent artifact download "function.zip" . 
plugins: - aws-lambda-deploy#v1.0.0: function-name: "my-function" alias: "production" mode: "deploy" zip-file: "function.zip" region: "us-east-1" runtime: "python3.13" handler: "lambda_function.lambda_handler" timeout: 30 memory-size: 128 description: "Deployed from build ${BUILDKITE_BUILD_NUMBER}" environment: LOG_LEVEL: "INFO" STAGE: "production" auto-rollback: true health-check-enabled: true health-check-timeout: 60 health-check-payload: '{"test": true}' ``` ##### Deploying container-based Lambda functions For larger functions or functions requiring custom runtimes, Lambda supports container images. The following example deploys a containerized Lambda function from Amazon Elastic Container Registry (ECR): ```yaml steps: - label: ":rocket: Deploy Lambda container" plugins: - aws-lambda-deploy#v1.0.0: function-name: "my-container-function" alias: "production" mode: "deploy" package-type: "Image" image-uri: "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-function:${BUILDKITE_BUILD_NUMBER}" region: "us-east-1" timeout: 300 memory-size: 512 description: "Container deployment from build ${BUILDKITE_BUILD_NUMBER}" environment: STAGE: "production" VERSION: "${BUILDKITE_BUILD_NUMBER}" auto-rollback: true health-check-enabled: true health-check-payload: '{"length": 5, "width": 10}' health-check-timeout: 120 ``` ##### S3-based deployments For larger deployment packages or shared packages, Lambda functions can be deployed from S3: ```yaml steps: - label: ":rocket: Deploy from S3" plugins: - aws-lambda-deploy#v1.0.0: function-name: "my-function" alias: "production" mode: "deploy" s3-bucket: "my-deployment-bucket" s3-key: "functions/my-function-${BUILDKITE_BUILD_NUMBER}.zip" region: "us-east-1" runtime: "python3.13" handler: "lambda_function.lambda_handler" auto-rollback: true health-check-enabled: true ``` ##### Manual approval and rollback For production [deployments](/docs/pipelines/deployments), you can use [block steps](/docs/pipelines/configure/step-types/block-step) 
and manual rollback: ```yaml steps: - label: ":rocket: Deploy to production" plugins: - aws-lambda-deploy#v1.0.0: function-name: "my-function" alias: "production" mode: "deploy" zip-file: "function.zip" region: "us-east-1" runtime: "python3.13" handler: "lambda_function.lambda_handler" timeout: 30 memory-size: 128 description: "Deployment from build ${BUILDKITE_BUILD_NUMBER}" - block: ":thinking_face: Review deployment" prompt: "Check if the deployment is working correctly" - label: ":leftwards_arrow_with_hook: Manual rollback" plugins: - aws-lambda-deploy#v1.0.0: function-name: "my-function" alias: "production" mode: "rollback" region: "us-east-1" ``` ##### Health checks The plugin supports comprehensive health checks to validate deployments: ```yaml - label: ":rocket: Deploy with health checks" plugins: - aws-lambda-deploy#v1.0.0: function-name: "my-api-function" alias: "production" mode: "deploy" zip-file: "function.zip" region: "us-east-1" # Health check configuration health-check-enabled: true health-check-timeout: 120 health-check-payload: | { "httpMethod": "GET", "path": "/health", "headers": { "User-Agent": "Buildkite-HealthCheck" } } health-check-expected-status: 200 auto-rollback: true ``` Health checks run after the deployment completes and will trigger automatic rollback if they fail (when `auto-rollback` is enabled). ##### Build metadata and tracking The [AWS Lambda Deploy plugin](https://buildkite.com/resources/plugins/buildkite-plugins/aws-lambda-deploy-buildkite-plugin/) automatically tracks deployment state using Buildkite build's metadata. This enables: - **Cross-step state sharing**: multiple steps can access deployment information. - **Rollback coordination**: rollback steps can access previous version information. - **Deployment history**: track which versions were deployed when. 
Metadata keys are namespaced by function name: - `deployment:aws_lambda:my-function:current_version` - `deployment:aws_lambda:my-function:previous_version` - `deployment:aws_lambda:my-function:result` For complete configuration options, see the [AWS Lambda Deploy plugin documentation](https://github.com/buildkite-plugins/aws-lambda-deploy-buildkite-plugin/). --- ### Deploying to Kubernetes URL: https://buildkite.com/docs/pipelines/deployments/to-kubernetes #### Deploying to Kubernetes This tutorial demonstrates deploying to Kubernetes using Buildkite best practices. The tutorial uses one pipeline for tests and another for deploys. The test pipeline runs tests and pushes a Docker image to a registry. The deploy pipeline uses the `DOCKER_IMAGE` environment variable to create a [Kubernetes deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) using `kubectl`. Then, you'll see how to link them together to automate deploys from the `main` branch. First up, you need to add a step to your existing test pipeline that pushes a Docker image. Also, check that your agents have `kubectl` access to your target cluster. Refer to the notes at the end of the tutorial for tips on setting this up. ##### Create the deploy pipeline This section covers creating a new Buildkite pipeline that loads steps from `.buildkite/pipeline.deploy.yml`. We'll use a [trigger step](/docs/pipelines/configure/step-types/trigger-step) later on to connect the test and deploy pipelines. The first step will be a pipeline upload using our new deploy pipeline YAML file. Create a new pipeline. Enter `buildkite-agent pipeline upload .buildkite/pipeline.deploy.yml` in the **Commands to run** field. Now create `.buildkite/pipeline.deploy.yml` with a single step. We'll write the deploy script in the next step. 
```yml steps: - label: ":rocket: Push to :kubernetes:" command: script/buildkite/deploy concurrency: 1 concurrency_group: deploy/tutorial ``` Set `concurrency` and `concurrency_group` when updating mutable state. These settings ensure only one step runs at a time. ##### Writing the deploy script The next step is writing a deploy script that generates a [Kubernetes deployment manifest](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) from the `DOCKER_IMAGE` environment variable. Let's start with the manifest file. This sample file creates a Deployment with three replicas (horizontal scale in Kubernetes lingo), each listening on port `3000`. Change the `containerPort` to fit your application. > 📘 > The [official deployment documentation](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) covers much more than what fits in this tutorial. Refer back to these docs for information on setting CPU and memory, controlling networking, deployment update strategies, and how to expose your application to the internet. Let's call this file `k8s/deployment.yml`. ```yml --- apiVersion: apps/v1 kind: Deployment metadata: name: tutorial labels: app: tutorial spec: # TODO: replace with a value that fits your application replicas: 3 selector: matchLabels: app: tutorial template: metadata: labels: app: tutorial spec: containers: - name: app image: "${DOCKER_IMAGE}" ports: # TODO: replace with the correct port for your application - containerPort: 3000 ``` Note the manifest includes `${DOCKER_IMAGE}`. There is no environment variable substitution in YAML or `kubectl` itself. This is where our custom deploy script comes in. Our deploy script will use `envsubst` ("environment substitute"; [docs](https://linux.die.net/man/1/envsubst)) as a minimal templating solution. The resulting output may be piped directly into `kubectl`. The full script has three parts: 1. Check `$DOCKER_IMAGE` is set 1. 
Generate a complete manifest with `envsubst` and apply with `kubectl` 1. Wait for Kubernetes to complete the deploy. This fits neatly into a Bash script. Here's the complete `script/buildkite/deploy`: ```bash #!/usr/bin/env bash set -euo pipefail if [ -z "${DOCKER_IMAGE:-}" ]; then echo ":boom: \$DOCKER_IMAGE missing" 1>&2 exit 1 fi manifest="$(mktemp)" echo '--- :kubernetes: Shipping' envsubst < k8s/deployment.yml > "${manifest}" kubectl apply -f "${manifest}" echo '--- :zzz: Waiting for deployment' kubectl wait --for condition=available --timeout=300s -f "${manifest}" ``` You can test your pipeline now that everything is in place. All you need is your Docker image. ##### Test the pipeline Open the deployment pipeline and click "New Build". Click "Options" and set the `DOCKER_IMAGE` environment variable. Assuming your agents have the required access to run `kubectl` against your cluster, then success! :tada: ##### Continuous deployment We'll use a [trigger step](/docs/pipelines/configure/step-types/trigger-step) to connect the test and deploy pipelines. This effectively creates a continuous deployment pipeline. First, add a wait step at the end of your existing `.buildkite/pipeline.yml`, otherwise deploys will trigger at the wrong time, even for failed builds! ```yml # Add a wait step to only deploy after all steps complete - wait # More steps to follow ``` Next add a `trigger` step: ```yml - label: ':rocket: Deploy' # TODO: replace with your deploy pipeline's name trigger: kubernetes-tutorial-deploy # Only trigger on main build build: message: "${BUILDKITE_MESSAGE}" commit: "${BUILDKITE_COMMIT}" branch: "${BUILDKITE_BRANCH}" env: # TODO: replace with your Docker image name DOCKER_IMAGE: "asia.gcr.io/buildkite-kubernetes-tutorial/app:${BUILDKITE_BUILD_NUMBER}" branches: main ``` This `trigger` step creates a build with the same message, commit, and branch. 
`buildkite-agent pipeline upload` interpolates environment variables so the correct values are replaced when the pipeline starts. The `env` setting passes along the `DOCKER_IMAGE` environment variable. Lastly, the `branches` option restricts builds to `main`. This prevents deploying unexpected topic branches. It's magic time. Push some code. :tada: Continuous deployment! If something goes wrong, then verify your `kubectl` and Kubernetes versions are compatible. You can check with `kubectl version`. If your agents cannot connect to the cluster, then check the "Configuring kubectl and Helm access" section below for setup advice. ##### Deploying with the Helm chart plugin For complex applications that are already packaged as Helm charts, the [Buildkite deployment Helm chart plugin](https://github.com/buildkite-plugins/deployment-helm-chart-buildkite-plugin) provides a robust deployment solution. Unlike the kubectl approach, Helm maintains deployment history and enables safe rollbacks when deployments fail or cause issues in production. The ability to instantly revert to the previous working version without manual intervention or complex recovery procedures is a critical advantage for production environments where downtime must be minimized. ###### Deployment example Instead of a custom deploy script, you can use the Helm plugin in your `.buildkite/pipeline.deploy.yml`. The plugin will receive the same `DOCKER_IMAGE` environment variable from your trigger step: ```yml steps: - label: "🚀 Deploy to Production" command: | echo "Deploying Docker image: $${DOCKER_IMAGE}" echo "Extracting image repository and tag..." 
export IMAGE_REPOSITORY="$$(echo "$${DOCKER_IMAGE}" | cut -d: -f1)" export IMAGE_TAG="$$(echo "$${DOCKER_IMAGE}" | cut -d: -f2)" echo "Repository: $${IMAGE_REPOSITORY}" echo "Tag: $${IMAGE_TAG}" plugins: - deployment-helm-chart#v1.0.0: mode: deploy chart: ./k8s/helm-chart release: tutorial namespace: default values: - k8s/helm-chart/values.yaml set: - image.repository=${IMAGE_REPOSITORY} - image.tag=${IMAGE_TAG} - replicas=3 create_namespace: true wait: true atomic: true timeout: 600s ``` ###### Rollback example ```yml steps: - label: "🔄 Rollback Deployment" plugins: - deployment-helm-chart#v1.0.0: mode: rollback release: tutorial namespace: default revision: 15 # Optional: specific revision to rollback to ``` Note that while the example above shows how to integrate the Helm plugin with the existing kubectl workflow using `DOCKER_IMAGE`, the plugin can also be used independently. You can configure it with its own parameters as below: ```yml steps: - label: "🚀 Deploy to Production" plugins: - deployment-helm-chart#v1.0.0: mode: deploy chart: ./k8s/helm-chart release: tutorial namespace: production repo_url: https://charts.yourcompany.com repo_name: yourcompany values: - k8s/helm-chart/values.yaml - k8s/helm-chart/values-prod.yaml set: - image.tag=v1.2.3 - replicas=5 - environment=production create_namespace: true wait: true atomic: true timeout: 600s concurrency: 1 concurrency_group: deploy/production ``` ##### Next steps Congratulations! :tada: You've set up a continuous deployment pipeline to Kubernetes. Here are some things to do next: - Try a [block step](/docs/pipelines/configure/step-types/block-step) before the trigger to enforce manual deploys. - Use [GitHub's Deployment API](https://buildkite.com/blog/github-deployments) to trigger deployments from external tooling (for example, ChatOps). - Expose the application to the internet with [Kubernetes Service](https://kubernetes.io/docs/concepts/services-networking/service/). 
- Replace the `envsubst` implementation with something like [kustomize](https://kustomize.io/) ##### Configuring kubectl and Helm access Configuring `kubectl` and `helm` access depends on your infrastructure. Here's an overview for common scenarios. If you're on GCP using agents on GCE and a GKE cluster: 1. Grant GCE agents GKE access with a [service account](https://cloud.google.com/compute/docs/access/service-accounts) 1. Install `gcloud` and `helm` on agent instances 1. Use `gcloud container clusters get-credentials` to get `kubectl` access 1. Helm will automatically use the same kubeconfig as kubectl If you're on AWS using agents on EC2 and an EKS cluster: 1. Grant agent access to EKS API calls with an instance profile 1. [Register the Buildkite agent IAM role with EKS](https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html) 1. [Install kubectl](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) and [helm](https://helm.sh/docs/intro/install/) on agents 1. [Install IAM authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html) on agents 1. Install the AWS CLI 1. Use `aws eks update-kubeconfig` to get [kubectl access](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html) 1. Helm will automatically use the same kubeconfig as kubectl --- ### Deploying with Argo CD URL: https://buildkite.com/docs/pipelines/deployments/with-argo-cd #### Deploying with Argo CD [Argo CD](https://argoproj.github.io/cd/): - Is a continuous delivery tool specifically designed for Kubernetes. - Focuses on deploying applications to Kubernetes clusters using GitOps principles, where the desired state of your applications is declaratively defined in Git repositories and automatically synchronized to your Kubernetes clusters. 
Buildkite Pipelines and Argo CD complement each other in modern CI/CD workflows, where you can allow Pipelines to handle the CI tasks, such as building, testing, and packaging applications, and allow Argo CD to specialize in handling continuous deployment. The following example workflow outlines how Buildkite would work with Argo CD: 1. Buildkite Pipelines receives a code commit and triggers a build. 1. The build process in Pipelines might include steps to package, test, and create Kubernetes manifests. 1. Buildkite Pipelines pushes the generated manifests to a GitOps repository, which is monitored by Argo CD. 1. Argo CD detects the changes in the GitOps repo and automatically deploys the application to the target Kubernetes cluster. This approach allows for a clear separation of concerns—Pipelines handles the build and test processes, while Argo CD handles the deployment to Kubernetes. This simplifies the overall CI/CD pipeline and makes it easier to manage deployments. ##### Using Argo CD with Buildkite Pipelines There are various ways Argo CD could be used with Buildkite Pipelines. The most common ones include: - The Buildkite agent pushes Kubernetes manifests to a GitOps repository and then waits for the GitOps engine to [reconcile](http://argo-cd.readthedocs.io/en/stable/operator-manual/reconcile/) the change to a target Kubernetes cluster. - Buildkite Pipelines triggers Argo CD to deploy to Kubernetes. - Buildkite Pipelines triggers Argo CD via Argo API to either [sync an application](https://cd.apps.argoproj.io/swagger-ui#tag/ApplicationService/operation/ApplicationService_Sync), or [roll back a synchronization](https://cd.apps.argoproj.io/swagger-ui#tag/ApplicationService/operation/ApplicationService_Rollback), and monitors the deployment until completion. ##### Deploying to Kubernetes with Argo CD triggered by Buildkite Pipelines You can trigger the deployments to Argo CD through a command defined in your Buildkite pipeline definition. 
For example: ```yaml ... - key: "deploy-to-dev" label: "Trigger Argo CD sync" command: | echo "Triggering Argo CD application sync..." argocd app sync myapp --auth-token ${MYARGOCD_TOKEN} --server ${MYARGOCD_SERVER} env: MYARGOCD_TOKEN: ${MYARGOCD_AUTH_TOKEN} MYARGOCD_SERVER: "argocd.example.com" if: build.branch == "main" ``` You can insert a [block step](/docs/pipelines/configure/step-types/block-step) before triggering Argo CD for deployment to make sure a condition for deployment is met. For example: ```yaml ... - if: build.branch == "main" key: "block-step-condition-for-deploy" block: "Deploy this to Dev?" - key: "deploy-to-dev" label: "Buildkite agent to Argo CD CLI Manifest for Dev" command: | echo "--- :rocket: Deploying to Dev via Argo CD" argocd app sync my-app-dev --server $MYARGOCD_SERVER --auth-token $MYARGOCD_TOKEN env: MYARGOCD_TOKEN: ${MYARGOCD_AUTH_TOKEN} MYARGOCD_SERVER: "argocd.example.com" ... ``` > 🚧 > Bear in mind that these examples are aimed at providing you with a basic understanding of how to use Argo CD with Buildkite. For production-ready implementations, as discussed in [Risk considerations](/docs/pipelines/security/secrets/risk-considerations), it is _strongly recommended_ that you avoid using your secrets in plaintext pipeline files. Instead, you can use a [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets)-based approach. ##### Using annotations to link to Argo CD With the help of Buildkite's build [annotations](/docs/agent/cli/reference/annotate), you can include a deployment link to the Argo CD interface after the build has finished running to review the deployment status. 
For example: ```yaml steps: - label: "Deploy" command: | buildkite-agent annotate "🚀 [View Deployment in Argo CD](https://argocd.myorg.com/applications/default/myapp)" --style info --context "deployment" ``` ##### Deploying with the Argo CD deployment plugin In the traditional fire-and-forget approach, you would trigger either Argo CD's sync command (used in deploy operations) or rollback command (used in rollback operations), and the command would complete immediately. This approach doesn't include health monitoring or failure detection. If issues arise, manual intervention is required. The [Argo CD Deployment Buildkite Plugin](https://github.com/buildkite-plugins/argocd-deployment-buildkite-plugin) provides a vastly extended set of features and offers several advantages over manual CLI usage: - Unlike Argo CD's basic rollback, the plugin can automatically detect deployment failures and roll back to the last known good state, or provide interactive rollback decisions with detailed context through the use of [block steps](/docs/pipelines/configure/step-types/block-step). - The plugin performs real-time continuous health monitoring during deployment with configurable intervals and timeouts via the Argo CD API. Basic CLI commands don't provide this capability. - Deployment observability features of the plugin include automatic log collection (including pod logs), artifact upload, and detailed [Buildkite annotations](/docs/agent/cli/reference/annotate) that provide deployment visibility. - Production-ready safety features allow performing atomic deployments, setting configurable timeouts, and configuring Slack notifications for deployment events. ###### Requirements for using the plugin The plugin requires the Argo CD CLI to be installed on your Buildkite agents, as it uses the CLI for Argo CD operations while adding the enhanced monitoring and rollback logic on top. 
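For instance, installing the CLI on a Linux amd64 agent might look like the following sketch (the download URL follows Argo CD's published release-asset naming; check the Argo CD CLI installation documentation for your platform, and consider pinning a specific version rather than using `latest`):

```shell
# Download the Argo CD CLI binary and place it on the agent's PATH.
curl -sSL -o /usr/local/bin/argocd \
  https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x /usr/local/bin/argocd
argocd version --client
```
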
###### Authentication setup The plugin requires the following Argo CD authentication environment variables: - `ARGOCD_SERVER` - Argo CD server URL (can also be set in plugin configuration). - `ARGOCD_USERNAME` - Argo CD username (can also be set in plugin configuration). - `ARGOCD_PASSWORD` - Argo CD password (must be set via environment variable). For production deployments, use a secure secret management solution like [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets), HashiCorp Vault, or AWS Secrets Manager to fetch the `ARGOCD_PASSWORD` before your deployment steps. ###### Production deployment with auto-rollback For production environments, use automatic rollback on health check failures: ```yaml steps: - label: "🚀 Deploy to Production" plugins: - secrets#v1.0.0: env: ARGOCD_PASSWORD: argocd-production-password - argocd_deployment#v1.0.0: app: "my-app" argocd_server: "https://argocd.example.com" argocd_username: "admin" mode: "deploy" rollback_mode: "auto" # Automatic rollback on failure; default if not specified collect_logs: true upload_artifacts: true log_lines: 1000 health_check_interval: 30 timeout: 600 health_check_timeout: 300 notifications: slack_channel: "#deployments" ``` ###### Development deployment with manual rollback For development environments, use manual rollback control with interactive decisions: ```yaml steps: - label: "🚫 Deploy to Development" plugins: - aws-sm#v1.0.0: secrets: - name: ARGOCD_PASSWORD key: argocd/development/password - argocd_deployment#v1.0.0: app: "my-app-dev" argocd_server: "argocd-server.argocd.svc.cluster.local:443" argocd_username: "admin" mode: "deploy" rollback_mode: "manual" # Interactive rollback decision; must be specified collect_logs: true log_lines: 2000 upload_artifacts: true notifications: slack_channel: "#dev-deployments" ``` ###### Manual rollback operations You can also perform explicit rollbacks to specific revisions: ```yaml steps: - label: "🔄 Manual Rollback" plugins: - 
vault-secrets#v2.2.1: server: ${VAULT_ADDR} secrets: - path: secret/argocd/password field: ARGOCD_PASSWORD - argocd_deployment#v1.0.0: app: "my-app" argocd_server: "argocd.example.com:443" argocd_username: "admin" mode: "rollback" rollback_mode: "manual" # Or "auto"; either must be specified target_revision: "370" # Argo CD History ID or Git commit SHA collect_logs: true log_lines: 3000 upload_artifacts: true ``` Note that by default, Argo CD only returns the last 10 entries from the deployment history. For manual rollbacks, use recent History IDs (visible in `argocd app history <app-name>`) or commit SHA values from the recent deployments. --- ### Deploying with Heroku URL: https://buildkite.com/docs/pipelines/deployments/with-heroku #### Deploying with Heroku You can test and deploy [Heroku](https://heroku.com/) applications from your Buildkite pipelines. For GitHub-based pipelines you can use Heroku's [Automatic Deploys](https://devcenter.heroku.com/articles/github-integration) feature to have a branch deployed once Buildkite has marked the commit as 'passed'. To get started with automatic deploys: enable it in your Heroku dashboard, check "Wait for CI to pass before deploy", and when Heroku sees the passing Buildkite commit status it will automatically perform a slug deploy. You can also auto-deploy pull requests using [Review Apps](https://devcenter.heroku.com/articles/github-integration-review-apps). If you don't use GitHub or need more control of deployments, this guide will run you through the steps required for performing a manual deployment using `git push`. You can also use these methods to create pipelines for `heroku` CLI tasks you'd like to automate. ##### Setting up the Heroku command-line interface (CLI) You can deploy to Heroku using the same `git push` command you'd run from your development machine. The first step is installing the [Heroku CLI](https://devcenter.heroku.com/articles/heroku-command-line) on your Buildkite agent machine. 
For example, this is how you'd do it on a Linux agent machine: ```bash $ wget -O- https://toolbelt.heroku.com/install-ubuntu.sh | sh ``` You can find macOS and Windows installers on [The Heroku CLI](https://devcenter.heroku.com/articles/heroku-cli) documentation page. Now verify it installed correctly by checking the version: ```bash $ heroku --version ``` You should see version numbers for both the `heroku-toolbelt` and `heroku-cli` packages. The next step is to log in using the CLI and your Heroku credentials. We recommend creating a new user in Heroku that you only use for this purpose (a "machine user"). To create a machine user in Heroku, sign up to Heroku as a new user and then add that user as a collaborator to your existing Heroku application. When you're ready to log in with the Heroku user credentials, make sure you're running as your buildkite-agent user (which for most packages is "buildkite-agent") and then run the `heroku login` command: ```bash $ sudo su buildkite-agent $ heroku login ``` You're now ready to run `heroku` commands in your Buildkite pipelines! :tada: ##### Setting up your build pipeline In your pipeline, add a command step for your deploy. Limit it to the branch you want to deploy, and add the commands to push to Heroku: ```yaml steps: - label: "\:ruby\: Tests" command: - cp .env.sample-dev .env - foreman run rake db:setup spec - label: "\:heroku\: Deploy" branches: "main" command: - "heroku git:remote --app my-app" - "git push heroku \"$$BUILDKITE_COMMIT\":main" ``` What's happening here? `heroku git:remote` ensures the Git `remote` is always pointing to the correct Heroku application and doesn't have an old value from previous builds. The `git push` specifies the exact commit, to make sure we're pushing the same commit we've tested.
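The exact-commit refspec can be sketched entirely locally, without a Heroku account. This is a minimal demonstration of the `<commit>:<branch>` mechanics the deploy step relies on; the bare repository stands in for the Heroku remote, and all paths and names are hypothetical:

```shell
# Sketch: pushing an exact commit with a "<commit>:<branch>" refspec,
# using a local bare repository as a stand-in for the Heroku remote.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/app"
cd "$tmp/app"
git config user.email ci@example.com
git config user.name ci
echo v1 > app.txt
git add app.txt
git commit -qm "v1"
COMMIT=$(git rev-parse HEAD)          # stands in for $BUILDKITE_COMMIT
git init -q --bare "$tmp/heroku.git"  # stands in for the Heroku app's Git remote
git remote add heroku "$tmp/heroku.git"
# Push that exact commit. We use the full refname because the branch doesn't
# exist yet on this empty remote; against a real Heroku remote, where "main"
# already exists, the short "$COMMIT":main form in the step above is enough.
git push -q heroku "$COMMIT":refs/heads/main
# The remote's main now points at exactly the commit we pushed:
git --git-dir="$tmp/heroku.git" rev-parse main
```

Whatever else happens in earlier builds, the remote branch ends up at precisely the commit your tests ran against.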
##### Running a build Once you've saved the pipeline settings, the final step is to push a commit to the `main` branch and watch it automatically deploy to Heroku. ##### Post-deploy scripts If you want to run tasks after the deploy, such as running database migrations, you can add a [wait step](/docs/pipelines/configure/step-types/wait-step) and an additional command step that uses `heroku run`, for example: ```yaml steps: - label: "\:heroku\: Deploy" commands: - "heroku git:remote --app my-app" - "git push heroku \"$$BUILDKITE_COMMIT\":main" - wait - label: "\:heroku\: DB Migrations" commands: - "heroku git:remote --app my-app" - "heroku run rails db:migrate" ``` --- ### Deployment visibility with Backstage URL: https://buildkite.com/docs/pipelines/deployments/deployment-visibility-with-backstage #### Deployment visibility with Backstage [Backstage](https://backstage.io/) is an open source framework for building developer portals that provide unified visibility into your infrastructure's tools, services, and documentation. By integrating your Buildkite pipelines with Backstage using the [Buildkite plugin for Backstage](/docs/pipelines/integrations/other/backstage), you can monitor the status of your pipelines and manage their builds from a single interface. ##### Overview The Buildkite plugin for Backstage transforms how your team manages deployments by providing: - **Centralized pipeline monitoring**: view Buildkite pipeline status alongside your [Backstage Service Catalog](https://backstage.io/docs/features/software-catalog/), eliminating the need to switch between multiple tools. - **Real-time build tracking**: monitor build progress with automatic status updates. - **Build management**: trigger rebuilds directly from Backstage. - **Detailed build information**: access build logs, timing metrics, and commit context.
##### Setting up deployment visibility To use Backstage for deployment visibility with Buildkite, you'll need to have: - Admin access to both your Buildkite organization and Backstage instance. - The [Buildkite plugin for Backstage](/docs/pipelines/integrations/other/backstage) [installed](/docs/pipelines/integrations/other/backstage#installation) and [configured](/docs/pipelines/integrations/other/backstage#plugin-configuration). - A valid [Buildkite API access token](/docs/apis/managing-api-tokens) with the following permissions: * `read_pipelines` * `read_builds` * `read_user` * `write_builds` (for rebuild functionality) - Existing deployment pipelines in Buildkite that you want to monitor. - Deployment components annotated in your [Backstage Software Catalog](https://backstage.io/docs/features/software-catalog/). - Your deployment pipelines configured for optimal visibility. ###### Annotating deployment components Connect your Backstage components to their corresponding Buildkite deployment pipelines by adding annotations to your [`catalog-info.yaml`](https://backstage.io/docs/features/software-catalog/descriptor-format/) files: ```yaml apiVersion: backstage.io/v1alpha1 kind: Component metadata: name: my-production-service annotations: buildkite.com/pipeline-slug: my-org/production-deployment-pipeline tags: - production - deployment spec: type: service owner: platform-team lifecycle: production ``` Note that the `pipeline-slug` must exactly match your Buildkite organization's slug and the pipeline slug. It is also recommended to use descriptive tags to categorize and filter deployment components (for example, `production` or `deployment`). ###### Organizing deployment pipelines To maximize deployment visibility of your Buildkite pipelines in Backstage: - Use consistent naming conventions for deployment pipelines (for example, `service-name-env-deploy`).
- Tag deployment builds with environment information using [build metadata](/docs/pipelines/configure/build-meta-data). - Set up deployment-specific badges to visually identify deployment status. ##### Monitoring your deployments When properly configured, the Backstage integration provides environment overview, deployment metrics, and build artifact tracking. ##### Best practices for deployment visibility The following are some tips for optimizing your workflow in Buildkite Pipelines and Backstage for the best integration results. ###### Structure your pipelines When naming your pipelines, use descriptive and consistent naming conventions that can scale: ``` my-service-ci # Continuous integration my-service-deploy-dev # Development deployment my-service-deploy-prod # Production deployment ``` ###### Use deployment-specific metadata Add metadata context to the configuration file of your deployment pipelines: ```yaml steps: - label: ":rocket: Deploy to Production" command: deploy.sh metadata: environment: "production" version: "$BUILDKITE_TAG" deployed_by: "$BUILDKITE_BUILD_CREATOR" ``` ###### Implement deployment gates Use [block steps](/docs/pipelines/configure/step-types/block-step) to create approval gates visible in Backstage: ```yaml steps: - block: ":hand: Deployment Approval" prompt: "Deploy to production?" fields: - text: "Release notes" key: "release-notes" required: true ``` ###### Track deployment events Configure your pipelines to emit deployment events that Backstage can consume: ```bash # In your deployment script buildkite-agent annotate "Deployed version ${VERSION} to ${ENVIRONMENT}" \ --style "success" \ --context "deployment-${ENVIRONMENT}" ``` ##### Monitoring and alerting Use Backstage's deployment visibility to: - Set up deployment alerts by configuring notifications for failed deployments. - Generate regular deployment performance reports. - Monitor service level objectives for deployments.
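As a sketch of the first of these points, Buildkite's pipeline-level `notify` attribute can raise a Slack alert when a deployment build fails. The channel name below is hypothetical, and this assumes a Slack notification service is already connected to your Buildkite organization; see [notifications](/docs/pipelines/configure/notifications) for the full syntax:

```yaml
# Sketch: pipeline-level notification for failed deployment builds.
# "#deployments" is a hypothetical Slack channel.
notify:
  - slack:
      channels:
        - "#deployments"
    if: build.state == "failed"
```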
##### Troubleshooting deployment visibility This section covers some common issues and their proposed mitigations for integration between Buildkite Pipelines and Backstage using the [Buildkite plugin for Backstage](/docs/pipelines/integrations/other/backstage). ###### API access token issues If you are experiencing authentication errors, verify that: - Your [Buildkite API access token](/docs/apis/managing-api-tokens): * Has [all required permissions](#setting-up-deployment-visibility). * Is correctly set in your environment variables. - The [proxy configuration in `app-config.yaml`](/docs/pipelines/integrations/other/backstage#plugin-configuration-add-proxy-configuration) is correct. ###### Missing Buildkite deployments If your Buildkite deployments aren't appearing in Backstage: - Check that the annotation format is correct: `organization-slug/pipeline-slug`. - Verify that the pipeline slug matches exactly what's shown in your Buildkite URL. - Verify that your pipeline annotation exactly matches the deployment pipeline you're expecting to see. - Ensure the component has been properly registered in your [Backstage Software Catalog](https://backstage.io/docs/features/software-catalog/). - Ensure the builds exist within the selected time range. - Confirm that all filters are set correctly. - Check that your Buildkite API access token has [sufficient permissions](/docs/apis/managing-api-tokens#token-scopes) (`read_pipelines`, `read_builds`, `read_user`, and `write_builds`, for rebuild functionality). - Confirm your deployment builds are [properly tagged with deployment metadata](/docs/pipelines/deployments/deployment-visibility-with-backstage#best-practices-for-deployment-visibility-use-deployment-specific-metadata).
###### Incomplete deployment information To improve deployment data quality and make the deployment information complete: - Add comprehensive [build metadata](/docs/pipelines/integrations/other/backstage#deployment-tracking-using-the-metadata) and [deployment metadata](/docs/pipelines/deployments/deployment-visibility-with-backstage#best-practices-for-deployment-visibility-use-deployment-specific-metadata). - Use consistent environment naming (for example, `production`, `staging`, `dev`) and avoid variations such as `prod-east` and `production-us-east-1` for the same environment type. - Include version information in all deployment builds. ###### Missing real-time updates If your Buildkite deployments show up in Backstage correctly, but you are experiencing issues with the synchronization of updates, do the following: - Verify that your web browser tab is active, as updates pause in background tabs. - Check your network connectivity. - Ensure that the Buildkite API access token you are using hasn't expired. ###### Build logs are not loading If you are experiencing an issue with loading logs from Buildkite deployments in Backstage: - Check that the build exists and is accessible. - Ensure the Buildkite API access token has `read_builds` permission. - Verify that your [proxy configuration](/docs/pipelines/integrations/other/backstage#plugin-configuration-add-proxy-configuration) can handle log requests. --- ### Deployment plugins URL: https://buildkite.com/docs/pipelines/deployments/deployment-plugins #### Deployment plugins The _deployment plugins directory_ helps you discover Buildkite plugins for deployment. buildkite.com/resources/plugins/category/deployment --- ### Waterfall view URL: https://buildkite.com/docs/pipelines/insights/waterfall #### Waterfall view Waterfall view allows you to see build data as a waterfall chart, providing enhanced visibility into your build's job processes, durations and dependencies. To access waterfall view: 1.
Navigate to any build page. 1. Select **View**. 1. Select **Waterfall** from the dropdown menu. Waterfall view only displays data for finished steps. If a finished step has jobs that are canceled, timed out, expired or skipped, the row will render as blank for those jobs. Wait, block, and input steps are not included in the chart. Most rows will show bars with three colored sections: 1. Gray: time the job spent waiting for an agent to be assigned. 1. Yellow: time elapsed since the agent was assigned, up until the time the agent started running the job. 1. Green or Red: time the agent spent running the job. Displayed as green for a **passed** job or red for a **failed** job. You can hover over a bar to view these durations. Time is rounded to the nearest second. Group, matrix and parallel steps are shown with nested rows underneath a 'parent' row. A parent row displays a solid bar representing the total duration of its child rows. The bar is green if all child rows passed, and red if any of them failed. > 📘 Build time discrepancies in the waterfall view > Although canceled jobs appear as a blank line in the waterfall view, their duration still contributes to the total build time. For example, if a job ran for 20 minutes and was then canceled, that job will appear as a blank line in the waterfall view, but contributes 20 minutes to the total build time. --- ### Cluster insights URL: https://buildkite.com/docs/pipelines/insights/clusters #### Cluster insights > 📘 Enterprise plan feature > The cluster insights dashboard is only available on [Enterprise](https://buildkite.com/pricing) plans. The _cluster insights_ dashboard provides real-time visibility into your build infrastructure's performance, helping you monitor and optimize your CI/CD workflows. This guide explains how to use and interpret the dashboard's metrics to improve your build system's efficiency. 
To export metrics to external tools or compare cluster insights with other monitoring approaches, see the [monitoring and observability best practices](/docs/pipelines/best-practices/monitoring-and-observability#getting-metrics-out-of-buildkite-pipelines). ##### Before you start The dashboard is available to all users of your Buildkite organization, but requires your build infrastructure to be managed through [clusters](/docs/pipelines/security/clusters). If you're using [unclustered agents](/docs/agent/self-hosted/tokens#working-with-unclustered-agent-tokens) and want to access these insights, contact Buildkite support at support@buildkite.com to discuss migrating your workloads to clusters. The shortcut to the cluster insights dashboard is https://buildkite.com/organizations/~/clusters/insights. ##### Access the cluster insights dashboard To access the cluster insights dashboard: 1. Select **Agents** in the global navigation to access the **Clusters** page. 1. Select the **View Cluster Insights** button to access the cluster insights dashboard. ##### Dashboard overview The cluster insights dashboard displays the following primary metrics that help you understand your CI system: - queue wait time - queued jobs waiting - agent utilization - job pass rate Each metric provides specific insights into your build infrastructure's health and efficiency. ###### View different cluster and queue scopes The cluster insights dashboard allows you to monitor your build infrastructure at different levels of detail. By default, the dashboard shows metrics across all clusters within your Buildkite organization. However, you can use the following dropdowns to filter these metrics: - **All clusters** — select a cluster to show only performance metrics associated with that cluster. - **All cluster queues** — if a specific cluster is selected, select its queue to show only the statistics and metrics associated with that cluster's queue. 
###### Time range analysis The dashboard offers three time ranges for metric analysis. You can select between **1h** (the default), **24h**, or **7d** to restrict the historical data shown to one hour, 24 hours, or seven days, respectively. The one-hour default view helps with immediate issue investigation, while the 24-hour and seven-day views enable analysis of daily patterns and longer-term trends. ##### Understanding key metrics ###### Queue wait time The queue wait time measures how long jobs wait before an agent starts processing them, directly impacting your build times and developer productivity. While brief spikes during high-activity periods are normal, especially as auto-scaling responds, sustained high wait times may indicate underlying issues. When you notice sustained high wait times, investigate these areas: - Check agent utilization rates. - Review agent scaling configurations. - Consider increasing your base agent count. For recurring spikes, focus on: - Analyzing peak usage patterns. - Adjusting auto-scaling thresholds. - Reviewing job scheduling strategies. ###### Queued jobs waiting The queued jobs waiting metric shows the number of jobs awaiting assignment to an agent. It displays peak queue depth for your selected time period and volume trends by cluster. This metric provides critical insight into your build pipeline's throughput capacity. When interpreting this metric: - Brief spikes that resolve quickly are normal during high-activity periods and indicate your auto-scaling is working properly. - Sustained spikes signal potential agent availability constraints that require attention. For sustained high queue depths: - Compare with agent utilization — High utilization with high queue depth indicates you need more agents. - Check agent scaling configurations — Your scaling may be too slow or have insufficient maximum capacity. - Review agent health — Agents may be online but unable to process jobs due to configuration issues. 
- Analyze job distribution — Queue buildup might occur on specific queues while others remain idle. For recurring patterns in queue depth: - Identify peak usage times — Schedule non-urgent jobs outside these windows. - Implement queue prioritization — Ensure critical jobs get processed first during high-demand periods. - Adjust pre-scaling thresholds — Configure auto-scaling to anticipate known busy periods. - Consider reserved capacity — Maintain a higher baseline agent count for predictable peak periods. Effective queue management directly impacts developer productivity by reducing wait times and maintaining consistent build performance. ###### Agent utilization Agent utilization reveals the percentage of your agent fleet actively running jobs. Consistent utilization above 95% indicates potential capacity issues, and utilization below 70% suggests inefficient resource use. When facing high utilization (>95%): - Increase agent capacity. - Review job distribution across clusters. - Check for blocked or stalled agents. For low utilization (<70%): - Consider reducing agent capacity. - Review agent scaling settings. - Analyze job scheduling patterns. ###### Active agents and running jobs These metrics provide insight into your build capacity and resource usage. Sudden drops in active agents or misalignment between running jobs and agent utilization often indicate potential issues that need investigation. When investigating capacity issues, consider: - Monitor scaling effectiveness. - Check agent health when seeing unexpected drops. - Balance job distribution across clusters. ###### Job pass rate The job pass rate helps identify potential issues across your clusters. Sudden dips or sustained lower pass rates often indicate problems that require immediate attention. For sudden dips in pass rate: - Check affected clusters. - Review recent changes. - Investigate failed jobs. When dealing with sustained lower pass rates: - Analyze patterns by cluster. 
- Review agent configurations. - Check for infrastructure issues. ##### Common scenarios and solutions ###### High queue times with normal utilization High queue times combined with normal utilization often point to inefficiencies in your build infrastructure. This pattern typically indicates agent capacity issues, job scheduling problems, or agent configuration mismatches. To address these issues: - Review agent scaling settings. - Check job queue distribution. - Analyze job resource requirements. ###### Spiky utilization patterns Spiky utilization patterns usually stem from scheduled job bunching or insufficient auto-scaling response. These patterns can impact build performance and resource efficiency. To optimize your setup: - Adjust job scheduling. - Review auto-scaling configurations. - Consider workload distribution changes. ##### Getting help The cluster insights dashboard helps identify potential issues, but sometimes you may need additional support. Buildkite offers several resources to help you optimize your build infrastructure: - Review the [Buildkite agents documentation](/docs/agent). - Contact Buildkite support at support@buildkite.com for personalized guidance. - Join the [Buildkite community forum](https://forum.buildkite.community/) to discuss configurations with other users. --- ### Queue metrics URL: https://buildkite.com/docs/pipelines/insights/queue-metrics #### Queue metrics in clusters Queue metrics show the most important statistics to help you optimize your agent setup and monitor a queue's performance. These statistics are updated on the page every 10 seconds. > 📘 > _Unclustered agents_ are not reported in advanced queue metrics. Learn more about unclustered agents in [Working with unclustered agent tokens](/docs/agent/self-hosted/tokens#working-with-unclustered-agent-tokens). ##### Metrics panels ###### Agents panel **Agents Connected** is the number of agents connected to the queue. 
The circular chart represents the fraction of agents that are busy working on jobs compared to those that are idle and ready for a job. Hovering over the chart shows the **Agent Utilization** panel, which displays the percentage values for each chart component. For agent utilization, agents are considered busy if they have a job ID assigned. > 📘 > The number of agents shown in the agent panel includes agents in a `stopping` state. This may cause a variation between the number shown in the agents panel and the graph displaying `connected` agents. ###### Jobs panel **Jobs Running** shows the number of jobs assigned to agents. These are any jobs in the queue in the following states: - `ASSIGNED` - `ACCEPTED` - `RUNNING` - `CANCELING` - `TIMING_OUT` **Jobs Waiting** shows the number of jobs not yet assigned to an agent. These are any jobs for the queue in the `SCHEDULED` state. ###### Current wait panel **Current Wait** shows the various job wait time percentiles for this queue's waiting jobs. The percentiles represent how long it takes jobs to be assigned an agent. If there are no waiting jobs, dashes (`-`) are shown instead. ##### Advanced Queue Metrics Advanced Queue Metrics show a queue’s activity from the past hour, identifying patterns in how your agents adapt to job numbers and evaluating the efficiency of your [scaling rules](/docs/pipelines/tutorials/parallel-builds#auto-scaling-your-build-agents). - `Connected Agents` shows the number of agents that were connected to this queue. - `Waiting Jobs` shows the number of jobs that were waiting to be assigned an agent. - `Running Jobs` shows the number of jobs that have started running on an agent. The chart shows the past hour of activity with each data point representing a minute. A minute is represented by a snapshot of the metric at the end of that minute. > 📘 > Advanced Queue Metrics is complimentary while in beta; however, it will become a separate paid product once refined and no longer a beta feature.
###### Enable Advanced Queue Metrics Any Buildkite administrator can enable Advanced Queue Metrics for an organization. Once you enable Advanced Queue Metrics, you can only disable them by contacting support. To enable Advanced Queue Metrics: 1. Navigate to your [organization’s pipeline settings](https://buildkite.com/organizations/~/pipeline-settings). 1. In **Advanced Queue Metrics**, select **Enable Advanced Queue Metrics**. 1. Advanced Queue Metrics will now appear on your queue pages. Immediately after enabling Advanced Queue Metrics you'll notice the `Connected Agents` count will be zero or too low. This is because we only track newly connected agents once Advanced Queue Metrics is enabled. This usually resolves itself as your agents scale down and back up. --- ### Overview URL: https://buildkite.com/docs/pipelines/integrations #### Integrations Learn more about source control integrations in [Connect source control](/docs/pipelines/source-control). --- ### Overview URL: https://buildkite.com/docs/pipelines/integrations/plugins #### Buildkite plugins Plugins are small self-contained pieces of extra functionality that help you customize Buildkite to your specific workflow. Plugins modify your build [command steps](/docs/pipelines/configure/step-types/command-step) at one or more of the ten [job lifecycle hooks](/docs/agent/hooks). Each hook modifies a different part of the job lifecycle, for example: - Setting up the environment. - Checking out the code. - Running commands. - Handling artifacts. - Cleaning up the environment. The following diagram shows how a plugin might hook into the job lifecycle: Plugins can be *open source* and available for anyone to use, or *private* and kept in private repositories that only your organization and agents can access. Plugins can be hosted and referenced using [a number of sources](/docs/pipelines/integrations/plugins/using#plugin-sources). 
Plugins can also be *vendored* (if they are already present in the repository, and included using a relative path) or *non-vendored* (when they are included from elsewhere), which affects the [order](/docs/agent/hooks#job-lifecycle-hooks) they are run in. ##### How to use plugins Add plugins to [command steps](/docs/pipelines/configure/step-types/command-step) in your YAML pipeline to add functionality to Buildkite. Plugins can do things like execute steps in Docker containers, read values from a credential store, or add test summary annotations to builds. Reference plugins in your pipeline configuration, and when the step containing the plugin runs, your agent will override the default behavior with the [hooks](/docs/agent/hooks) defined in the plugin. If more than one plugin defines a command hook, only the command hook of the first plugin that defines one is run. > 📘 Plugin execution and conditionals > Plugins run during the job lifecycle, before the step-level `if` conditionals are evaluated. To conditionally run plugins, use either [group steps with conditionals](/docs/pipelines/configure/conditionals#conditionally-running-plugins-with-group-steps) or [dynamic pipeline uploads](/docs/pipelines/configure/conditionals#conditionally-running-plugins-with-dynamic-uploads). Some plugins allow configuration. This is usually defined in your `pipeline.yml` file and is read by the agent before the plugin hooks are run. See plugins' readme files for detailed configuration and usage instructions. See [Using plugins](/docs/pipelines/integrations/plugins/using) for more information about adding plugins to your pipeline definition. ##### Finding plugins The [Buildkite plugins directory](https://buildkite.com/resources/plugins) allows you to discover and find all plugins maintained by Buildkite, as well as those from third-party developers.
buildkite.com/resources/plugins Plugins supported by the Buildkite team display the Buildkite logo in the directory, and can be found in the [Buildkite Plugins GitHub organization](https://github.com/buildkite-plugins). ##### Creating a plugin Learn more about how to create plugins, along with step-by-step instructions, on the [Writing plugins](/docs/pipelines/integrations/plugins/writing) page, along with some [useful tools](/docs/pipelines/integrations/plugins/writing#plugin-tools) to help you develop them. --- ### Plugins directory URL: https://buildkite.com/docs/pipelines/integrations/plugins/directory #### Plugins directory The _Buildkite plugins directory_ (also accessible from the main [Buildkite website](https://buildkite.com/resources/plugins)), allows you to discover and find all plugins maintained by Buildkite, as well as those from third-party developers. Once you have [created your own plugin](/docs/pipelines/integrations/plugins/writing), you can also [add it to the plugins directory](/docs/pipelines/integrations/plugins/writing#publish-to-the-buildkite-plugins-directory). buildkite.com/resources/plugins Plugins supported by the Buildkite team display the Buildkite logo in the directory, and can be found in the [Buildkite Plugins GitHub organization](https://github.com/buildkite-plugins). For the instructions on writing and adding your plugin to the Buildkite plugins directory, see [Writing plugins](/docs/pipelines/integrations/plugins/writing). --- ### Using plugins URL: https://buildkite.com/docs/pipelines/integrations/plugins/using #### Using plugins Plugins can be used in pipeline [command steps](/docs/pipelines/configure/step-types/command-step) to access a library of commands or perform actions. ##### Adding a plugin to your pipeline To add a plugin to a [command step](/docs/pipelines/configure/step-types/command-step), use the `plugins` attribute. The `plugins` attribute accepts an array, so you can add multiple plugins to the same step. 
When multiple plugins are listed in the same step, they will run in the [order of the hooks](/docs/agent/hooks#job-lifecycle-hooks), and within each hook, in the order they were listed in the step. ```yml steps: - command: yarn install && yarn run test plugins: - shellcheck#v1.4.0: files: scripts/*.sh - docker#v5.13.0: image: node workdir: /app ``` > 📘 > Always specify a tag or commit (for example, `v1.2.3`) to prevent the plugin changing unexpectedly, and to prevent stale checkouts of plugins on your agent machines. Not all plugins require a `command` attribute, for example: ```yml steps: - plugins: - docker-login#v3.0.0: username: xyz - docker-compose#v5.11.0: build: app image-repository: index.docker.io/myorg/myrepo ``` Although there's no `command` attribute in the above example, this is still considered a command step, so all command attributes are available for use. It is possible to define multiple hooks of the same type in both a [plugins](/docs/agent/hooks#hook-locations-plugin-hooks) and the [agent hooks](/docs/agent/hooks#hook-locations-agent-hooks) location. See [job lifecycle hooks](/docs/agent/hooks#job-lifecycle-hooks) for the overall order of hooks, and the relative order of invocation for each location. ##### Configuring plugins Plugins are configured using attributes on steps in your pipeline YAML definition. While you can't define plugins at a pipeline level, you can use [YAML anchors](/docs/pipelines/integrations/plugins/using#using-yaml-anchors-with-plugins) to avoid repeating the plugin code over multiple steps. The simplest plugin is one that accepts no configuration, such as the [Library Example plugin](https://github.com/buildkite-plugins/library-example-buildkite-plugin): ```yml steps: - label: "\:books\:" plugins: - library-example#v1.0.0: ~ ``` More commonly, plugins accept various configuration options. 
For example, the [Docker plugin](https://github.com/buildkite-plugins/docker-buildkite-plugin) requires the attribute `image`, and we have also included the optional `workdir` attribute: ```yml steps: - command: yarn install && yarn run test plugins: - docker#v5.13.0: image: node workdir: /app ``` More advanced plugins, such as the [Docker Compose plugin](https://github.com/buildkite-plugins/docker-compose-buildkite-plugin), are designed to be used multiple times in a pipeline, using the build's [meta-data store](/docs/pipelines/configure/build-meta-data) to share information from one step to the next. This means that you can build a Docker image in the first step of a pipeline and refer to that image in subsequent steps. ```yml steps: # Prebuild the app image, upload it to a registry for later steps - label: "\:docker\: Build" plugins: - docker-compose#v5.11.0: build: app image-repository: index.docker.io/org/repo - wait # Use the app image built above to run concurrent tests - label: "\:docker\: Test %spawn" command: test.sh parallelism: 25 plugins: - docker-compose#v5.11.0: run: app ``` See each plugin's readme for a list of which options are available. ##### Using YAML anchors with plugins YAML allows you to define an item as an anchor with the ampersand `&` character. You can then reference the anchor with the asterisk `*` character, also known as an _alias_, which includes the content of the anchor at the point it is referenced. The following example uses a YAML anchor (`docker`) to remove the need to repeat the same plugin configuration on each step: ```yml common: - docker_plugin: &docker docker#v5.13.0: image: something-quiet steps: - label: "Read in isolation" command: echo "I'm reading..."
plugins: - docker#v5.13.0: image: something-quiet - label: "Read something else" command: echo "On to a new book" plugins: - docker#v5.13.0: image: something-quiet ``` ###### Overriding YAML anchors You can override a [YAML anchor](#using-yaml-anchors-with-plugins) with the `<<:` syntax before its _alias_. This allows you to override parts of the anchor item's contents, while retaining others, therefore reducing the need to create multiple anchors with similar configurations. The following example uses a YAML anchor (`docker-step`) and overrides the `command` run in one of its aliases while using the same plugin version and container image: ```yml common: - docker-step: &docker-step command: "uname -a" plugins: docker#v5.13.0: image: alpine steps: - *docker-step - <<: *docker-step command: "date" ``` This would result in the `steps` section being expanded to: ```yml ... steps: - command: "uname -a" plugins: docker#v5.13.0: image: alpine - command: "date" plugins: docker#v5.13.0: image: alpine ``` ##### Plugin sources There are three main sources of plugins: - Buildkite-maintained plugins - Non-Buildkite plugins hosted on GitHub - Local, private, and non-GitHub plugins Buildkite-maintained plugins can be found in the [Buildkite Plugins GitHub organization](https://github.com/buildkite-plugins). 
When using these plugins, you can refer to them using only the name of the plugin, for example: ```yml steps: - command: yarn install && yarn run test plugins: # Resolves to https://github.com/buildkite-plugins/docker-buildkite-plugin - docker#v5.13.0: image: node workdir: /app ``` Non-Buildkite plugins hosted on GitHub require you to include the GitHub user or organization name as well as the plugin name, for example: ```yml steps: - command: yarn install && yarn run test plugins: # Resolves to https://github.com/my-org/docker-buildkite-plugin - my-org/docker#v5.13.0: image: node workdir: /app ``` Local, private, and non-GitHub plugins can be used by specifying the fully qualified Git URL, for example: ```yml steps: - command: yarn install && yarn run test plugins: - https://bitbucket.com/my-org/my-plugin.git#v1.0.0: ~ - ssh://git@github.com/my-org/my-plugin.git#v1.0.0: ~ - file:///a-local-path/my-plugin.git#v1.0.0: ~ ``` You can also reference plugins stored in subdirectories of a repository by appending the subdirectory path to the URL. This allows you to keep multiple plugins in a single repository: ```yml steps: - command: yarn install && yarn run test plugins: - https://github.com/my-org/my-plugins.git/my-plugin#v1.0.0: ~ ``` For more information, see [Subdirectory plugins](/docs/pipelines/integrations/plugins/writing#subdirectory-plugins). 
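Plugins referenced by a Git URL accept configuration attributes in the same way as plugins referenced by name. For example (the repository URL and the `pattern` attribute below are illustrative, not a real plugin):

```yml
steps:
  - command: ls
    plugins:
      # Hypothetical private plugin, referenced by its full SSH URL
      - ssh://git@github.com/my-org/my-plugin.git#v1.0.0:
          pattern: '*.md'
```

The same `#tag` pinning conventions apply to Git-URL plugins as to named plugins.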
##### Pinning plugin versions A git tag can be deleted, or moved to point at a different commit. To protect against this, you can pin a plugin to the commit SHA of the tag, for example `docker-compose#287293c4` in the following example: ```yml steps: - command: echo 'Hello World' plugins: - docker-compose#287293c4: run: app ``` ##### Referencing plugins from a specific branch To test changes to a plugin, you can reference a branch, for example: ```yml steps: - command: echo 'Hello World' plugins: - docker-compose#feature/add-new-feature: run: app ``` ##### Disabling plugins To selectively allow and disallow plugins, see [securing your Buildkite agent](/docs/agent/self-hosted/security#restrict-access-by-the-buildkite-agent-controller-allow-a-list-of-plugins). To disable plugins entirely, set the [`no-plugins`](/docs/agent/self-hosted/configure#no-plugins) option. --- ### Writing plugins URL: https://buildkite.com/docs/pipelines/integrations/plugins/writing #### Writing plugins This page shows you how to write and publish your own Buildkite plugins, and how to validate the `plugin.yml` file, which describes your plugin, against the plugin schema. A [number of tools](/docs/pipelines/integrations/plugins/writing#plugin-tools) are also available to help you develop your plugin. ##### Tutorial: write a plugin In this tutorial, you will create a Buildkite plugin called "File Counter", which counts the number of files in the build directory once the command has finished, and creates a build annotation with the count. ```yml steps: - command: ls plugins: - a-github-user/file-counter#v1.0.0: pattern: '*.md' ``` ##### Step 1: Create a new git repository The most common kind of Buildkite plugin is a Git repository, with a descriptive name ending in `-buildkite-plugin`. This suffix is required to allow using the `user/plugin-name` syntax in pipelines. 
Let's create a new Git repository following these naming conventions: ```shell mkdir file-counter-buildkite-plugin cd file-counter-buildkite-plugin git init ``` > 📘 The `-buildkite-plugin` suffix > We recommend using the `-buildkite-plugin` suffix in the repository name because: > > You can reference the plugin in pipelines using the `user/plugin-name` syntax rather than the full URL. > It makes it easier for community members to find and use the plugin if you make it public. > It communicates the purpose of the code. > ##### Step 2: Add a plugin.yml Next, create `plugin.yml` to describe how the plugin appears in the [Buildkite plugins directory](https://buildkite.com/resources/plugins), what it requires, and what configuration options it accepts. ```yaml name: File Counter description: Annotates the build with a file count author: https://github.com/a-github-user requirements: [] configuration: properties: pattern: type: string additionalProperties: false ``` The `configuration` property defines the validation rules for the plugin configuration using the [JSON Schema](https://json-schema.org) format. The plugin in this tutorial has a single `pattern` property, of type `string`. Configuration properties are available to the hook script as environment variables with the naming pattern `BUILDKITE_PLUGIN_<NAME>_<ATTRIBUTE>`, where `<NAME>` is not the name defined in `plugin.yml` but the repository or folder name, converted to be compatible with Bash environment variables (in uppercase, and with only letters, numbers, and underscores). In this case, the configured value of `pattern` will be available as `BUILDKITE_PLUGIN_FILE_COUNTER_PATTERN`. > 📘 Accessing properties on plugins referenced with Git URLs > Note that if you [reference a plugin](/docs/pipelines/integrations/plugins/using#plugin-sources) with a full URL ending in `.git` and that plugin's name does not end with `-buildkite-plugin`, variable names will include `_GIT` as part of the plugin name. 
For example, the value of the configuration `pattern` in `https://github.com/my-org/my-plugin.git#v1.0.0` will be available as `BUILDKITE_PLUGIN_MY_PLUGIN_GIT_PATTERN`. ###### Valid plugin.yml properties

| Property | Description |
| -------- | ----------- |
| `name` | The name of the plugin, in Title Case. |
| `description` | A short sentence describing what the plugin does. |
| `author` | A URL to the plugin author (for example, website or GitHub profile). |
| `requirements` | An array of commands that are expected to exist in the agent's `$PATH`. |
| `configuration` | A [JSON Schema](https://json-schema.org) describing the valid configuration options available. |

##### Step 3: Validate the plugin The [Buildkite Plugin Linter](https://github.com/buildkite-plugins/buildkite-plugin-linter) is an app that helps ensure your plugin is up-to-date and has all the files required to list it in the plugins directory. The app is available as a Docker image you can run on the command line, with a dedicated plugin, or with Docker Compose. We recommend you start by running the linter on the command line, and then include the dedicated plugin in the pipeline for your plugin. 
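The attribute-to-environment-variable naming rule from step 2 can be sketched as a small Bash function (a hypothetical helper for illustration, not part of the Buildkite agent; it assumes you pass the plugin's repository or folder name):

```shell
# Sketch of the naming rule: join name and attribute, uppercase the result,
# and replace every character that isn't a letter or number with "_".
plugin_env_var() {
  local plugin="$1" attribute="$2"
  local name
  name=$(printf '%s_%s' "$plugin" "$attribute" | tr '[:lower:]' '[:upper:]' | tr -c '[:alnum:]' '_')
  printf 'BUILDKITE_PLUGIN_%s\n' "$name"
}

plugin_env_var "file-counter" "pattern"   # → BUILDKITE_PLUGIN_FILE_COUNTER_PATTERN
plugin_env_var "my-plugin.git" "pattern"  # → BUILDKITE_PLUGIN_MY_PLUGIN_GIT_PATTERN
```

The agent applies some additional normalization this sketch skips (for example, stripping the `-buildkite-plugin` suffix from repository names), so treat it purely as a mental model of the rule.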
###### Run on the command line You can run the plugin linter with the following Docker command: ```shell docker run -it --rm -v "$PWD:/plugin:ro" buildkite/plugin-linter --id a-github-user/file-counter ``` ###### Run with the dedicated plugin If your plugin has a Buildkite pipeline, you can add a step to lint it using the corresponding plugin: ```yml - label: ":shell: Lint" plugins: - plugin-linter#v3.3.0: id: a-github-user/file-counter ``` ###### Run with Docker Compose If you want to run the linter using Docker Compose, you can add the following to a `docker-compose.yml` file: ```yml services: lint: image: buildkite/plugin-linter command: ['--id', 'a-github-user/file-counter'] volumes: - ".:/plugin:ro" ``` You can then run the tests using the following command: ```shell docker-compose run --rm lint ``` ##### Step 4: Add a hook Plugins can implement a number of [plugin hooks](/docs/agent/hooks). For this plugin, create a `post-command` hook in a `hooks` directory: ```shell mkdir hooks touch hooks/post-command chmod +x hooks/post-command ``` ```shell #!/bin/bash set -euo pipefail PATTERN="$BUILDKITE_PLUGIN_FILE_COUNTER_PATTERN" echo "--- \:1234\: Counting the number of files" COUNT=$(find . -name "$PATTERN" | wc -l) echo "Found ${COUNT} files matching ${PATTERN}" buildkite-agent annotate "Found ${COUNT} files matching ${PATTERN}" ``` {: codeblock-file="hooks/post-command"} ##### Step 5: Add a test The next step is to test the `post-command` hook using BATS, and the `buildkite/plugin-tester` Docker image. 
```shell mkdir tests touch tests/post-command.bats chmod +x tests/post-command.bats ``` Create the following `tests/post-command.bats` file: ```shell #!/usr/bin/env bats load "$BATS_PLUGIN_PATH/load.bash" # Uncomment the following line to debug stub failures # export BUILDKITE_AGENT_STUB_DEBUG=/dev/tty @test "Creates an annotation with the file count" { export BUILDKITE_PLUGIN_FILE_COUNTER_PATTERN="*.bats" stub buildkite-agent 'annotate "Found 1 files matching *.bats" : echo Annotation created' run "$PWD/hooks/post-command" assert_success assert_output --partial "Found 1 files matching *.bats" assert_output --partial "Annotation created" unstub buildkite-agent } ``` To run the test, run the following Docker command: ```shell docker run -it --rm -v "$PWD:/plugin:ro" buildkite/plugin-tester ``` ``` ✓ Creates an annotation with the file count 1 test, 0 failures ``` To make it easier to run this command, create a Docker Compose file: ```yml version: '2' services: tests: image: buildkite/plugin-tester volumes: - ".:/plugin:ro" ``` You can now run the tests using the following command: ```shell docker-compose run --rm tests ``` ##### Step 6: Add a readme Next, add a `README.md` file to introduce the plugin to the world. ##### Developing a plugin with a feature branch When developing plugins, it is useful to have a quick feedback loop between making a change in your plugin code, and seeing the effects in a Buildkite pipeline. Let's say you're developing your feature on `my-org/plugin#dev-branch`. *By default*, if a Buildkite agent sees that it needs the plugin `my-org/plugin#dev-branch`, and it already has a checkout matching that, it will *not* pull any changes from the Git repository. But if you *do* want to see changes reflected immediately, set [`plugins-always-clone-fresh`](/docs/agent/self-hosted/configure#plugins-always-clone-fresh) to `true`. One way to try this is to add the following step to the Buildkite pipeline where you're testing your plugin. 
Configuring `BUILDKITE_PLUGINS_ALWAYS_CLONE_FRESH` on only one step means that other plugins, which are unlikely to be changing in the meantime, won't get unnecessarily cloned on every step invocation. You need agent version v3.37.0 or above to use `BUILDKITE_PLUGINS_ALWAYS_CLONE_FRESH`. ```yml steps: - command: ls env: BUILDKITE_PLUGINS_ALWAYS_CLONE_FRESH: "true" plugins: - a-github-user/file-counter#dev-branch: pattern: '*.md' ``` ##### Publish to the Buildkite plugins directory To publish your plugin to the [Buildkite plugins directory](https://buildkite.com/resources/plugins): 1. Host your plugin in GitHub as a public repository. 1. Ensure your repository contains a valid `plugin.yml` file containing at least the `name` and `description` fields. 1. Add the `buildkite-plugin` [GitHub repository topic tag](https://help.github.com/en/github/administering-a-repository/classifying-your-repository-with-topics) (your plugin will become discoverable under the `buildkite-plugin` [repository topic tag](https://github.com/topics/buildkite-plugin) as a result). 1. Wait until the next Sunday (UTC) for the plugins directory to sync with GitHub, and for your plugin to appear. Once completed, your plugin will appear in the directory. If you would like your plugin to appear in a certain category in the plugins directory, you need to add the corresponding GitHub label(s). 
Currently, the following labels will be recognized by the plugins directory: - Task * Code checkout: `checkout`, `git`, `svn` * Tests: `test`, `testing`, `junit`, `jest` * Cache: `cache`, `caching` * Containers/Docker: `docker`, `container`, `containers` * Running jobs in Kubernetes: `kubernetes`, `k8s` * Secrets: `secret`, `secrets`, `vault` * Authenticate: `auth`, `authenticate` * Writing Buildkite pipelines: `pipeline`, `pipelines` * Deploy: `deploy`, `deployment`, `release` * Running jobs in VMs: `vm`, `virtual machine` * Security & compliance: `security`, `compliance`, `audit`, `scan`, `scanning`, `vulnerability` * Running jobs in Windows: `windows` * Observability: `observability`, `monitoring`, `logging`, `metrics` * Mobile app development: `mobile`, `ios`, `android`, `react-native` * Notify: `notify`, `notification` * Linting & formatting: `lint`, `linting`, `format`, `formatting`, `shellcheck` * Packages: `package`, `packaging`, `npm`, `pip` * AI/LLMs: `ai`, `llm`, `ml`, `machine learning` * Project management: `project`, `management` * Incident management: `incident`, `incident-response`, `alert` - Integration * Integrations: `integration`, `integrations`, `slack`, `discord`, `jira` * AWS: `aws`, `amazon` * GCP: `gcp`, `google-cloud`, `google` * Azure: `azure`, `microsoft` - Language * Java: `java`, `maven`, `gradle` * Ruby: `ruby`, `rails` * Golang: `go`, `golang` * JavaScript: `javascript`, `typescript`, `node`, `nodejs` * Bazel: `bazel` * Infrastructure as code: `terraform`, `cloudformation`, `cfn`, `infrastructure` * Other languages: `julia`, `python`, `rust`, `c++`, `c#`, `dhall` > 🚧 > If you've completed the above steps and your plugin doesn't appear in the directory, send an email to [support@buildkite.com](mailto:support@buildkite.com) and we'll investigate it for you. 
##### Designing plugins: single-command plugins versus library plugins When writing plugins, there are two patterns you can choose from: - A single-command plugin: a small, declarative plugin, which exposes a single command for use in your pipeline steps. Most plugins follow this pattern. - A library plugin, or super-plugin: this plugin type assembles multiple commands into one plugin. Refer to the [library example Buildkite plugin](https://github.com/buildkite-plugins/library-example-buildkite-plugin) for an example of how to set up this type of plugin. ##### Vendored plugins If you don't plan to share the plugin outside of one repository, you can use a _vendored plugin_. Vendored plugins sit alongside the rest of the repository code, and you include them with a relative path: ```yml steps: - command: ls plugins: - ./relative/path/to/plugin: pattern: '*.md' ``` Vendored plugins run after non-vendored plugins and don't have access to all the same hooks. See [the documentation about job lifecycle hooks](/docs/agent/hooks#job-lifecycle-hooks) to learn more. ##### Subdirectory plugins You can store multiple plugins in subdirectories of a single Git repository and reference them by appending the subdirectory path to the plugin URL. This lets you manage a collection of related plugins in one repository instead of maintaining separate repositories for each plugin. 
```yml steps: - command: ls plugins: - https://github.com/my-org/my-buildkite-plugins.git/plugin-one#v1.0.0: ~ - https://github.com/my-org/my-buildkite-plugins.git/plugin-two#v1.0.0: ~ - https://github.com/my-org/my-buildkite-plugins.git/nested/plugin-three#v1.0.0: ~ ``` Each subdirectory should contain its own `plugin.yml` and `hooks/` directory, just like a standalone plugin: ``` my-buildkite-plugins/ ├── plugin-one/ │ ├── hooks/ │ │ └── environment │ └── plugin.yml ├── plugin-two/ │ ├── hooks/ │ │ └── post-command │ └── plugin.yml └── nested/ └── plugin-three/ ├── hooks/ │ └── pre-exit └── plugin.yml ``` The agent clones the entire repository, then uses the hooks and configuration from the specified subdirectory. > 📘 > Subdirectory plugins require v3.108.0 or later of the Buildkite agent. ##### Cross-platform plugins Plugins can support multiple operating systems by including platform-specific hook scripts. The Buildkite agent automatically selects the appropriate hook file based on the operating system it's running on. ###### How hook file selection works On Windows, the agent searches for hook files in the following order: 1. `hooks/<hook-name>.bat` 1. `hooks/<hook-name>.cmd` 1. `hooks/<hook-name>.ps1` 1. `hooks/<hook-name>.exe` 1. `hooks/<hook-name>` (no file extension, for Bash for Windows) On Linux and macOS, the agent only looks for hook files with no file extension (for example, `hooks/<hook-name>`). The agent uses the first matching file it finds. ###### Writing a cross-platform plugin To support both Windows and Unix-like systems, include both hook variants in your plugin: ``` my-plugin/ ├── hooks/ │ ├── post-checkout # Linux and macOS (no file extension, executable) │ └── post-checkout.bat # Windows Batch script ├── plugin.yml └── README.md ``` The agent running the job selects the appropriate file automatically. You don't need separate plugins for different operating systems. 
###### Example hooks A Linux/macOS hook (`hooks/post-checkout`): ```bash #!/bin/bash set -euo pipefail echo "Running on Unix-like system" export MY_VAR="value" ``` An equivalent Windows hook (`hooks/post-checkout.bat`): ```batch @ECHO OFF echo Running on Windows SET MY_VAR=value ``` ##### Plugin tools The following tools can be helpful when creating and maintaining your own Buildkite plugins: [:jigsaw: Template Buildkite Plugin A plugin template with customizable options you can use to create your own plugin. github.com/buildkite-plugins/template-buildkite-plugin](https://github.com/buildkite-plugins/template-buildkite-plugin) [:hammer: Buildkite Plugin Tester Docker image with a number of shell testing and stubbing tools. github.com/buildkite-plugins/plugin-tester](https://github.com/buildkite-plugins/plugin-tester) [:sparkles: Buildkite Plugin Linter Linter that checks your plugin for best practices. github.com/buildkite-plugins/buildkite-plugin-linter](https://github.com/buildkite-plugins/buildkite-plugin-linter) [:shell: Buildkite Shellcheck Plugin Plugin for detecting potential problems in your hook scripts. github.com/buildkite-plugins/shellcheck-buildkite-plugin](https://github.com/buildkite-plugins/shellcheck-buildkite-plugin) [:terminal: Buildkite CLI Command line tool for running Buildkite pipelines entirely locally. github.com/buildkite/cli](https://github.com/buildkite/cli) [:memo: Release Drafter A GitHub App to help draft your release notes. github.com/release-drafter/release-drafter](https://github.com/release-drafter/release-drafter) [:duck: Boomper A GitHub app for bumping the version numbers in your readme examples. github.com/toolmantim/boomper](https://github.com/toolmantim/boomper) For help writing the JSON Schema in the `configuration` key of your `plugin.yml` file, the following resources may be useful: [:json: JSON Schema The official JSON Schema specification. 
json-schema.org](http://json-schema.org) [:json: JSON Schema Lint Validating your JSON schema with YAML. jsonschemalint.com](https://jsonschemalint.com/) [:json: Understanding JSON Schema Tutorial to help understand how to write JSON Schema. spacetelescope.github.io/understanding-json-schema/](https://spacetelescope.github.io/understanding-json-schema/) --- ### PagerDuty URL: https://buildkite.com/docs/pipelines/integrations/notifications/pagerduty #### PagerDuty The [PagerDuty](http://pagerduty.com/) integration in Buildkite can send [change events](https://support.pagerduty.com/docs/change-events) to PagerDuty when your builds finish. ##### Generating a PagerDuty integration API key Before using the integration, you'll need to generate a PagerDuty Integration API Key. In [PagerDuty](http://pagerduty.com/), go to the **Service Directory**, then choose the service you'd like Buildkite to send change events to. Navigate to the **Integrations** tab and choose **Add a new integration**. Under **Integration Name**, choose a memorable name for this integration. A good example could be the name of the Buildkite pipeline you intend to add this integration to. For **Integration Type**, choose **Buildkite**. Once you've filled out this form, select **Add Integration**. Copy the **Integration Key** from your Integrations list and use it in [Sending change events from your pipeline](#sending-change-events-from-your-pipeline). ##### Sending change events from your pipeline By default, after you've added an Integration API Key, Buildkite will send PagerDuty a change event for every build, regardless of whether the build passed or failed. Add the PagerDuty [Integration API Key](#generating-a-pagerduty-integration-api-key) to the [`notify` attribute](/docs/pipelines/configure/notify) of your build configuration. Make sure that you're using a secure secrets management solution to handle the PagerDuty Integration key, and never commit it in plaintext to source control in a YAML file. 
See [Managing pipeline secrets](/docs/pipelines/security/secrets/managing) for more information on safely handling secrets within your infrastructure. ```yaml steps: - command: "tests.sh" - wait - command: "deploy.sh" notify: - pagerduty_change_event: "${PAGER_DUTY_API_KEY}" ``` To send change events only when the build passes, add a [condition](/docs/pipelines/configure/conditionals) to your build configuration: ```yaml notify: - pagerduty_change_event: "${PAGER_DUTY_API_KEY}" if: "build.state == 'passed'" ``` ##### Support If you've come from [the PagerDuty website](https://pagerduty.com) looking for assistance with this integration, please reach out to us at [support@buildkite.com](mailto:support@buildkite.com?subject=PagerDuty%20Change%20Events%20Integration). --- ### Slack URL: https://buildkite.com/docs/pipelines/integrations/notifications/slack #### Slack The [Slack](https://slack.com/) notification service in Buildkite lets you receive notifications about your builds and jobs in your Slack workspace. Configuring a Slack notification service authorizes Buildkite to post to the channel or user you choose. By default, notifications will be sent to all Slack channels and users you've [added and configured as separate Slack notification services](#adding-a-notification-service) through the Buildkite interface. Setting up a notification service requires Buildkite organization admin access. > 📘 > You can use the [Slack Workspace](/docs/pipelines/integrations/notifications/slack-workspace) notification service to set up Slack notifications as a once-off process for each workspace, after which you can configure notifications within your YAML pipelines to be sent to any Slack channels or users. 
##### Adding a notification service In your [Buildkite organization's **Notification Services** settings](https://buildkite.com/organizations/-/services), add a Slack notification service by clicking the **Add to Slack** button. Once logged in to Slack, choose a workspace, and grant Buildkite the ability to post in your chosen channel or user. Once you have granted access to your chosen channel or user in your Slack workspace, use the following fields to configure when automated Slack notifications are sent: - **Description** to give this notification service a name. - **Message theme** to choose how the notifications should be displayed. - **Pipelines** to choose which pipelines are allowed to send notifications. - **Branch filtering** to specify [patterns for branches](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples) (each separated by a space), whose builds will trigger notifications. - **Build state filtering** to choose the conditions for which build states send notifications. > 🚧 > There is a default maximum of 50 Slack notification services that can be added to your Buildkite organization. If you are an [Enterprise](https://buildkite.com/pricing/) plan customer and need more Slack notification services than this limit, please contact support@buildkite.com. Alternatively, you can use a [Slack Workspace](/docs/pipelines/integrations/notifications/slack-workspace) notification service, which only requires you to configure a single service for your Slack workspace. Once your Slack notification services have been configured, notifications will automatically be sent at the pipeline level, but not on the outcomes of individual steps. The **Choose notifications to send > When a build passes > After a failure ("Fixed")** option ensures you're notified when a build next passes after the selected **When a build is** states. 
> 🚧 > If you're also using the [`notify` YAML attribute](/docs/pipelines/configure/notify#slack-channel-and-direct-messages) in your pipelines for more fine-grained control over your Slack notifications, ensure you've selected the **Only Some Pipelines...** option, and have excluded these pipelines from receiving the automatic notifications (that is, leave these pipelines' checkboxes clear). ##### Changing channels and users Once a Slack notification service has been [added](#adding-a-notification-service), its Slack channel, user, and workspace cannot be changed. To post to a different channel, user, or workspace, you'll need to add a new Slack notification service. Alternatively, you can use the [Slack Workspace](/docs/pipelines/integrations/notifications/slack-workspace) notification service to set up Slack notifications as a once-off process for each workspace, after which you can configure notifications within your YAML pipelines to be sent to any Slack channels or users. ##### Conditional notifications By default, notifications are sent to all configured Slack channels. For more control over when each channel receives notifications, use the `notify` YAML attribute in your `pipeline.yml` file. See the [Slack channel message](/docs/pipelines/configure/notify#slack-channel-and-direct-messages) section of the Notifications guide for the configuration information. ##### Upgrading a legacy Slack service Slack stopped accepting notifications from legacy Buildkite services on January 10th, 2020. If you have Slack set up with a legacy service or are no longer receiving notifications, add a new Slack notification service in your [Buildkite organization's **Notification Services** settings](https://buildkite.com/organizations/-/services). ###### Identify where your existing services post notifications Compare the webhook URLs from your Buildkite notification service with your Slack integration to find your existing notification settings. 
Finding your Buildkite webhook URL: Click on the Slack notification service in Buildkite; the webhook URL is listed there. Finding your Slack integration's webhook URL: 1. In your Slack workspace's App Directory, click the **Manage** button and find the Buildkite app. 1. Click through the Buildkite app, then click the pencil button to edit your configuration. 1. The webhook URL will be listed under **Integration Settings**. ###### Confirm which pipelines, and which events, are posted Once you've found the matching Buildkite service and Slack app, confirm where and what you're posting to Slack. Take note of the events and pipelines so that you can set up a new notification service. ###### Create a new Slack notification service Using the instructions above, [add a new Buildkite notification service](/docs/pipelines/integrations/notifications/slack#adding-a-notification-service) with the same settings as the legacy integration. ##### Privacy policy For details on how Buildkite handles your information, please see Buildkite's [Privacy Policy](https://buildkite.com/about/legal/privacy-policy/). --- ### Slack Workspace URL: https://buildkite.com/docs/pipelines/integrations/notifications/slack-workspace #### Slack Workspace The Slack Workspace notification service in Buildkite lets you receive notifications about your builds in your [Slack](https://slack.com/) workspace. ##### Configuring notifications Before configuring notifications, ensure your Slack workspace is [connected to your Buildkite organization](/docs/platform/integrations/slack-workspace). Once the Slack workspace is connected, you can then use the `notify` attribute in the YAML syntax of your pipelines to [configure specific notifications](/docs/pipelines/configure/notify#slack-channel-and-direct-messages). 
```yaml notify: - slack: channels: - "buildkite-community#general" - "buildkite-community#announcements" ``` ###### Mentions in build notifications Mentions occur when there's a corresponding Slack account using one of the emails connected to the Buildkite account that triggered a build. Provide the Slack user ID, which you can access via User > More options > Copy member ID. ```yaml notify: - slack: "U123ABC456" ``` ###### Notify in private channels You can notify individuals in private channels by inviting the Buildkite Builds Slack App into the channel with `/invite @Buildkite Builds`. Build-level notifications: ```yaml notify: # Notify private channel - slack: "buildkite-community#private-channel" ``` ##### Conditional notifications Use the `notify` YAML attribute in your `pipeline.yml` file to configure conditional notifications. See the [Slack channel message](/docs/pipelines/configure/notify#slack-channel-and-direct-messages) section of the Notifications guide for the configuration information. ###### Conditional notifications with pipeline states You can control conditional notifications using `pipeline.started_passing` and `pipeline.started_failing` in the `if` attribute of the `notify` key of your `pipeline.yml`. With the previous Slack integration this was done in the UI. See [Conditional Slack notifications](/docs/pipelines/configure/notify#slack-channel-and-direct-messages-conditional-slack-notifications) for more examples. --- ### CCMenu and CCTray URL: https://buildkite.com/docs/pipelines/integrations/notifications/cc-menu #### CCMenu and CCTray Buildkite has support for the `cctray.xml` format, allowing you to feed your build status updates into desktop tools such as CCMenu, or to create build dashboards to show the status of your builds and branches. 
##### Feed URL You can access your organization's `cc.xml` feed using the URL: ``` https://cc.buildkite.com/[organization-slug].xml?access_token=xxx ``` You'll need to [create an API access token](https://buildkite.com/user/api-access-tokens) with scope `read_builds`. ##### Filtering by pipeline If you want to scope to a single pipeline, add the pipeline slug to the end of the URL: ``` https://cc.buildkite.com/[organization-slug]/[pipeline-slug].xml?access_token=xxx ``` For example, if your pipeline's URL is `https://buildkite.com/acme-co/my-proj` then your feed URL would be `https://cc.buildkite.com/acme-co/my-proj.xml?access_token=xxx` ##### Filtering by branch If you want to scope the builds to a particular branch, add `&branch=[branch-name]` to the end of the URL. This works for organization and pipeline feeds. For example, the following URL will provide build status updates for only the `main` branch of a pipeline: ``` https://cc.buildkite.com/[organization-slug]/[pipeline-slug].xml?branch=main&access_token=xxx ``` ##### CCMenu for OS X http://ccmenu.org/ ##### CCTray for Windows [http://sourceforge.net/projects/ccnet/files/CruiseControl.NET](http://sourceforge.net/projects/ccnet/files/CruiseControl.NET%20Releases/CruiseControl.NET%201.8.5/) ##### BuildNotify for Ubuntu https://github.com/anaynayak/buildnotify ##### BuildReactor Google Chrome extension https://github.com/AdamNowotny/BuildReactor --- ### Notification plugins URL: https://buildkite.com/docs/pipelines/integrations/notifications/plugins #### Notification plugins The _notification plugins directory_ helps you discover Buildkite plugins that deliver notifications. buildkite.com/resources/plugins/category/notify --- ### Overview URL: https://buildkite.com/docs/pipelines/integrations/observability/overview #### Observability overview Buildkite Pipelines generates detailed events about your pipelines, builds, jobs, and agents. 
Observability integrations let you export this data to the monitoring tools your team already uses, so you can monitor CI performance alongside the rest of your infrastructure. You can stream events to Amazon EventBridge, emit metrics and traces to Datadog or Honeycomb, or adopt the OpenTelemetry integration for vendor-neutral pipeline observability. Community plugins provide additional targets and custom dashboards.

Follow the guides below to set up and explore each of these observability options. To help you decide which approaches to combine, see the [monitoring and observability decision matrix](/docs/pipelines/best-practices/monitoring-and-observability#getting-metrics-out-of-buildkite-pipelines-decision-matrix).

- [Datadog](/docs/pipelines/integrations/observability/datadog)
- [Honeycomb](/docs/pipelines/integrations/observability/honeycomb)
- [OpenTelemetry](/docs/pipelines/integrations/observability/opentelemetry)
- [Amazon EventBridge](/docs/pipelines/integrations/observability/amazon-eventbridge)
- [Observability plugins](/docs/pipelines/integrations/observability/plugins)

---

### Datadog

URL: https://buildkite.com/docs/pipelines/integrations/observability/datadog

#### Setting up Datadog tracing on a Buildkite pipeline

[Datadog](https://www.datadoghq.com/) is a comprehensive monitoring and analytics platform that combines infrastructure monitoring, application performance monitoring, and log management, allowing you to track the health and performance of your systems while identifying and troubleshooting issues across your entire deployment pipeline.

Datadog users can send information about their Buildkite pipelines to Datadog's Continuous Integration (CI) Visibility product, also known as CI Pipeline Visibility, once the **Datadog Pipeline Visibility** notification service is enabled in Buildkite.
This way, any organization using both Datadog and Buildkite Pipelines can gain insights into the performance of their pipelines over time and ensure optimal resource utilization throughout their development workflow.

##### Using Datadog APM

To use Datadog's Application Performance Monitoring (APM) integration, launch the Buildkite agent with the `--tracing-backend datadog` flag:

```bash
buildkite-agent start --tracing-backend datadog
```

This enables Datadog APM tracing and sends the traces to a Datadog Agent at `localhost:8126` by default.

> 📘
> Learn more about the Datadog Agent and how to install it from Datadog's [Agent](https://docs.datadoghq.com/agent/) documentation.

If your Datadog Agent is located on another host, the Buildkite agent will respect the [`DD_AGENT_HOST`](https://docs.datadoghq.com/tracing/trace_collection/library_config/go/#agent) and [`DD_TRACE_AGENT_PORT`](https://docs.datadoghq.com/tracing/trace_collection/library_config/go/#traces) environment variables defined by [`dd-trace-go`](https://github.com/DataDog/dd-trace-go). Note that a Datadog Agent needs to be present at that address to ingest these traces.

Once the Buildkite agent is running with `--tracing-backend datadog`, you must run at least one job on that agent to generate trace data. After the job runs, go to Datadog > APM > Traces to view the traces.

Once Datadog APM is integrated with Buildkite Pipelines, you gain full visibility into your CI pipeline through detailed tracing of Buildkite agent activity. Each job execution is captured as a trace, with individual spans representing key phases such as hook execution, command runtime, and lifecycle events like pre-exit or post-command. These spans provide real-time insights into duration, performance bottlenecks, and potential failures across your builds.
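For reference, the remote-agent setup described above can be sketched as follows. This combines the `DD_AGENT_HOST` and `DD_TRACE_AGENT_PORT` environment variables with the agent launch; the hostname is a placeholder for wherever your Datadog Agent actually runs:

```bash
# Point the Buildkite agent's tracer at a remote Datadog Agent
# (hostname is illustrative) instead of the default localhost:8126.
export DD_AGENT_HOST="dd-agent.internal"
export DD_TRACE_AGENT_PORT="8126"
buildkite-agent start --tracing-backend datadog
```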
With built-in filtering and service tagging, Datadog enables deep observability into your CI workflows, making it easier to troubleshoot, optimize, and maintain high pipeline reliability.

##### Configuring the Datadog integration in Buildkite

To set up Datadog's CI Pipeline Visibility integration for Buildkite:

1. As a [Buildkite organization administrator](/docs/pipelines/security/permissions#manage-teams-and-permissions-organization-level-permissions), go to **Settings** > **Notification Services** and select the **Add** button next to **Datadog Pipeline Visibility**.
1. Complete the following fields:
    - **Description**: A description to help identify this integration in the future, for example `Datadog CI Pipeline Visibility`.
    - **API key**: Your Datadog API key. You can generate it in [your Datadog account settings](https://app.datadoghq.com/organization-settings/api-keys).
    - **Datadog site**: The URL of your Datadog site to send notifications to, which is typically `datadoghq.com`. While this is the default value of this field, depending on your location, you might wish to use a different site, for instance, `us3.datadoghq.com` or `us5.datadoghq.com` for the US, or `ap1.datadoghq.com` for Japan. Learn more about these sites and the current list of available sites to choose from in [Getting Started with Datadog Sites](https://docs.datadoghq.com/getting_started/site/#access-the-datadog-site).
    - **Datadog tags**: Your custom tags in Datadog. You can use one tag per line in `key:value` format.
    - **Pipelines**: A subset of pipelines you want to trace in Datadog. Select from:
        * **All Pipelines**.
        * **Only Some pipelines**, where you can select specific pipelines in your Buildkite organization.
        * **Pipelines in Teams**, where you can select pipelines accessible to specific teams configured in your Buildkite organization.
        * **Pipelines in Clusters**, where you can select pipelines associated with specific Buildkite clusters.
    - **Branch filtering**: Specify the branches that will trigger trace notifications. You can leave this field empty to trace all branches, or select a subset of branches you would like to trace, based on [branch configuration](/docs/pipelines/configure/workflows/branch-configuration) and [pattern examples](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples).
1. Click the **Add Datadog Pipeline Visibility Notification** button to save the integration.

> 📘
> For the latest compatibility information on Datadog's side regarding this integration, check the [Datadog documentation](https://docs.datadoghq.com/continuous_integration/pipelines/buildkite/#compatibility).

##### Advanced configuration

The following configurations provide additional customization options to enhance the integration between Buildkite and Datadog's CI Pipeline Visibility. These settings allow you to fine-tune how pipeline data is collected and reported, ensuring you get the most valuable insights from your CI/CD metrics.

###### Setting custom tags

To create custom tags for filtering the CI Pipeline Visibility results, you can use the [`buildkite-agent meta-data set` command](/docs/agent/cli/reference/meta-data). Here is an example of how a tag can be set through a YAML pipeline configuration:

```yaml
steps:
  - key: "dd_key_test_01"
    label: "step_01"
    command: "buildkite-agent meta-data set \"dd_tags.key\" dd_key_test_01"
  ...
```

After setting your tag and running a build of the pipeline, you'll be able to filter the CI Pipeline Visibility output results using the tag.

###### Numerical measures

Any metadata with a key that starts with `dd_measures.` and contains a numerical value is set as a metric tag that can be used to create numerical measures. For example:

```yaml
  ...
  - key: "dd-measures-01"
    label: "step_02"
    command: "buildkite-agent meta-data set \"dd_measures.memory_usage\" {numeric value}"
  ...
```

In the pipeline span for the resulting pipeline, you'll see a custom tag `memory_usage:{numeric value}`, for example `memory_usage:1000`.

###### Correlating infrastructure metrics to jobs

You can correlate jobs with the infrastructure that is running them. To do this, install the [Datadog Agent](https://docs.datadoghq.com/agent/) on the hosts that are running your Buildkite agents.

##### Visualizing pipeline data in Datadog

Once [Datadog tracing has been configured on your Buildkite pipeline](#advanced-configuration) and its builds have completed, navigate to the [CI Pipeline List](https://app.datadoghq.com/ci/pipelines) and [Executions](https://app.datadoghq.com/ci/pipeline-executions) pages to see the CI Pipeline Visibility interface populated with data.

Note that the [CI Pipeline List](https://app.datadoghq.com/ci/pipelines) page in CI Pipeline Visibility displays data for only the default branch of each repository.

##### Additional resources

Learn more about:

- Datadog integration with Buildkite in the [Set up Tracing on a Buildkite Pipeline](https://docs.datadoghq.com/continuous_integration/pipelines/buildkite/) guide of Datadog's documentation.
- Overall best CI/CD practices involving the use of Datadog's APM tracing and CI Pipeline Visibility integration, from the [CI/CD best practices](https://buildkite.com/resources/blog/ci-cd-best-practices/) blog post.
- Getting agent fleet metrics (queue depth, agent counts) into Datadog using StatsD and DogStatsD, from the [monitoring and observability best practices](/docs/pipelines/best-practices/monitoring-and-observability#common-metrics-recipes-agent-metrics-in-datadog).

> 📘
> CI Pipeline Visibility is maintained by Datadog. Therefore, for any questions or feature requests about this product, contact [Datadog Support](https://www.datadoghq.com/support/).
---

### Honeycomb

URL: https://buildkite.com/docs/pipelines/integrations/observability/honeycomb

#### Using Buildkite with Honeycomb

[Honeycomb](https://www.honeycomb.io/) is an observability and application performance management (APM) platform that helps you monitor and debug your applications. Honeycomb offers several advantages for Buildkite Pipelines users:

- **Free plan available**: start monitoring your builds without additional costs.
- **Build grouping**: group traced jobs into a single build for better visibility.
- **Comprehensive tracing**: track performance and identify bottlenecks in your CI/CD pipeline.

##### Honeycomb integration methods

You can integrate Honeycomb with Buildkite Pipelines using three methods:

- **buildevents binary**: the [buildevents binary](https://github.com/honeycombio/buildevents) captures detailed trace telemetry for each build step. Learn more about configuring this method in [Using the buildevents binary](#using-the-buildevents-binary).
- **OpenTelemetry tracing**: setting your [OpenTelemetry tracing notification endpoint to Honeycomb](/docs/pipelines/integrations/observability/opentelemetry#opentelemetry-tracing-notification-service-honeycomb) sends traces directly from the Buildkite agent. Learn more about configuring this method in [Using OpenTelemetry tracing](#using-opentelemetry-tracing).
- **Honeycomb Markers Buildkite plugin**: the [Honeycomb Markers Buildkite plugin](https://www.honeycomb.io/integration/buildkite-markers) adds Buildkite Pipelines markers to your traces. However, for security best-practice reasons, using this plugin is not recommended, as it is community-maintained with irregular updates.

##### Using the buildevents binary

The [buildevents binary](https://github.com/honeycombio/buildevents) generates trace telemetry for your builds, and captures invocation details and command outputs, creating a comprehensive trace of your entire build process.

###### How it works

The buildevents binary:

1.
Creates _spans_ (individual or grouped executed commands) for each build section and subsection.
1. Tracks the duration of each stage or command.
1. Records success/failure status and additional metadata.
1. Sends the complete trace to Honeycomb when the build finishes.

###### buildevents trace structure

The buildevents script needs a unique [Trace ID](https://github.com/honeycombio/buildevents?tab=readme-ov-file#trace-identifier) to connect all the relevant steps and commands with its build. You can use the `BUILDKITE_BUILD_ID` environment variable, since its value is unique (when re-running builds, you'll get a new `BUILDKITE_BUILD_ID`), and it is also a primary value that Buildkite Pipelines uses to identify the build.

You can get started with buildevents from the [installation instructions for buildevents](https://github.com/honeycombio/buildevents?tab=readme-ov-file#installation). After integration, you'll see key telemetry from Buildkite pipelines in Honeycomb's Traces dashboard. Each trace typically represents a full build, and each span represents a job or command. Metrics visible in this dashboard include:

- **Spans**: count of all job steps traced.
- **Duration data**: visualizes step performance for latency analysis.
- **Status information**: success or failure details.
- **Custom metadata**: additional data you choose to capture.
- **Total Errors**: spans marked with `error=true`, useful for tracking CI failures.
- **Trace Volume**: one per build, shows build frequency.

Since Honeycomb maintains the buildevents integration, direct questions and feature requests to [Honeycomb Support](https://www.honeycomb.io/support).

##### Explore data view

The **Explore Data** tab lets you inspect spans as individual structured events. You can filter by tags like `trace.trace_id`, `command_name`, `error`, or `duration_ms`. This is helpful for isolating problematic steps, long durations, or agent behavior.
Selecting a trace ID opens the flame graph (trace view), showing the full build execution timeline.

##### Using OpenTelemetry tracing

You can send traces from the Buildkite agent to Honeycomb with the help of OpenTelemetry by following these steps:

1. Enable OpenTelemetry tracing by setting the `--tracing-backend opentelemetry` flag on your Buildkite agent.
1. Set the following values in the environment where you are running the Buildkite agent:

    ```bash
    OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://api.honeycomb.io"
    OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=<your-api-key>"
    OTEL_SERVICE_NAME="buildkite-agent"
    ```

    Replace `<your-api-key>` with your actual Honeycomb API key.

For more details, see the [OpenTelemetry tracing documentation](/docs/agent/self-hosted/monitoring-and-observability/tracing#using-opentelemetry-tracing-sending-opentelemetry-traces-to-honeycomb).

---

### OpenTelemetry

URL: https://buildkite.com/docs/pipelines/integrations/observability/opentelemetry

#### OpenTelemetry

[OpenTelemetry](https://opentelemetry.io/) is an open standard for instrumenting, processing, and collecting observability data. Buildkite supports sending [OpenTelemetry Traces](https://opentelemetry.io/docs/concepts/signals/traces/) directly from the Buildkite agent, and from the Buildkite dashboard, to your OTLP endpoint.

##### OpenTelemetry tracing notification service

> 📘 Preview feature
> The OpenTelemetry Tracing Notification Service is currently in Preview.

To provide a build-wide view of build performance, enable the OpenTelemetry Tracing notification service.

###### Creating a new service

[Create a new OpenTelemetry Notification Service](https://buildkite.com/organizations/~/services/) in your organization's Notification Services settings (under Integrations).

###### Endpoint

Provide the base URL for your OTLP endpoint. Do not include the `/v1/traces` path, as it is automatically appended by the Buildkite OpenTelemetry exporter.
###### Limitations

- We currently only support the [OTLP/HTTP](https://opentelemetry.io/docs/specs/otlp/#otlphttp) binary protobuf encoding.
- We currently only support sending [trace](https://opentelemetry.io/docs/concepts/signals/traces/) data, but may introduce other OpenTelemetry signals in the future.
- The endpoint must be accessible over the internet.

###### Trace structure

OpenTelemetry traces from the Buildkite notification service follow a hierarchical span structure. All spans within a build share the same trace ID, allowing you to view the complete execution flow in your observability platform.

```
─ buildkite.build
  └─ buildkite.build.stage
     ├─ buildkite.step
     │  └─ buildkite.job
     └─ buildkite.step.group
        └─ buildkite.step
           └─ buildkite.job
```

> 📘 Build stages
> Buildkite builds that have finished may be resumed at a later time, e.g. by unblocking a `block` step, or manually retrying a failed job. To represent that in the OpenTelemetry format, we add an extra `buildkite.build.stage` span for each period of time that the build is in the `running`, `scheduled`, `canceling` or `failing` state. We also include a `buildkite.build.stage` span attribute to indicate how many times the build has been resumed.
The following attributes are included in OpenTelemetry traces from the Buildkite notification service:

###### Resource attributes

[Resource](https://opentelemetry.io/docs/concepts/resources/) attributes are included in all spans and provide context about the organization, pipeline, and build:

| Key | Description |
| --- | ----------- |
| `service.name` | Service name (configurable, defaults to `buildkite`) |
| `buildkite.organization.slug` | Organization slug |
| `buildkite.organization.name` | Organization name |
| `buildkite.organization.id` | Organization ID |
| `buildkite.pipeline.slug` | Pipeline slug |
| `buildkite.pipeline.name` | Pipeline name |
| `buildkite.pipeline.id` | Pipeline ID |
| `buildkite.pipeline.repo` | Pipeline repository URL |
| `buildkite.pipeline.graphql_id` | Pipeline GraphQL ID |
| `buildkite.pipeline.web_url` | Pipeline web URL |
| `buildkite.cluster.id` | Cluster ID (if pipeline uses a cluster) |
| `buildkite.cluster.name` | Cluster name (if pipeline uses a cluster) |
| `buildkite.cluster.graphql_id` | Cluster GraphQL ID (if pipeline uses a cluster) |

###### Span attributes

[Span attributes](https://opentelemetry.io/docs/concepts/signals/traces/#attributes) are specific to certain span types:

| Key | Spans | Description |
| --- | ----- | ----------- |
| `buildkite.build.number` | All | Build number |
| `buildkite.build.commit` | All | Build commit SHA |
| `buildkite.build.message` | All | Build commit message |
| `buildkite.build.branch` | All | Build branch |
| `buildkite.build.source` | All | Build source (`ui`, `api`, `webhook`, etc.) |
| `buildkite.build.graphql_id` | All | Build GraphQL ID |
| `buildkite.build.web_url` | All | Build web URL |
| `buildkite.build.creator.id` | All (when build creator exists) | Build creator ID |
| `buildkite.build.creator.email` | All (when build creator exists) | Build creator email |
| `buildkite.build.creator.name` | All (when build creator exists) | Build creator name |
| `buildkite.build.creator.graphql_id` | All (when build creator exists) | Build creator GraphQL ID |
| `buildkite.build.state` | `buildkite.build`, `buildkite.build.stage` | Build state (running, passed, failed, etc.) |
| `buildkite.build.blocked_state` | `buildkite.build`, `buildkite.build.stage` (when blocked) | Build blocked state (if blocked) |
| `buildkite.build.stage` | `buildkite.build.stage`, `buildkite.job`, `buildkite.step.group`, `buildkite.step` | Build stage/phase number |
| `buildkite.step.id` | `buildkite.job`, `buildkite.step`, `buildkite.step.group` | Step ID |
| `buildkite.step.key` | `buildkite.job`, `buildkite.step`, `buildkite.step.group` | Step key |
| `buildkite.step.command` | `buildkite.job`, `buildkite.step` (command steps only) | Step command script |
| `buildkite.step.label` | `buildkite.job`, `buildkite.step`, `buildkite.step.group` | Step label |
| `buildkite.step.type` | `buildkite.step`, `buildkite.step.group` | Step type |
| `buildkite.step.matrix` | `buildkite.step`, `buildkite.step.group` (matrix steps) | Whether step uses matrix (true) |
| `buildkite.step.group.label` | `buildkite.step`, `buildkite.step.group` (group steps) | Group step label |
| `buildkite.step.group.key` | `buildkite.step`, `buildkite.step.group` (group steps) | Group step key |
| `buildkite.job.id` | `buildkite.job` | Job ID |
| `buildkite.job.graphql_id` | `buildkite.job` | Job GraphQL ID |
| `buildkite.job.type` | `buildkite.job` | Job type (script, manual, waiter, etc.) |
| `buildkite.job.label` | `buildkite.job` | Job label/name |
| `buildkite.job.command` | `buildkite.job` | Job command |
| `buildkite.job.agent_query_rules` | `buildkite.job` | Job agent query rules |
| `buildkite.job.exit_status` | `buildkite.job` | Job exit status |
| `buildkite.job.passed` | `buildkite.job` | Whether job passed |
| `buildkite.job.soft_failed` | `buildkite.job` | Whether job soft failed |
| `buildkite.job.state` | `buildkite.job` | Job state |
| `buildkite.job.runnable_at` | `buildkite.job` | When job became runnable |
| `buildkite.job.started_at` | `buildkite.job` | When job started |
| `buildkite.job.finished_at` | `buildkite.job` | When job finished |
| `buildkite.job.wait_time_ms` | `buildkite.job` | Job wait time in milliseconds |
| `buildkite.job.unblocked_by` | `buildkite.job` (when unblocked) | User who unblocked job (object with uuid, graphql_id, name) |
| `buildkite.job.retried_in_job_id` | `buildkite.job` (when retried) | ID of retry job (if retried) |
| `buildkite.job.signal_reason` | `buildkite.job` (when terminated by signal) | Signal reason (if terminated by signal) |
| `buildkite.job.matrix` | `buildkite.job` (matrix jobs only) | Job matrix configuration (JSON) |
| `buildkite.agent.name` | `buildkite.job` (when agent assigned) | Agent name |
| `buildkite.agent.id` | `buildkite.job` (when agent assigned) | Agent ID |
| `buildkite.agent.queue` | `buildkite.job` (when agent assigned) | Agent queue |
| `buildkite.agent.meta_data` | `buildkite.job` (when agent assigned) | Agent metadata |
| `error.type` | All (when error status) | Error type description |

###### Headers

Add any additional HTTP headers to the request. Depending on the destination, you may need to specify API keys or other headers to influence the behaviour of the downstream collector. Values for headers are always stored encrypted server-side. Here are some common examples.
###### Bearer token

Key: `Authorization`
Value: `Bearer <token>`

See the [Bearer Token example](https://github.com/buildkite/opentelemetry-notification-service-examples/blob/main/collector-config/bearer-token-auth-debug.yml) for an example OpenTelemetry Collector configuration.

###### Basic auth

First, create a base64-encoded string of the username and password separated by a colon.

```bash
echo -n "${USER}:${PASSWORD}" | base64
```

Key: `Authorization`
Value: `Basic <base64-encoded-credentials>`

See the [Basic Authentication example](https://github.com/buildkite/opentelemetry-notification-service-examples/blob/main/collector-config/basic-auth-debug.yml) for an example OpenTelemetry Collector configuration.

###### Honeycomb

Set the Endpoint to `https://api.honeycomb.io`, or `https://api.eu1.honeycomb.io` if your Honeycomb team is in the EU instance. Add the required header:

| Key | Value |
| --- | ----- |
| `x-honeycomb-team` | `<your-api-key>` |

For more information, see the Honeycomb documentation: https://docs.honeycomb.io/send-data/opentelemetry/#using-the-honeycomb-opentelemetry-endpoint

###### Datadog agent-less OpenTelemetry

> 🚧 Preview feature
> The Datadog OTLP traces intake endpoint is currently in preview. Contact your Datadog account representative to request access.

Set the endpoint to the OTLP traces intake base URL for your [Datadog site](https://docs.datadoghq.com/getting_started/site/). Do not include the `/v1/traces` path, as it is automatically appended. For example:

- US1: `https://otlp.datadoghq.com`
- US3: `https://otlp.us3.datadoghq.com`
- US5: `https://otlp.us5.datadoghq.com`
- EU1: `https://otlp.datadoghq.eu`
- AP1: `https://otlp.ap1.datadoghq.com`

Add the required headers:

| Key | Value |
| --- | ----- |
| `dd-api-key` | `<your-api-key>` |
| `dd-otlp-source` | `<your-source-identifier>` |

The `dd-otlp-source` value is a specific identifier provided by Datadog after your organization is on the allowlist for the OTLP traces intake endpoint.
For more information, see [Datadog's OTLP traces intake documentation](https://docs.datadoghq.com/opentelemetry/setup/otlp_ingest/traces/).

###### Datadog APM via OpenTelemetry collector

See the [Bearer token Datadog example](https://github.com/buildkite/opentelemetry-notification-service-examples/blob/main/collector-config/bearer-token-auth-datadog.yml) for more information on forwarding traces to Datadog APM using the Datadog exporter.

###### Computing metrics from OpenTelemetry traces

The OpenTelemetry collector can be used to process incoming trace spans and generate custom metrics on the fly using the [signaltometrics](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/connector/signaltometricsconnector) connector, and the resulting metrics can be stored in metric stores like Prometheus or InfluxDB. See the [signaltometrics example](https://github.com/buildkite/opentelemetry-notification-service-examples/blob/main/collector-config/bearer-token-auth-signal-to-metrics-otlp.yml).

###### OpenTelemetry collector

The OpenTelemetry collector is an open source service for collecting, exporting, and processing telemetry signals. See [collector-config](https://github.com/buildkite/opentelemetry-notification-service-examples/tree/main/collector-config) for examples of OpenTelemetry collector configuration.

If using the `otel/opentelemetry-collector-contrib` Docker image, you can configure the collector by mounting your config file at `/etc/otelcol-contrib/config.yaml`, or by overriding the `command` to `--config=env:OTEL_CONFIG` and setting the `OTEL_CONFIG` environment variable to the _contents_ of your config file.

Consult the [Deployment](https://opentelemetry.io/docs/collector/deployment/) guide in the OpenTelemetry documentation for more information about hosting the collector.
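As an illustration, here is a minimal collector configuration that accepts OTLP/HTTP traces and prints them with the `debug` exporter — a sketch for verifying connectivity before wiring up a real backend (the bind address and port are the collector's defaults; swap the exporter for your actual destination):

```yaml
# Minimal OpenTelemetry collector config (contrib distribution, a sketch).
# Receives OTLP over HTTP on the default port and logs spans to stdout.
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```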
The OpenTelemetry collector also supports many downstream data stores via [exporters](https://opentelemetry.io/docs/collector/configuration/#exporters), including StatsD, Prometheus, Kafka, Tempo, and OTLP, as well as many other observability tools and vendors. See the [OpenTelemetry registry](https://opentelemetry.io/ecosystem/registry/?s=&component=exporter&language=collector) for a more complete list of supported exporters.

###### References

- https://opentelemetry.io/docs/collector/
- https://github.com/open-telemetry/opentelemetry-collector
- https://github.com/open-telemetry/opentelemetry-collector-contrib
- https://opentelemetry.io/docs/collector/configuration/#authentication
- https://hub.docker.com/r/otel/opentelemetry-collector-contrib

###### Validating OpenTelemetry collector configuration

`otelcol validate` lets you validate your [collector configuration](https://opentelemetry.io/docs/collector/configuration/). For example, to validate one of the example configuration files in the [examples repository](https://github.com/buildkite/opentelemetry-notification-service-examples/tree/main/collector-config), say `basic-auth-debug.yml`, you could run the following command:

```bash
docker run --rm -it -v $(pwd)/collector-config:/config otel/opentelemetry-collector-contrib validate --config=/config/basic-auth-debug.yml && echo "config valid"
```

Or, for a configuration file like `bearer-token-auth-datadog.yml` that references environment variables, you would run the following command, noting the `-e` flags that provide the environment variables:

```bash
docker run --rm -e DD_API_KEY=abcd -e OTLP_HTTP_BEARER_TOKEN=example -it -v $(pwd)/collector-config:/config otel/opentelemetry-collector-contrib validate --config=/config/bearer-token-auth-datadog.yml && echo "config valid"
config valid
```

You can also use the online validation tool available at https://www.otelbin.io/.
##### OpenTelemetry tracing from Buildkite agent

See [Tracing in the Buildkite agent](/docs/agent/self-hosted/monitoring-and-observability/tracing#using-opentelemetry-tracing).

###### Required agent flags / environment variables

To propagate traces from the Buildkite control plane through to the agent running the job, pass the following CLI flags to `buildkite-agent start`, and set the appropriate environment variables to specify the OpenTelemetry collector details.

| Flag | Environment Variable | Value |
| ---- | -------------------- | ----- |
| `--tracing-backend` | `BUILDKITE_TRACING_BACKEND` | `opentelemetry` |
| `--tracing-propagate-traceparent` | `BUILDKITE_TRACING_PROPAGATE_TRACEPARENT` | `true` (default: `false`) |
| `--tracing-service-name` | `BUILDKITE_TRACING_SERVICE_NAME` | `buildkite-agent` (default) |
| | `OTEL_EXPORTER_OTLP_ENDPOINT` | `http://otel-collector:4317` |
| | `OTEL_EXPORTER_OTLP_HEADERS` | see the _Authentication_ section below |
| | `OTEL_EXPORTER_OTLP_PROTOCOL` | `grpc` (default) or `http/protobuf` |
| | `OTEL_RESOURCE_ATTRIBUTES` | `key1=value1,key2=value2` |

Note: the `http/protobuf` protocol is only supported on Buildkite agent [v3.101.0](https://github.com/buildkite/agent/releases/tag/v3.101.0) or newer.

See the [OpenTelemetry SDK documentation](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/) for more information on available environment variables.

###### Authentication

Authentication headers vary by provider. Below are the most commonly used authentication patterns. For specific requirements, consult the provider's documentation.
###### Bearer token

For [Honeycomb](https://docs.honeycomb.io/get-started/), [Lightstep](https://docs.lightstep.com/), and most other providers:

```bash
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <token>"
```

###### Basic authentication

[Grafana Cloud](https://grafana.com/docs/grafana-cloud/) requires Basic authentication with an instance ID and token, base64-encoded in the format `instance_id:token`:

```bash
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <base64-encoded-credentials>"
```

To encode the credentials in base64, run the following command:

```bash
echo -n "your-instance-id:your-token" | base64
```

###### Custom headers

Some providers (such as Honeycomb) also support custom headers:

```bash
OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=<your-api-key>"
```

###### Multiple headers

Multiple headers can be specified by separating values with commas:

```bash
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <token>,x-custom-header=value"
```

###### Propagating traces to Buildkite agents

Propagating trace spans from the OpenTelemetry Notification service requires Buildkite agent [v3.100](https://github.com/buildkite/agent/releases/tag/v3.100.0) or newer, and the `--tracing-propagate-traceparent` flag or equivalent environment variable.

###### Propagating traces from Buildkite agents to commands

Trace contexts are propagated automatically from a Buildkite agent to all its child processes. See [Tracing in the Buildkite agent](/docs/agent/self-hosted/monitoring-and-observability/tracing#using-opentelemetry-tracing-trace-context-propagation).

###### Buildkite hosted agents

Exporting OpenTelemetry traces from Buildkite hosted agents currently requires a custom agent image with the following environment variables set. Custom images can be created in your cluster settings, and are currently supported for Linux only.
```dockerfile
# this is the same as --tracing-backend opentelemetry
ENV BUILDKITE_TRACING_BACKEND="opentelemetry"
# this is the same as --tracing-propagate-traceparent
ENV BUILDKITE_TRACING_PROPAGATE_TRACEPARENT="true"
# service name is configurable
ENV OTEL_SERVICE_NAME="buildkite-agent"
# http/protobuf is available on Buildkite agent v3.101.0 or newer
ENV OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
# the gRPC transport requires a port to be specified in the URL
ENV OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4317"
# Authentication can vary by provider - see the authentication examples above.
# Bearer is the most common method of authentication:
ENV OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <token>"
# For Grafana Cloud, use Basic authentication instead (this would override
# the Bearer header above, so set only one of the two):
# ENV OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <base64-encoded-credentials>"
```

---

### Amazon EventBridge

URL: https://buildkite.com/docs/pipelines/integrations/observability/amazon-eventbridge

#### Amazon EventBridge

The [Amazon EventBridge](https://aws.amazon.com/eventbridge/) notification service in Buildkite lets you stream events in real time from your Buildkite account to your AWS account.

##### Events

Once you've configured an Amazon EventBridge notification service in Buildkite, the following events are published to the partner event bus:

| Detail Type | Description |
| ----------- | ----------- |
| [Build Created](#events-build-created) | A build has been created |
| [Build Started](#events-build-started) | A build has started |
| [Build Finished](#events-build-finished) | A build has finished |
| [Build Failing](#events-build-failing) | A build is failing |
| [Build Blocked](#events-build-blocked) | A build has been blocked |
| [Job Scheduled](#events-job-scheduled) | A job has been scheduled |
| [Job Started](#events-job-started) | A command step job has started running on an agent |
| [Job Finished](#events-job-finished) | A job has finished. To check a job's result, use the `passed` field. The value is `true` when the job passed, and `false` otherwise. |
| [Job Activated](#events-job-activated) | A block step job has been unblocked using the web or API |
| [Agent Connected](#events-agent-connected) | An agent has connected to the API |
| [Agent Lost](#events-agent-lost) | An agent has been marked as lost. This happens when Buildkite stops receiving pings from the agent |
| [Agent Disconnected](#events-agent-disconnected) | An agent has disconnected. This happens when the agent shuts down and disconnects from the API |
| [Agent Stopping](#events-agent-stopping) | An agent is stopping. This happens when an agent is instructed to stop from the API. It first transitions to stopping and finishes any current jobs |
| [Agent Stopped](#events-agent-stopped) | An agent has stopped. This happens when an agent is instructed to stop from the API. It can be graceful or forceful |
| [Agent Blocked](#events-agent-blocked) | An agent has been blocked. This happens when an agent's IP address is no longer included in the agent token's [allowed IP addresses](/docs/pipelines/security/clusters/manage#restrict-an-agent-tokens-access-by-ip-address) |
| [Cluster Token Registration Blocked](#events-cluster-token-registration-blocked) | An attempted agent registration is blocked because the request IP address is not included in the agent token's [allowed IP addresses](/docs/pipelines/security/clusters/manage#restrict-an-agent-tokens-access-by-ip-address) |
| [Audit Event Logged](#audit-event-logged) | An audit event has been logged for the organization |

See [build states](/docs/pipelines/configure/defining-steps#build-states) and [job states](/docs/pipelines/configure/defining-steps#job-states) to learn more about the sequence of these events.
##### Configuring

In your Buildkite [Organization's Notification Settings](https://buildkite.com/organizations/-/services), add an Amazon EventBridge notification service. Once you've entered your AWS region and AWS Account ID, a Partner Event Source is created in your AWS account matching the **Partner Event Source Name** shown on the settings page.

You can then start consuming the events in your AWS account. The links to **Partner Event Sources Console** and **Event Rules** take you to the relevant pages in your AWS Console.

##### Filtering

When creating your EventBridge rule, you can specify an **Event pattern** filter to limit which events are processed. You can use this to respond only to certain events based on their type, or on any attribute within the event payload. For example, to process only [Build Finished](#events-build-finished) events, you'd configure your rule with an event pattern that matches the `Build Finished` detail type.

You can use any event property in your custom event pattern. For example, the following event pattern allows only "Build Started" and "Build Finished" events containing a particular pipeline slug:

```json
{
  "detail-type": [
    "Build Started",
    "Build Finished"
  ],
  "detail": {
    "pipeline": {
      "slug": [
        "some-pipeline"
      ]
    }
  }
}
```

See the [Example Event Payloads](#example-event-payloads) for a full list of properties, and the [AWS EventBridge Event Patterns documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/filtering-examples-structure.html) for full details on the pattern syntax.
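If it helps to reason about how these patterns behave, the following standalone sketch (not the EventBridge engine itself, and covering only the exact-value matching used above, not prefix or anything-but rules) checks sample events against a pattern:

```python
def pattern_matches(pattern, event):
    """Return True if `event` matches `pattern` under EventBridge's basic
    exact-value semantics: every pattern key must exist in the event, and
    each leaf list enumerates the allowed values."""
    for key, expected in pattern.items():
        if key not in event:
            return False
        value = event[key]
        if isinstance(expected, dict):
            # Nested pattern: recurse into the corresponding sub-object
            if not isinstance(value, dict) or not pattern_matches(expected, value):
                return False
        elif value not in expected:
            # Leaf: the event value must be one of the listed literals
            return False
    return True

pattern = {
    "detail-type": ["Build Started", "Build Finished"],
    "detail": {"pipeline": {"slug": ["some-pipeline"]}},
}

event = {
    "detail-type": "Build Finished",
    "detail": {"pipeline": {"slug": "some-pipeline"}},
}

print(pattern_matches(pattern, event))                               # True
print(pattern_matches(pattern, {**event, "detail-type": "Build Created"}))  # False
```

A "Build Created" event fails the first leaf check because its detail type is not in the allowed list, which is exactly how the rule above admits only started and finished builds for the given pipeline slug.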
##### Logging

To debug your EventBridge events, you can configure a rule that routes the event bus directly to AWS CloudWatch Logs. You can then use [CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html) to query and inspect the live events from your event bus by choosing the event log group configured above.

##### Lambda example: Track agent wait times using CloudWatch metrics

You can use the following [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) function with the [Job Started](#events-job-started) event to publish a [CloudWatch metric](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html) that tracks how long jobs wait for an agent to become available:

```js
const AWS = require("aws-sdk");
const cloudWatch = new AWS.CloudWatch();

exports.handler = (event, context, callback) => {
  // Subtracting the two timestamps yields milliseconds, so divide by
  // 1000 to report the wait time in seconds
  const waitTime =
    (new Date(event.detail.job.started_at) -
      new Date(event.detail.job.runnable_at)) / 1000;
  console.log(`Job started after waiting ${waitTime} seconds`);

  cloudWatch.putMetricData(
    {
      Namespace: "Buildkite",
      MetricData: [
        {
          MetricName: "Job Agent Wait Time",
          Timestamp: new Date(),
          StorageResolution: 1,
          Unit: "Seconds",
          Value: waitTime,
          Dimensions: [
            { Name: "Pipeline", Value: event.detail.pipeline.slug }
          ]
        }
      ]
    },
    (err, data) => {
      if (err) console.log(err, err.stack);
      callback(null, "Finished");
    }
  );
};
```

##### Amazon EventBridge guidance

Amazon EventBridge's [CI/CD with Buildkite](https://aws.amazon.com/eventbridge/integrations/buildkite/) page on the AWS website provides guidance on integrating Amazon EventBridge with Buildkite to build workflows that evaluate build start events from Buildkite, visualize build events from Buildkite, and interpret build alerts from Buildkite.
These examples make use of [AWS Step Functions](https://aws.amazon.com/step-functions/), [Amazon QuickSight](https://aws.amazon.com/quicksight/), as well as [Amazon SNS](https://aws.amazon.com/sns/) and [AWS Lambda](https://aws.amazon.com/lambda/).

##### Example event payloads

AWS EventBridge has strict limits on the size of the payload as documented in [Amazon EventBridge quotas](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-quota.html). As such, the information included in payloads is restricted to basic information about the event. If you need more information, you can query the Buildkite [APIs](/docs/apis) using the data in the event.

###### Build Created

```json
{
  "version": "0",
  "id": "bb57638d-a095-48da-e507-dc07e4d9a7cf",
  "detail-type": "Build Created",
  "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe",
  "account": "123123123123",
  "time": "2024-08-19T05:15:47Z",
  "region": "us-east-1",
  "resources": [],
  "detail": {
    "version": 1,
    "build": {
      "uuid": "8fcaa7b9-e175-4110-9f48-f79949806a31",
      "graphql_id": "QnVpbGQtLS04ZmNhYTdiOS1lMTc1LTQxMTAtOWY0OC1mNzk5NDk4MDZhMzE=",
      "number": 123456,
      "commit": "5a741616cdf07dc87c5adafe784321eeeb639e33",
      "message": "Merge pull request #456 from my-org/chore/update-deps",
      "branch": "main",
      "state": "scheduled",
      "started_at": null,
      "finished_at": null,
      "source": "webhook",
      "meta_data": {}
    },
    "pipeline": {
      "uuid": "88d73553-5533-4f56-9c16-fb38d7817d8f",
      "graphql_id": "UGlwZWxpbmUtLS04OGQ3MzU1My01NTMzLTRmNTYtOWMxNi1mYjM4ZDc4MTdkOGY=",
      "slug": "my-pipeline",
      "repo": "git@somewhere.com:project.git"
    },
    "organization": {
      "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0",
      "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw",
      "slug": "my-org"
    }
  }
}
```

###### Build Started ```json { "version": "0", "id": "a06fb840-7d19-708c-7f99-319f7abd480f", "detail-type": "Build Started", "source":
"aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T05:15:58Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "build": { "uuid": "8fcaa7b9-e175-4110-9f48-f79949806a31", "graphql_id": "QnVpbGQtLS04ZmNhYTdiOS1lMTc1LTQxMTAtOWY0OC1mNzk5NDk4MDZhMzE=", "number": 123456, "commit": "5a741616cdf07dc87c5adafe784321eeeb639e33", "message": "Merge pull request #456 from my-org/chore/update-deps", "branch": "main", "state": "started", "blocked_state": null, "started_at": "2019-08-11 06:01:16 UTC", "finished_at": null, "source": "webhook" }, "pipeline": { "uuid": "88d73553-5533-4f56-9c16-fb38d7817d8f", "graphql_id": "UGlwZWxpbmUtLS04OGQ3MzU1My01NTMzLTRmNTYtOWMxNi1mYjM4ZDc4MTdkOGY=", "slug": "my-pipeline", "repo": "git@somewhere.com:project.git" }, "organization": { "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" } } } ``` ###### Build Finished ```json { "version": "0", "id": "bd2f894c-6778-b65d-011a-8898a9df8ee6", "detail-type": "Build Finished", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T07:08:54Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "build": { "uuid": "8fcaa7b9-e175-4110-9f48-f79949806a31", "graphql_id": "QnVpbGQtLS04ZmNhYTdiOS1lMTc1LTQxMTAtOWY0OC1mNzk5NDk4MDZhMzE=", "number": 123456, "commit": "5a741616cdf07dc87c5adafe784321eeeb639e33", "message": "Merge pull request #456 from my-org/chore/update-deps", "branch": "main", "state": "passed", "blocked_state": null, "source": "webhook", "started_at": "2019-08-11 06:01:16 UTC", "finished_at": "2019-08-11 06:01:35 UTC", "meta_data": {} }, "pipeline": { "uuid": "88d73553-5533-4f56-9c16-fb38d7817d8f", "graphql_id": "UGlwZWxpbmUtLS04OGQ3MzU1My01NTMzLTRmNTYtOWMxNi1mYjM4ZDc4MTdkOGY=", "slug": "my-pipeline", "repo": "git@somewhere.com:project.git" }, "organization": 
{ "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" } } } ``` ###### Build Failing ```json { "version": "0", "id": "...", "detail-type": "Build Failing", "source": "aws.partner/buildkite.com/...", "account": "...", "time": "2024-09-12T10:20:54Z", "region": "...", "resources": [], "detail": { "version": 1, "build": { "uuid": "...", "graphql_id": "...", "number": 1299, "commit": "...", "message": "...", "branch": "...", "state": "failing", "blocked_state": null, "source": "ui", "started_at": "2024-09-12 10:19:49 UTC", "finished_at": null }, "pipeline": { "uuid": "...", "graphql_id": "...", "slug": "...", "repo": "..." }, "organization": { "uuid": "...", "graphql_id": "...", "slug": "..." } } } ``` ###### Build Blocked ```json { "version": "0", "id": "...", "detail-type": "Build Blocked", "source": "...", "account": "...", "time": "2022-01-30T04:32:06Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "build": { "uuid": "...", "graphql_id": "...", "number": 23, "commit": "...", "message": "Update index.html", "branch": "main", "state": "blocked", "blocked_state": null, "source": "ui", "started_at": "2022-01-30 04:31:59 UTC", "finished_at": "2022-01-30 04:32:06 UTC" }, "pipeline": { "uuid": "...", "graphql_id": "...", "slug": "webhook-test", "repo": "git@github.com:nithyaasworld/add-contact-chip.git" }, "organization": { "uuid": "...", "graphql_id": "...", "slug": "nithya-bk" } } } ``` ###### Job Scheduled ```json { "version": "0", "id": "0d2a372b-df6b-97a9-8c2f-e561ef705bc5", "detail-type": "Job Scheduled", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T07:08:47Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "job": { "uuid": "9e6c3f19-4fdb-4e8e-b925-28cd7504e17f", "graphql_id": "Sm9iLS0tOWU2YzNmMTktNGZkYi00ZThlLWI5MjUtMjhjZDc1MDRlMTdm", "type": "script", "label":
"\:nodejs\: Test", "step_key": "node_test", "command": "yarn test", "agent_query_rules": [ "queue=default" ], "exit_status": null, "signal_reason": null, "passed": false, "soft_failed": false, "state": "assigned", "runnable_at": "2019-08-11 06:01:14 UTC", "started_at": null, "finished_at": null, "unblocked_by": null, "retried_in_job_id": null }, "build": { "uuid": "8fcaa7b9-e175-4110-9f48-f79949806a31", "graphql_id": "QnVpbGQtLS04ZmNhYTdiOS1lMTc1LTQxMTAtOWY0OC1mNzk5NDk4MDZhMzE=", "number": 123456, "commit": "5a741616cdf07dc87c5adafe784321eeeb639e33", "message": "Merge pull request #456 from my-org/chore/update-deps", "branch": "main", "state": "started", "blocked_state": null, "source": "webhook", "started_at": "2024-08-19 07:03:37 UTC", "finished_at": null, "meta_data": {} }, "pipeline": { "uuid": "88d73553-5533-4f56-9c16-fb38d7817d8f", "graphql_id": "UGlwZWxpbmUtLS04OGQ3MzU1My01NTMzLTRmNTYtOWMxNi1mYjM4ZDc4MTdkOGY=", "slug": "my-pipeline", "repo": "git@somewhere.com:project.git" }, "organization": { "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" } } } ``` ###### Job Started ```json { "version": "0", "id": "d9ffc535-30c7-42d2-0ac2-7192d93bf332", "detail-type": "Job Started", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T07:08:58Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "job": { "uuid": "9e6c3f19-4fdb-4e8e-b925-28cd7504e17f", "graphql_id": "Sm9iLS0tOWU2YzNmMTktNGZkYi00ZThlLWI5MjUtMjhjZDc1MDRlMTdm", "type": "script", "label": "\:nodejs\: Test", "step_key": "node_test", "command": "yarn test", "agent_query_rules": [ "queue=default" ], "exit_status": null, "signal_reason": null, "passed": false, "soft_failed": false, "state": "started", "runnable_at": "2019-08-11 06:01:14 UTC", "started_at": "2019-08-11 06:01:16 UTC", "finished_at": null, "unblocked_by": null, 
"retried_in_job_id": null }, "build": { "uuid": "8fcaa7b9-e175-4110-9f48-f79949806a31", "graphql_id": "QnVpbGQtLS04ZmNhYTdiOS1lMTc1LTQxMTAtOWY0OC1mNzk5NDk4MDZhMzE=", "number": 123456, "commit": "5a741616cdf07dc87c5adafe784321eeeb639e33", "message": "Merge pull request #456 from my-org/chore/update-deps", "branch": "main", "state": "started", "blocked_state": null, "source": "webhook", "started_at": "2024-08-19 07:07:44 UTC", "finished_at": null, "meta_data": {} }, "pipeline": { "uuid": "88d73553-5533-4f56-9c16-fb38d7817d8f", "graphql_id": "UGlwZWxpbmUtLS04OGQ3MzU1My01NTMzLTRmNTYtOWMxNi1mYjM4ZDc4MTdkOGY=", "slug": "my-pipeline", "repo": "git@somewhere.com:project.git" }, "organization": { "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" } } } ``` ###### Job Finished These types of events [may contain a `signal_reason` field value](#signal-reason). ```json { "version": "0", "id": "e8e9fdf8-d21b-fa2d-04c4-09465919673e", "detail-type": "Job Finished", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T07:10:05Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "job": { "uuid": "9e6c3f19-4fdb-4e8e-b925-28cd7504e17f", "graphql_id": "Sm9iLS0tOWU2YzNmMTktNGZkYi00ZThlLWI5MjUtMjhjZDc1MDRlMTdm", "type": "script", "label": "\:nodejs\: Test", "step_key": "node_test", "command": "yarn test", "agent_query_rules": [ "queue=default" ], "exit_status": 0, "signal_reason": "see-reason-below", "passed": true, "soft_failed": false, "state": "finished", "runnable_at": "2019-08-11 06:01:14 UTC", "started_at": "2019-08-11 06:01:16 UTC", "finished_at": "2019-08-11 06:01:35 UTC", "unblocked_by": null, "retried_in_job_id": null }, "build": { "uuid": "8fcaa7b9-e175-4110-9f48-f79949806a31", "graphql_id": "QnVpbGQtLS04ZmNhYTdiOS1lMTc1LTQxMTAtOWY0OC1mNzk5NDk4MDZhMzE=", "number": 123456, "commit": 
"5a741616cdf07dc87c5adafe784321eeeb639e33", "message": "Merge pull request #456 from my-org/chore/update-deps", "branch": "main", "state": "started", "source": "webhook", "started_at": "2024-08-19 07:00:14 UTC", "finished_at": null, "meta_data": {} }, "pipeline": { "uuid": "88d73553-5533-4f56-9c16-fb38d7817d8f", "graphql_id": "UGlwZWxpbmUtLS04OGQ3MzU1My01NTMzLTRmNTYtOWMxNi1mYjM4ZDc4MTdkOGY=", "slug": "my-pipeline", "repo": "git@somewhere.com:project.git" }, "organization": { "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" }, "agent": { "uuid": "0191695c-920d-4644-8be9-a674252ac" } } } ``` ###### Signal reason in job finished events The `signal_reason` field of a [job finished](#example-event-payloads-job-finished) event is only present when the `exit_status` field value in the same event is not `0`. The `signal_reason` field's value indicates why a job was stopped, or why it never ran. | Signal Reason | Description | | --- | --- | | `agent_refused` | The agent refused to run the job, as it was not allowed by a [pre-bootstrap hook](/docs/agent/self-hosted/security#restrict-access-by-the-buildkite-agent-controller-strict-checks-using-a-pre-bootstrap-hook) | | `agent_stop` | The agent was stopped while the job was running | | `cancel` | The job was cancelled by a user | | `signature_rejected` | The job was rejected due to a mismatch with the [step's signature](/docs/agent/self-hosted/security/signed-pipelines) | | `process_run_error` | The job failed to start due to an error in the process run. This is usually a bug in the agent; contact support if it happens regularly.
| ###### Job Activated ```json { "version": "0", "id": "e8e9fdf8-d21b-fa2d-04c4-09465919673e", "detail-type": "Job Activated", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T07:10:05Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "job": { "uuid": "9e6c3f19-4fdb-4e8e-b925-28cd7504e17f", "graphql_id": "Sm9iLS0tOWU2YzNmMTktNGZkYi00ZThlLWI5MjUtMjhjZDc1MDRlMTdm", "type": "manual", "label": ":rocket: Deploy", "step_key": "manual_deploy", "command": null, "agent_query_rules": [], "exit_status": null, "passed": false, "soft_failed": false, "state": "finished", "runnable_at": null, "started_at": null, "finished_at": null, "unblocked_by": { "uuid": "c07c69c6-11d2-4375-9148-9d0338b0a836", "graphql_id": "VXNlci0tLWMwN2M2OWM2LTExZDItNDM3NS05MTQ4LTlkMDMzOGIwYTgzNg==", "name": "bell" } }, "build": { "uuid": "8fcaa7b9-e175-4110-9f48-f79949806a31", "graphql_id": "QnVpbGQtLS04ZmNhYTdiOS1lMTc1LTQxMTAtOWY0OC1mNzk5NDk4MDZhMzE=", "number": 123456, "commit": "5a741616cdf07dc87c5adafe784321eeeb639e33", "message": "Merge pull request #456 from my-org/chore/update-deps", "branch": "main", "state": "started", "started_at": "2024-08-19 07:00:14 UTC", "source": "webhook", "meta_data": {} }, "pipeline": { "uuid": "88d73553-5533-4f56-9c16-fb38d7817d8f", "graphql_id": "UGlwZWxpbmUtLS04OGQ3MzU1My01NTMzLTRmNTYtOWMxNi1mYjM4ZDc4MTdkOGY=", "slug": "my-pipeline", "repo": "git@somewhere.com:project.git" }, "organization": { "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" } } } ``` ###### Agent Connected ```json { "version": "0", "id": "2759e87f-4462-9335-4835-4d2a90c6997c", "detail-type": "Agent Connected", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T05:18:17Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "agent": { "uuid": 
"288139c5-728d-4c22-88e3-5a926b6c4a51", "graphql_id": "QWdlbnQtLS0yODgxMzljNS03MjhkLTRjMjItODhlMy01YTkyNmI2YzRhNTE=", "connection_state": "connected", "name": "my-agent-1", "version": "3.13.2", "ip_address": "3.80.193.183", "hostname": "ip-10-0-2-73.ec2.internal", "pid": "18534", "priority": 0, "meta_data": [ "aws:instance-id=i-0ce2c738afbfc6c83" ], "connected_at": "2019-08-10 09:44:40 UTC", "disconnected_at": null, "lost_at": null }, "organization": { "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" }, "token": { "uuid": "df75860c-94f9-4275-91cb-3986590f45b5", "created_at": "2019-08-10 07:44:40 UTC", "description": "Default agent token" } } } ``` ###### Agent Disconnected ```json { "version": "0", "id": "62042586-2760-088d-bc10-63f7ab9bbf8a", "detail-type": "Agent Disconnected", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T05:18:08Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "agent": { "uuid": "288139c5-728d-4c22-88e3-5a926b6c4a51", "graphql_id": "QWdlbnQtLS0yODgxMzljNS03MjhkLTRjMjItODhlMy01YTkyNmI2YzRhNTE=", "connection_state": "disconnected", "name": "my-agent-1", "version": "3.13.2", "ip_address": "3.80.193.183", "hostname": "ip-10-0-2-73.ec2.internal", "pid": "18534", "priority": 0, "meta_data": [ "aws:instance-id=i-0ce2c738afbfc6c83" ], "connected_at": "2019-08-10 09:44:40 UTC", "disconnected_at": "2019-08-10 09:54:40 UTC", "lost_at": null }, "organization": { "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" }, "token": { "uuid": "df75860c-94f9-4275-91cb-3986590f45b5", "created_at": "2019-08-10 07:44:40 UTC", "description": "Default agent token" } } } ``` ###### Agent Lost ```json { "version": "0", "id": "62042586-2760-088d-bc10-63f7ab9bbf8a", "detail-type": "Agent 
Lost", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T05:18:08Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "agent": { "uuid": "288139c5-728d-4c22-88e3-5a926b6c4a51", "graphql_id": "QWdlbnQtLS0yODgxMzljNS03MjhkLTRjMjItODhlMy01YTkyNmI2YzRhNTE=", "connection_state": "lost", "name": "my-agent-1", "version": "3.13.2", "ip_address": "3.80.193.183", "hostname": "ip-10-0-2-73.ec2.internal", "pid": "18534", "priority": 0, "meta_data": [ "aws:instance-id=i-0ce2c738afbfc6c83" ], "connected_at": "2019-08-10 09:44:40 UTC", "disconnected_at": "2019-08-10 09:54:40 UTC", "lost_at": "2019-08-10 09:54:40 UTC" }, "organization": { "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" }, "token": { "uuid": "df75860c-94f9-4275-91cb-3986590f45b5", "created_at": "2019-08-10 07:44:40 UTC", "description": "Default agent token" } } } ``` ###### Agent Stopping ```json { "version": "0", "id": "62042586-2760-088d-bc10-63f7ab9bbf8a", "detail-type": "Agent Stopping", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T05:18:08Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "agent": { "uuid": "288139c5-728d-4c22-88e3-5a926b6c4a51", "graphql_id": "QWdlbnQtLS0yODgxMzljNS03MjhkLTRjMjItODhlMy01YTkyNmI2YzRhNTE=", "connection_state": "stopping", "name": "my-agent-1", "version": "3.13.2", "ip_address": "3.80.193.183", "hostname": "ip-10-0-2-73.ec2.internal", "pid": "18534", "priority": 0, "meta_data": [ "aws:instance-id=i-0ce2c738afbfc6c83" ], "connected_at": "2019-08-10 09:44:40 UTC", "disconnected_at": null, "lost_at": null }, "organization": { "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" }, "token": { "uuid": 
"df75860c-94f9-4275-91cb-3986590f45b5", "created_at": "2019-08-10 07:44:40 UTC", "description": "Default agent token" } } } ``` ###### Agent Stopped ```json { "version": "0", "id": "62042586-2760-088d-bc10-63f7ab9bbf8a", "detail-type": "Agent Stopped", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T05:18:08Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "agent": { "uuid": "288139c5-728d-4c22-88e3-5a926b6c4a51", "graphql_id": "QWdlbnQtLS0yODgxMzljNS03MjhkLTRjMjItODhlMy01YTkyNmI2YzRhNTE=", "connection_state": "stopped", "name": "my-agent-1", "version": "3.13.2", "ip_address": "3.80.193.183", "hostname": "ip-10-0-2-73.ec2.internal", "pid": "18534", "priority": 0, "meta_data": [ "aws:instance-id=i-0ce2c738afbfc6c83" ], "connected_at": "2019-08-10 09:44:40 UTC", "disconnected_at": "2019-08-10 09:54:40 UTC", "lost_at": null }, "organization": { "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" }, "token": { "uuid": "df75860c-94f9-4275-91cb-3986590f45b5", "created_at": "2019-08-10 07:44:40 UTC", "description": "Default agent token" } } } ``` ###### Agent Blocked ```json { "version": "0", "id": "62042586-2760-088d-bc10-63f7ab9bbf8a", "detail-type": "Agent Blocked", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T05:18:08Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "blocked_ip": "204.124.80.36", "cluster_token": { "uuid": "c1164b28-bace-436-ac44-4133e1d18ca5", "description": "Default agent token", "allowed_ip_addresses": "202.144.160.0/24" }, "agent": { "uuid": "0188f51c-7bc8-4b14-a702-002c485ae2dc", "graphql_id": "QWdlbnQtLS0wMTg4ZjUxYy03YmM4LTRiMTQtYTcwMi0wMDJjNDg1YWUyZGM=", "connection_state": "disconnected", "name": "rogue-agent-1", "version": "3.40.0", "token": null, "ip_address": "127.0.0.1",
"hostname": "rogue-agent", "pid": "26089", "priority": 0, "meta_data": ["queue=default"], "connected_at": "2023-06-26 00:31:04 UTC", "disconnected_at": "2023-06-26 00:31:18 UTC", "lost_at": null }, "organization": { "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" } } } ``` ###### Cluster Token Registration Blocked ```json { "version": "0", "id": "62042586-2760-088d-bc10-63f7ab9bbf8a", "detail-type": "Cluster Token Registration Blocked", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2024-08-19T05:18:08Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "blocked_ip": "204.124.80.36", "cluster_token": { "uuid": "c1164b28-bace-436-ac44-4133e1d18ca5", "description": "Default agent token", "allowed_ip_addresses": "202.144.160.0/24" }, "organization": { "uuid": "a98961b7-adc1-41aa-8726-cfb2c46e42e0", "graphql_id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw", "slug": "my-org" } } } ``` ###### Audit Event Logged [Audit log](/docs/platform/audit-log) is only available to Buildkite customers on the [Enterprise](https://buildkite.com/pricing) plan.
```json { "version": "0", "id": "8212ed90-edcc-0936-187c-d466e46575b6", "detail-type": "Audit Event Logged", "source": "aws.partner/buildkite.com/buildkite/0106-187c-12cd4fe", "account": "123123123123", "time": "2023-03-07T23:14:43Z", "region": "us-east-1", "resources": [], "detail": { "version": 1, "organization": { "uuid": "ae85860c-94f9-4275-91cb-3986590f45b5", "graphql_id": "T3JnYWMDE4NjDAtNzk1YS00YWMwLWE112jUtM12jEGMzYTNkZDQx", "slug": "buildkite" }, "event": { "uuid": "da55860c-94f9-4275-91cb-3986590f45b5", "occurred_at": "2023-03-25 23:14:43 UTC", "type": "ORGANIZATION_UPDATED", "data": { "name": "Buildkite" }, "subject_type": "Organization", "subject_uuid": "af7e863c-94f9-4275-91sb-3986590f45b5", "subject_name": "Buildkite", "context": "{\"request_id\":\"pemF0aW9uLStMDE4NjDAtNzk1YS00YW\",\"request_ip\":\"127.0.0.0\",\"session_key\":\"pemF0aW9uLStMDE4NjDAtNzk1YS00YW\",\"session_user_uuid\":\"da55860c-94f9-4275-91cb-3986590f45b5\",\"request_user_agent\":\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36\",\"session_created_at\":\"2023-03-25T23:30:54.559Z\"}" }, "actor": { "name": "Buildkite member", "type": null, "uuid": "df75860c-94f9-4275-91cb-3986590f45b5" } } } ``` --- ### Observability plugins URL: https://buildkite.com/docs/pipelines/integrations/observability/plugins #### Observability plugins The _observability plugins directory_ helps you discover Buildkite plugins for observability. buildkite.com/resources/plugins/category/observability --- ### Security and compliance plugins URL: https://buildkite.com/docs/pipelines/integrations/security-and-compliance/plugins #### Security and compliance plugins The _security & compliance plugins directory_ helps you discover Buildkite plugins focused on security, governance, or compliance. 
buildkite.com/resources/plugins/category/security-compliance --- ### SonarScanner CLI tutorial URL: https://buildkite.com/docs/pipelines/integrations/security-and-compliance/sonar #### SonarScanner CLI integration tutorial The [SonarScanner CLI](https://docs.sonarqube.org/latest/analysis/scan/sonarscanner/) integration enables static code analysis of your projects using SonarQube or SonarCloud directly within your Buildkite pipelines. SonarScanner analyzes your code for bugs, vulnerabilities, and code smells across 25+ programming languages. This integration is designed for self-hosted SonarQube instances, with optional support for SonarCloud as an alternative. This page is a tutorial that covers both self-hosted SonarQube instances and SonarCloud integration. ##### Prerequisites Before configuring SonarScanner in your Buildkite pipeline, ensure you have: 1. **SonarQube account** or **SonarCloud account** 1. **Authentication token** that is: - Generated from your SonarQube/SonarCloud account settings - Stored securely using [Buildkite secrets management](/docs/pipelines/security/secrets/managing) 1. **Java Runtime Environment (JRE) 11 or higher** - Required by SonarScanner CLI to run - Needs to be installed for the [pre-installed binary implementation approach](/docs/pipelines/integrations/security-and-compliance/sonar#implementation-approaches-pre-installed-binary-approach) - Comes pre-installed in most Buildkite agent environments - Not required for [Docker image-based implementation approach](/docs/pipelines/integrations/security-and-compliance/sonar#implementation-approaches-docker-image-approach) (Java is included in the container) 1. **Secrets management solutions** - this tutorial demonstrates [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/) with the [AWS Secrets Manager Buildkite Plugin](https://buildkite.com/resources/plugins/seek-oss/aws-sm-buildkite-plugin/). 
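Before getting into the configuration details, here is a sketch of how these prerequisites fit together in a pipeline step. The plugin version tag, secret name, and server URL below are illustrative placeholders, not values from this tutorial:

```yaml
steps:
  - label: ":sonarqube: Static analysis"
    command: "sonar-scanner"
    env:
      # Non-sensitive configuration can be set directly on the step
      SONAR_HOST_URL: "https://sonarqube.mycompany.com"
    plugins:
      - seek-oss/aws-sm#v2.3.1:
          env:
            # Maps an AWS Secrets Manager secret (name is a placeholder)
            # into SONAR_TOKEN for the duration of the job
            SONAR_TOKEN: "my-org/sonar-scanner-token"
```

This keeps the authentication token out of the pipeline definition entirely; the plugin fetches it at runtime using the agent's IAM role.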
##### Configuration strategy SonarScanner supports two configuration methods: - **Environment variables**: Recommended for runtime settings and sensitive authentication data (tokens, URLs). - **Properties files**: Recommended for project-specific settings. > 📘 Configuration precedence > Environment variables take precedence over the settings in the properties file. This design allows you to keep project configuration in version control while securely managing authentication through Buildkite's secrets management. ##### Environment variables Use environment variables in your pipeline for authentication and server configuration: | Environment Variable | Description | | --- | --- | | `SONAR_TOKEN` | **Required.** Authentication token for your SonarQube/SonarCloud server *SonarQube example:* `sqp_1234567890abcdef1234567890abcdef12345678` *SonarCloud example:* `a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0` **Security:** Store using [secrets management](/docs/pipelines/security/secrets/managing) | | `SONAR_HOST_URL` | **Required.** URL of your SonarQube server *Example:* `https://sonarqube.mycompany.com` or `https://sonar.internal.mycompany.com` | ##### Properties file configuration Create a `sonar-project.properties` file in your repository root to define project-specific settings: ```properties #### SonarQube configuration sonar.host.url=https://sonarqube.mycompany.com sonar.projectKey=sample-project sonar.projectName=Multi-Language Sample Project sonar.projectVersion=1.0 #### Source configuration sonar.sources=src,lib,scripts sonar.sourceEncoding=UTF-8 #### Working directory (adjust based on execution environment; default is root) sonar.working.directory=./.scannerwork #### Exclusions sonar.exclusions=**/.git/**,**/.buildkite/**,**/node_modules/**,**/target/**,**/*.jar,**/*.class #### Language-specific settings (optional) sonar.javascript.lcov.reportPaths=coverage/lcov.info sonar.python.coverage.reportPaths=coverage.xml sonar.java.binaries=target/classes ``` ###### 
Understanding key properties - **sonar.sources**: comma-separated list of directories containing source code, relative to project root. - **sonar.working.directory**: directory where SonarScanner stores temporary analysis files. Execution user must have `write` permissions to this directory. - **sonar.exclusions**: files and directories to exclude from analysis using Ant-style patterns (`**` = any subdirectories, `*` = any characters). - **sonar.tests**: directories containing test files, separate from the main source analysis. ##### Implementation approaches Choose between two deployment approaches based on your infrastructure preferences and agent setup: - [Pre-installed binary](/docs/pipelines/integrations/security-and-compliance/sonar#implementation-approaches-pre-installed-binary-approach) - install SonarScanner directly on your Buildkite agents for faster execution and reduced container overhead. - [Docker image](/docs/pipelines/integrations/security-and-compliance/sonar#implementation-approaches-docker-image-approach) - use the official SonarScanner Docker image for consistent environments and simplified agent setup. ###### Pre-installed binary approach This approach uses the SonarScanner CLI binary installed directly on your Buildkite agents. Below is an example for [Buildkite Elastic CI Stack for AWS](/docs/agent/self-hosted/aws/elastic-ci-stack). ###### Update launch template userdata Add the following installation script to your Auto Scaling Group's launch template userdata: ```bash #!/bin/bash -v #### Download and install SonarScanner CLI echo "Installing SonarScanner CLI..." 
cd /opt
sudo wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-5.0.1.3006-linux.zip
sudo unzip sonar-scanner-cli-5.0.1.3006-linux.zip
sudo ln -s sonar-scanner-5.0.1.3006-linux sonar-scanner

# Set proper permissions for buildkite-agent user
sudo chown -R root:root /opt/sonar-scanner-5.0.1.3006-linux
sudo chmod +x /opt/sonar-scanner/bin/sonar-scanner

# Add to system PATH
echo 'export PATH="$PATH:/opt/sonar-scanner/bin"' | sudo tee /etc/profile.d/sonar-scanner.sh
```

##### Using SonarCloud instead of SonarQube

While this tutorial describes the implementation of self-hosted SonarQube, you can also use [SonarCloud](https://docs.sonarcloud.io/) (the hosted SaaS version) by making a few changes.

###### Environment variables changes

- `SONAR_HOST_URL`: Optional (defaults to `https://sonarcloud.io`)
- `SONAR_TOKEN`: Required (store it using your choice of Buildkite secrets management service)

###### Properties file changes

```conf
# SonarCloud configuration

# Optional (defaults to https://sonarcloud.io)
sonar.host.url=https://sonarcloud.io

# Required
sonar.projectKey=my-org_sample-project

# Required
sonar.organization=my-org
```

###### Token generation changes

Generate your SonarCloud token from **My Account > Security > Generate Tokens** in your SonarCloud dashboard.

##### Additional resources

- [SonarQube documentation](https://docs.sonarqube.org/latest/)
- [SonarCloud documentation](https://docs.sonarcloud.io/)
- [SonarScanner CLI reference](https://docs.sonarqube.org/latest/analysis/scan/sonarscanner/)
- The [Buildkite secrets management](/docs/pipelines/security/secrets/managing) documentation page.

---

### Secrets plugins

URL: https://buildkite.com/docs/pipelines/integrations/secrets/plugins

#### Secrets plugins

The _secrets plugins directory_ helps you discover Buildkite plugins for secrets management.
buildkite.com/resources/plugins/category/secrets

---

### Artifactory

URL: https://buildkite.com/docs/pipelines/integrations/artifacts-and-packages/artifactory

#### Artifactory

There are many ways to use [Artifactory](https://jfrog.com/artifactory/) with Buildkite. This document covers how to configure the Buildkite agent's built-in Artifactory support, as well as how to use Artifactory's package management features in your Buildkite pipelines.

##### Buildkite agent's Artifactory support

The Buildkite agent can upload and download artifacts directly from Artifactory. Export the following environment variables in your [agent environment hook](/docs/agent/hooks) to configure the agent's Artifactory support. See the [Managing pipeline secrets](/docs/pipelines/security/secrets/managing) documentation for how to securely set up these environment variables.

Required environment variables:

| Environment variable | Description |
| --- | --- |
| `BUILDKITE_ARTIFACT_UPLOAD_DESTINATION` | The Artifactory repository and path that will be used to upload and download artifacts, starting with an `rt://` prefix. _Example:_ `"rt://some-repo/build-$BUILDKITE_BUILD_NUMBER/$BUILDKITE_JOB_ID/"` |
| `BUILDKITE_ARTIFACTORY_URL` | Your Artifactory instance URL, including the `/artifactory` suffix. _Example:_ `https://my-artifactory-server/artifactory` |
| `BUILDKITE_ARTIFACTORY_USER` | The username of a user configured in your Artifactory instance. _Example:_ `some-user` |
| `BUILDKITE_ARTIFACTORY_PASSWORD` | The [API Key](https://jfrog.com/help/r/jfrog-platform-administration-documentation/api-key), [Access Token](https://jfrog.com/help/r/jfrog-platform-administration-documentation/access-tokens), or password for your Artifactory user. _Example:_ `AKCp5dKiQ9syTzu9GFhpF3iTzDcFhYAa4...` |

Once the above environment variables are configured, all artifact uploads and downloads will use Artifactory.
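As a concrete sketch, an agent `environment` hook that sets these variables might look like the following. The destination path and server URL are illustrative placeholders, and in practice the password should be fetched from your secrets manager rather than hard-coded:

```shell
#!/bin/bash
# Sketch of an agent "environment" hook configuring Artifactory support.
set -euo pipefail

# Where artifacts are uploaded/downloaded (the rt:// prefix selects Artifactory).
export BUILDKITE_ARTIFACT_UPLOAD_DESTINATION="rt://some-repo/build-${BUILDKITE_BUILD_NUMBER:-0}/${BUILDKITE_JOB_ID:-local}/"
export BUILDKITE_ARTIFACTORY_URL="https://my-artifactory-server/artifactory"
export BUILDKITE_ARTIFACTORY_USER="some-user"
# Placeholder — fetch the real credential from your secrets manager.
export BUILDKITE_ARTIFACTORY_PASSWORD="${ARTIFACTORY_PASSWORD:-example-password}"

echo "Artifacts will go to: ${BUILDKITE_ARTIFACT_UPLOAD_DESTINATION}"
```

Because this runs as an environment hook, every subsequent job command on the agent inherits these variables.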
For example, the following [command step](/docs/pipelines/configure/step-types/command-step) will build a binary and upload it to Artifactory using the `artifact_paths` attribute:

```yml
steps:
  - label: "\:golang\: \:package\:"
    command: "go build -v -o myapp-darwin-amd64"
    artifact_paths: "myapp-darwin-amd64"
    plugins:
      - docker#v5.13.0:
          image: "golang:1.11"
```

> 📘 Retrieving artifacts using the Buildkite agent
> The Buildkite agent uses Buildkite's APIs to fetch the correct URLs to download artifacts from Artifactory. By default, the agent searches for artifacts uploaded within the same build. To download artifacts that were uploaded in a different build using [`buildkite-agent artifact download`](/docs/agent/cli/reference/artifact#downloading-artifacts) or the [artifacts-buildkite-plugin](https://github.com/buildkite-plugins/artifacts-buildkite-plugin), pass the [`BUILDKITE_BUILD_ID`](/docs/agent/cli/reference/artifact#downloading-artifacts-options) of the build in which the artifact was uploaded as the `--build` option's argument.

##### Using Artifactory for package management

To help cache and secure your build dependencies, you can use [Artifactory's package management](https://jfrog.com/help/r/jfrog-artifactory-documentation/package-management) features in your Buildkite pipelines. Each package management platform is configured differently.
For example, to use an [Artifactory NPM repository](https://jfrog.com/help/r/jfrog-artifactory-documentation/npm-repositories) in your build steps, you can configure the following [agent environment hook](/docs/agent/hooks) to instruct the [npm command](https://docs.npmjs.com/cli/npm) to use Artifactory instead of npmjs.com:

```bash
export NPM_CONFIG_REGISTRY="https://${BUILDKITE_ARTIFACTORY_USER}:${BUILDKITE_ARTIFACTORY_PASSWORD}@my-artifactory-server/artifactory/api/npm/npm-local/"
```

You can use this same approach for [Ruby gem repositories](https://jfrog.com/help/r/jfrog-artifactory-documentation/rubygems-repositories), [Docker repositories](https://jfrog.com/help/r/jfrog-artifactory-documentation/docker-repositories), and any other Artifactory-supported package manager.

If you're running build steps in a Docker container, you'll need to ensure the package management configuration is available inside the container. For example, if you're testing Node in a container, you'll need to pass through the above `NPM_CONFIG_REGISTRY` environment variable into the container:

```yml
steps:
  - label: "\:node\: \:hammer\:"
    commands:
      - npm install
      - npm test
    plugins:
      - docker#v5.13.0:
          image: "node:11"
          environment:
            - NPM_CONFIG_REGISTRY
```

---

### Artifact plugins

URL: https://buildkite.com/docs/pipelines/integrations/artifacts-and-packages/artifact-plugins

#### Artifact plugins

The _artifact plugins directory_ helps you discover Buildkite plugins for managing or uploading build artifacts.

buildkite.com/resources/plugins/category/artifacts

---

### Package plugins

URL: https://buildkite.com/docs/pipelines/integrations/artifacts-and-packages/package-plugins

#### Package plugins

The _package plugins directory_ helps you discover Buildkite plugins for publishing or retrieving packages.
buildkite.com/resources/plugins/category/packages

---

### Build status badges

URL: https://buildkite.com/docs/pipelines/integrations/other/build-status-badges

#### Build status badges

Build status badges help to visually show the current build state for a pipeline in places such as readmes and dashboards. You can find your pipeline's status badge on the pipeline's **Settings** > **Build Badges** page.

##### Scoping to a branch

By default, the build status badge shows the last build's status. You can scope it to a specific branch by adding a `?branch` parameter to the URL. For example, to scope your badge to the `main` branch you would add `?branch=main` to the URL.

##### Scoping to a step

If you want to create a badge that represents a single step in the last build, you can scope it to that step by adding a `?step` parameter to the URL. For example, to scope your badge to the `iOS Build` step you would add `?step=iOS%20Build` to the URL. If you have multiple steps that match the given name, the badge will show as passing only if all of the matching steps passed.

##### Styles

You can set the style of the badge by passing in a `style` parameter, for example `?style=square`. The `square` style can also be referred to as `flat-square` to match any [shields.io badges](http://shields.io) you may use.

##### Themes

You can change the colors of the badges by passing in a `theme` parameter to the URL.

##### Custom themes

You can also create your own theme by passing a comma-separated list of color values instead of the theme name.
The format is `passing-bg-color,failing-bg-color,unknown-bg-color[,label-bg-color[,text-color,status-text-color]]`

##### Sample badge URLs

You can use the following URLs for testing your theme:

* /sample.svg?status=passing
* /sample.svg?status=failing
* /sample.svg?status=unknown

##### JSON output

You can get the JSON value of the status badge by specifying `.json` instead of `.svg` in the badge URL, including [branch scoping](#scoping-to-a-branch) and [step scoping](#scoping-to-a-step). For example:

```shell
$ curl https://badge.buildkite.com/3826789cf8890b426057e6fe1c4e683bdf04fa24d498885489.json?branch=main
{"status": "passing"}
```

Possible values for the `"status"` key are:

* `"passing"`
* `"failing"`
* `"unknown"`

##### Contributing

Want to contribute a theme? Send a pull request to [buildkite/build-status-badge-themes](https://github.com/buildkite/build-status-badge-themes).

---

### Docker Hub

URL: https://buildkite.com/docs/pipelines/integrations/other/docker-hub

#### Docker Hub

[Docker Hub](https://hub.docker.com/) is a public registry of Docker images, hosting popular images used in many build pipelines. On 2nd November 2020, Docker Hub introduced [strict rate limits](https://docs.docker.com/docker-hub/download-rate-limit/) on image downloads by unauthenticated clients, and authenticated clients on a free plan. For Buildkite customers using images hosted on Docker Hub, this results in intermittent job failures. How to prevent job failures caused by the Docker Hub rate limits depends on exactly how you are using Docker images. Here are a few solutions for common scenarios.
##### Elastic CI Stack for AWS, authenticating with a paid Docker Hub account

If you're using the [Elastic CI Stack for AWS](https://github.com/buildkite/elastic-ci-stack-for-aws), you can authenticate with Docker Hub by adding the [two key environment variables](/docs/agent/self-hosted/aws/elastic-ci-stack/ec2-linux-and-windows/managing-elastic-ci-stack#docker-registry-support) to your secrets bucket and accessing them from your build. Add your Docker Hub credentials to one of the following two environment hooks, which are downloaded at the start of each job:

- `/env` - An agent environment hook, run for every job the agent runs
- `/{pipeline-slug}/env` - An agent environment hook, specific to a pipeline

Either one of these could be configured with Docker Hub credentials to ensure Docker Hub requests are authenticated:

```bash
#!/bin/bash

DOCKER_LOGIN_USER="the-user-name"
DOCKER_LOGIN_PASSWORD="the-password"
```

##### Other Buildkite agents authenticating with a paid Docker Hub account

All agents check the local file system for [hook scripts to execute during a job](/docs/agent/hooks). A [pre-command hook](/docs/agent/hooks#job-lifecycle-hooks) script like this is one option for authenticating with Docker Hub, and can be configured to fetch credentials from the system you use to store them in:

```bash
#!/bin/bash

echo "~~~ Logging into Docker Hub"
echo "the-password" | docker login --username "the-user-name" --password-stdin
```

##### Mirroring Docker images into Google Artifact Registry

Google Cloud Platform suggests mirroring the public Docker Hub images you need into your own Google Artifact Registry (GAR) repository, and updating your pipelines to pull the mirrored image paths instead of the originals (for example, a GAR image path ending in `/nginx:1.14.2`). Learn more about migrating from Google's deprecated Container Registry to GAR in [Transition from Container Registry](https://cloud.google.com/artifact-registry/docs/transition/transition-from-gcr).
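The registry mirroring approaches described here all follow the same general shape. As a sketch (the registry host below is a placeholder — substitute your own GAR or ECR repository), mirroring a single image comes down to a pull, tag, and push; the commands are echoed rather than executed in this illustration:

```shell
# Sketch: mirror a public Docker Hub image into a private registry.
# REGISTRY is a placeholder for your own registry host/repository.
set -euo pipefail

SRC_IMAGE="nginx:1.14.2"
REGISTRY="registry.example.com/mirror"
DEST_IMAGE="${REGISTRY}/${SRC_IMAGE}"

# The actual mirroring commands (shown, not run, in this sketch):
echo "docker pull ${SRC_IMAGE}"
echo "docker tag ${SRC_IMAGE} ${DEST_IMAGE}"
echo "docker push ${DEST_IMAGE}"
```

A scheduled pipeline running commands like these keeps the mirror fresh, after which build steps reference `${DEST_IMAGE}` instead of the Docker Hub image.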
##### Mirroring Docker images into AWS Elastic Container Registry

AWS doesn't have specific documentation; however, their [advice on dealing with Docker Hub rate limits](https://aws.amazon.com/blogs/containers/advice-for-customers-dealing-with-docker-hub-rate-limits-and-a-coming-soon-announcement/) suggests mirroring public images into AWS Elastic Container Registry (ECR). A similar solution to the one proposed by Google Cloud Platform (GCP):

1. A regular process (for example, nightly) that mirrors the Docker images you need.
1. Updating all pipelines to use the mirrored ECR image instead of the original one.

##### Configuring the docker daemon to use the GCR mirror of popular Docker Hub images

GCP hosts a mirror of popular Docker Hub images in the `mirror.gcr.io` registry. For agents running on GCP, it's possible to configure Docker to try the mirror first, and transparently fall back to the public Docker Hub registry when the mirror doesn't have an image. Google has [documented how to set this up](https://cloud.google.com/container-registry/docs/pulling-cached-images). This will avoid the rate limits for many, but Google doesn't guarantee which images will be on the mirror, so depending on the specific images in use you may continue to hit Docker Hub rate limits.

##### Running a read-through caching registry

There are two popular options for running a private caching Docker registry, where requests for missing images result in the image being fetched from an origin registry (like Docker Hub):

- https://docs.docker.com/registry/recipes/mirror/
- https://github.com/rpardini/docker-registry-proxy

Once the caching registry is operating, pipelines can be updated to use images from that registry (for example, from `nginx:1.14.2` to `example.com/nginx:1.14.2`) and new images will be transparently fetched from Docker Hub.
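The daemon-level mirror configuration mentioned above boils down to a `registry-mirrors` entry in the Docker daemon's config file. A minimal sketch — written to a scratch path here; on a real agent this goes in `/etc/docker/daemon.json` (with `sudo`), followed by a daemon restart:

```shell
# Sketch: point the Docker daemon at the GCR mirror of Docker Hub.
# On a real host: write to /etc/docker/daemon.json, then restart dockerd.
set -euo pipefail

cat > ./daemon.json <<'EOF'
{
  "registry-mirrors": ["https://mirror.gcr.io"]
}
EOF

grep "mirror.gcr.io" ./daemon.json
```

With this in place, `docker pull nginx:1.14.2` tries `mirror.gcr.io` first and falls back to Docker Hub when the mirror misses.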
--- ### Backstage URL: https://buildkite.com/docs/pipelines/integrations/other/backstage #### Backstage [Backstage](https://backstage.io/) is an open source framework for building developer portals that provide a unified front end for many development and CI/CD tools in one place. The [Buildkite plugin for Backstage](https://github.com/buildkite/backstage-plugin) integrates your Buildkite CI/CD pipelines directly into your Backstage service catalog, providing real-time build monitoring and management capabilities. ##### Features The Buildkite plugin for Backstage provides integration capabilities that allow for: - Real-time build status monitoring - view the current status of your builds and build information directly in Backstage. - Comprehensive build log tracking - access detailed build logs with syntax highlighting without leaving Backstage. - Advanced filtering and search capabilities - quickly find specific builds using powerful multiple-criteria filters. - Interactive build management - trigger rebuilds and manage builds from within Backstage or click through to Buildkite. - Customization options - configure the plugin to match your team's workflow with custom styling and time settings. ##### Installation requirements Before installing the Buildkite Backstage plugin, ensure you have: - At least one existing Buildkite pipeline. - An up-to-date Backstage instance. - A [Buildkite API access token](/docs/apis/managing-api-tokens) with the following permissions: * `read_pipelines` * `read_builds` * `read_user` * `write_builds` (for rebuild functionality) ##### Installation Regardless of whether you are installing the Buildkite plugin for Backstage from your project's plugins directory or from an external package, run the following command to install the plugin: ```bash yarn workspace app add @buildkite/backstage-plugin-buildkite ``` ##### Plugin configuration Follow these steps to configure the Buildkite plugin for Backstage after the installation. 
###### Add proxy configuration

Add the proxy configuration to your `app-config.yaml`:

```yaml
proxy:
  endpoints:
    '/buildkite/api':
      target: https://api.buildkite.com/v2
      headers:
        Authorization: Bearer ${BUILDKITE_API_TOKEN}
        Accept: application/json
      allowedHeaders: ['Authorization']

buildkite:
  apiToken: ${BUILDKITE_API_TOKEN}
  organization: ${BUILDKITE_ORGANIZATION}
```

Make sure to set the `BUILDKITE_API_TOKEN` environment variable with your Buildkite API access token.

###### Add the API factory

Add the API factory in `packages/app/src/apis.ts`:

```typescript
import {
  AnyApiFactory,
  configApiRef,
  createApiFactory,
  discoveryApiRef,
  fetchApiRef,
} from '@backstage/core-plugin-api';
import { buildkiteAPIRef, BuildkiteClient } from '@buildkite/backstage-plugin-buildkite';

export const apis: AnyApiFactory[] = [
  createApiFactory({
    api: buildkiteAPIRef,
    deps: { discoveryApi: discoveryApiRef, fetchApi: fetchApiRef, configApi: configApiRef },
    factory: ({ discoveryApi, fetchApi, configApi }) => {
      const buildkiteConfig = configApi.getOptionalConfig('buildkite');
      return new BuildkiteClient({
        discoveryAPI: discoveryApi,
        fetchAPI: fetchApi,
        config: {
          organization: buildkiteConfig?.getOptionalString('organization') ?? 'default-org',
          defaultPageSize: buildkiteConfig?.getOptionalNumber('defaultPageSize') ?? 25,
          apiBaseUrl: buildkiteConfig?.getOptionalString('apiBaseUrl') ?? 'https://api.buildkite.com/v2',
        },
      });
    },
  }),
];
```

###### Add the routes

Add the routes to the Buildkite plugin in `packages/app/src/App.tsx` (the route paths shown here are indicative — adjust them to match your app's routing):

```typescript
import { PipelinePage } from '@buildkite/backstage-plugin-buildkite';

const routes = (
  <FlatRoutes>
    {/* Other routes... */}
    {/* Buildkite Plugin Routes */}
    <Route path="/buildkite" element={<PipelinePage />} />
    <Route path="/buildkite/pipeline/:pipelineSlug" element={<PipelinePage />} />
  </FlatRoutes>
);
```

###### Add the plugin to your Entity Page

Add the Buildkite plugin for Backstage to your [Entity Page](https://backstage.io/docs/features/software-catalog/life-of-an-entity) in Backstage (the `EntitySwitch` wiring below is a typical pattern — adapt it to your entity page layout):

```typescript
import { isBuildkiteAvailable, BuildkiteWrapper } from '@buildkite/backstage-plugin-buildkite';

const cicdContent = (
  <EntitySwitch>
    <EntitySwitch.Case if={isBuildkiteAvailable}>
      <BuildkiteWrapper />
    </EntitySwitch.Case>
  </EntitySwitch>
);

const defaultEntityPage = (
  {/* Other routes...
*/}
  <EntityLayout.Route path="/ci-cd" title="CI/CD">
    {cicdContent}
  </EntityLayout.Route>
);
```

##### Configuration

To link a component in your Backstage catalog to a Buildkite pipeline, add the Buildkite annotation to the component's `catalog-info.yaml`:

```yaml
metadata:
  annotations:
    buildkite.com/pipeline-slug: organization-slug/pipeline-slug
```

The `pipeline-slug` should be in the format `organization-slug/pipeline-slug`, where:

- `organization-slug` is your Buildkite organization's slug.
- `pipeline-slug` is the specific pipeline's slug.

##### Deployment tracking

The Buildkite plugin for Backstage can track deployments across your pipelines. Here are the ways to mark builds as deployments. Choose the one that suits your use case.

###### Using the metadata

You can mark builds for deployment by setting the `environment` metadata field in your Buildkite pipeline build using the following command:

```yaml
# In your pipeline.yml
steps:
  - label: "Deploy to Production"
    command: |
      buildkite-agent meta-data set "environment" "production"
      ./scripts/deploy.sh
```

###### Using the deployment pattern settings

If you would like to track both the application name and environment for your deployments, use the `app:environment:deployed` pattern:

```yaml
# In your pipeline.yml
steps:
  - label: "Deploy Frontend to Staging"
    command: |
      buildkite-agent meta-data set "frontend:staging:deployed" "true"
      ./scripts/deploy-frontend-staging.sh
    branches: "main"
  - label: "Deploy Backend to Staging"
    command: |
      buildkite-agent meta-data set "backend:staging:deployed" "true"
      ./scripts/deploy-backend-staging.sh
    branches: "main"
```

This way, you can track multiple applications deployed to different environments. Backstage will display both the application name and environment in the deployments view.
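To make the `app:environment:deployed` convention concrete, here's a small shell sketch (the key value is hypothetical) showing how such a key decomposes into the application and environment parts that Backstage surfaces in its deployments view:

```shell
# Sketch: split an "app:environment:deployed" meta-data key into its parts.
set -euo pipefail

key="frontend:staging:deployed"

app="${key%%:*}"            # text before the first colon
rest="${key#*:}"
environment="${rest%%:*}"   # text between the first and second colons

echo "${app} deployed to ${environment}"
```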
###### Using environment-specific deployment flags

If you would like to track multiple deployments from a single build as they sequentially progress through your environments (for example, from staging to production), you can use environment-specific flags:

```yaml
# In your pipeline.yml
steps:
  - label: "Deploy to Staging"
    command: |
      buildkite-agent meta-data set "staging_deployment" "true"
      ./scripts/deploy-staging.sh
    branches: "main"
  - block: "Promote to Production?"
    branches: "main"
  - label: "Deploy to Production"
    command: |
      buildkite-agent meta-data set "production_deployment" "true"
      ./scripts/deploy-production.sh
    branches: "main"
```

##### Usage

Once you have configured the Buildkite plugin for Backstage and marked your builds for deployment tracking, you can:

- View build status directly on the component's overview page in Backstage.
- Navigate to the CI/CD tab to see detailed build information.
- Filter builds by status, branch, or other criteria.
- Click on individual builds to view logs and artifacts.
- Trigger new builds directly from Backstage.

See the [Deployment visibility with Backstage](/docs/pipelines/deployments/deployment-visibility-with-backstage) page for in-depth coverage of deployment visibility and tracking, as well as some optimization and troubleshooting tips.

##### Further reading

- [Buildkite plugin for Backstage GitHub repository](https://github.com/buildkite/backstage-plugin?tab=readme-ov-file#buildkite-backstage-plugin)
- [Backstage documentation](https://backstage.io/docs/overview/what-is-backstage)

---

## Test Engine

### Test Engine

URL: https://buildkite.com/docs/test-engine

#### Buildkite Test Engine

Scale out your testing across any framework with _Buildkite Test Engine_. Speed up builds with real-time flaky test management and intelligent test splitting. Drive accountability and get more out of your existing CI compute with performance insights and analytics.
Where [Buildkite Pipelines](/docs/pipelines) helps you automate your CI/CD pipelines, Test Engine helps you track and analyze the steps in these pipelines, by:

- Shipping code to production faster through test optimization.
- Working directly with Buildkite Pipelines, as well as other CI/CD applications.
- Identifying, fixing, and monitoring test performance.
- Tracking, improving, and monitoring test reliability.

##### Get started

Run through the [Getting started](/docs/test-engine/getting-started) tutorial for a step-by-step guide on how to use Buildkite Test Engine.

If you're already familiar with the basics, learn how to run your tests within your development project, and analyze and report on them through a Test Engine [_test suite_](/docs/test-engine/test-suites).

As part of configuring a test suite, you'll need to configure [test collection](/docs/test-engine/test-collection) for your development project. Do this by setting it up with the required Buildkite _test collectors_ for your project's testing frameworks (also known as _test runners_), which send the required test data to Test Engine.

If a Buildkite test collector is not available for one of these test runners, you can use [other test collection](/docs/test-engine/other-collectors) mechanisms instead.

##### Core features

> 📘 Data retention
> The execution data uploaded to Test Engine is stored in S3 and deleted after 120 days.

##### API & references

Learn more about:

- Test Engine's APIs through the [REST API documentation](/docs/apis/rest-api), and related endpoints, starting with [test suites](/docs/apis/rest-api/test-engine/suites).
- The [Buildkite MCP server](/docs/apis/mcp-server) and its Test Engine-specific MCP [tools](/docs/apis/mcp-server/tools#available-mcp-tools-test-engine) and [toolsets](/docs/apis/mcp-server/tools/toolsets#available-toolsets).
- Test Engine's [webhooks](/docs/apis/webhooks/test-engine).
- The Test Engine [glossary](/docs/test-engine/glossary) of important terms.
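For example, the test suites endpoint mentioned above can be queried with a plain `curl` call. A sketch — the organization slug is a placeholder, and the authenticated request itself is left commented out:

```shell
# Sketch: build the Test Engine suites endpoint URL and query it.
set -euo pipefail

ORG_SLUG="my-org"  # placeholder — substitute your organization's slug
URL="https://api.buildkite.com/v2/analytics/organizations/${ORG_SLUG}/suites"

# With a valid API access token (scoped for Test Engine reads):
# curl -H "Authorization: Bearer ${BUILDKITE_API_TOKEN}" "${URL}"
echo "GET ${URL}"
```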
---

### Overview

URL: https://buildkite.com/docs/test-engine

---

### Getting started

URL: https://buildkite.com/docs/test-engine/getting-started

#### Getting started with Test Engine

👋 Welcome to Buildkite Test Engine! You can use Test Engine to help you track and analyze the test steps automated through CI/CD using either [Buildkite Pipelines](/docs/pipelines) or another CI/CD application.

This getting started page is a tutorial that helps you understand Buildkite Test Engine's fundamentals by providing high-level guidance on how to create a new Test Engine [test suite](/docs/test-engine/test-suites), configure a test collector for your development project (to send data to your test suite), and automate your tests with Buildkite Pipelines.

##### Before you start

To complete this tutorial, you'll need:

- A Buildkite account and a basic familiarity with [Buildkite Pipelines](/docs/pipelines). If you don't already have a Buildkite account and want to gain some familiarity with this product, run through the [Getting started with Pipelines](/docs/pipelines/getting-started) tutorial first. Otherwise, you can create a free personal Buildkite account from the [sign-up page](https://buildkite.com/signup).
- [Git](https://git-scm.com/downloads), to work with a locally cloned project you want to implement Test Engine test suites on.

##### Create a test suite

To begin creating a new test suite:

1. Select **Test Suites** in the global navigation to access the **Test Suites** page.
1. Select **New test suite**.
1. On the **Identify, track and fix problematic tests** page, enter an optional **Application name**, for example, `My project`.
1.
Enter a mandatory **Test suite name**, for example, `My project test suite`.
1. Enter the **Default branch name**, which is the default branch that Test Engine shows trends for, and can be changed any time, for example (and usually), `main`.
1. Enter an optional **Suite emoji**, using [emoji syntax](/docs/pipelines/emojis), for example, `\:test_tube\:` for a test tube emoji.
1. Enter an optional **Suite color**, using the `#RRGGBB` syntax. See the [HTML Color Codes](https://htmlcolorcodes.com/) page to help you choose a color. **Note:** At this point, you can select one of the buttons towards the end of this page that matches your project's testing framework (or test runner) for instructions on how to set up [test collection](/docs/test-engine/test-collection) for your project. This opens the relevant documentation page with detailed instructions on how to set up test collection for your test runners, which you'll be doing in the next section. Otherwise, if your project's testing framework is not listed, see [Collecting test data from other test runners](/docs/test-engine/test-collection/other-collectors) for details on how to implement test collection for other testing frameworks. Regardless, keep the relevant documentation page(s) open.
1. Select **Set up suite**.
1. If your Buildkite organization has the [teams feature](/docs/test-engine/permissions) enabled, select the relevant **Teams** to be granted access to this test suite, followed by **Continue**. The new test suite's **Complete test suite setup** page is displayed, requesting you to [configure your test collector within your development project](#configure-your-project-with-its-test-collector).

##### Configure your project with its test collector

Next, configure your project's test runners with its Buildkite test collector:

1. On the **Complete test suite setup** page, under **Set up an integrated test collector**, select the test collector option for your test runners.
1.
Follow the instructions on the right of the page (along with the relevant documentation page you opened above for more detailed information) to implement the relevant test collection capabilities for your project. **Note:** When instructed to add the `BUILDKITE_ANALYTICS_TOKEN` to your CI environment, this is referring to the **Test Suite API Token** at the top of this **Complete test suite setup** page. You'll be using this in the last step of this section, as well as in the section on how to [Automate your test runner with Buildkite Pipelines](#automate-your-test-runner-with-buildkite-pipelines).
1. Add and commit your test collector changes to a new branch of your project. For example:

```bash
git add .
git commit -m "Install and set up test collector for Buildkite Test Engine"
git push
```

1. At this point, you can now run your project's test runner at the command line by passing in `BUILDKITE_ANALYTICS_TOKEN=` as an environment variable to the test runner command. Once the test runner has completed running, check your test suite page to see the results collected by your Test Engine test suite!

##### Automate your test runner with Buildkite Pipelines

You can automate your test suite by automating builds of your project in Buildkite Pipelines. To do this:

1. Follow the [Create your own pipeline](/docs/pipelines/create-your-own) instructions to create a Buildkite pipeline that at least builds your project and runs its test runners.
1. Copy the value of your **Test Suite API Token** (which you can later retrieve through your test suite's **Settings** > **Suite token** page) and configure it as a [Buildkite secret](/docs/pipelines/security/secrets/buildkite-secrets). You can create this secret with a name like `MY_PROJECT_TEST_SUITE_TOKEN`, and reference it in a pipeline using syntax like:

```yaml steps: - label: "Run tests" command: - test-runner-execution-command # Assumes your agent is running the required resources for this.
secrets: BUILDKITE_ANALYTICS_TOKEN: MY_PROJECT_TEST_SUITE_TOKEN ```

Learn more about how to create a Buildkite secret and use it in a Buildkite pipeline in [Create a secret](/docs/pipelines/security/secrets/buildkite-secrets#create-a-secret) and [Use a Buildkite secret in a job](/docs/pipelines/security/secrets/buildkite-secrets#use-a-buildkite-secret-in-a-job), respectively.

##### Next steps

That's it! You've successfully created a test suite, configured your development project with a test collector, executed the project's test runner to send its test data to your test suite, and automated the process in Buildkite Pipelines. 🎉

Learn more about:

- How to work with [test suites](/docs/test-engine/test-suites) in Buildkite Test Engine.
- [CI environment variables](/docs/test-engine/test-collection/ci-environments) that test collectors (and other test collection mechanisms) provide to your Buildkite test suites, when your test runs are automated through CI/CD.
- Other tutorials for specific testing frameworks, such as [Setting up a Ruby project for Test Engine](/docs/test-engine/tutorials/setting-up-a-ruby-project).

---

### Speed up builds with bktec

URL: https://buildkite.com/docs/test-engine/speed-up-builds-with-bktec

#### Speed up builds with the Test Engine Client

The Buildkite Test Engine Client ([bktec](https://github.com/buildkite/test-engine-client)) is a powerful tool that leverages your Test Engine [test suite](/docs/test-engine/test-suites) data to make your Buildkite pipelines run faster and be more reliable.

##### Faster build times with test splitting

Intelligently partition your pipeline with bktec to substantially reduce build times on your critical path to delivery. bktec splits tests automatically based on your historical timing data, and maintains peak speed through continuous optimization and automated re-balancing. The following image from Test Engine's test splitting setup page illustrates how this feature works.
In this example, _without_ bktec, the test suite build time would take as long as it takes for the slowest combination of tests to run on a single partition (Buildkite job), which is 10 minutes. Since the sum of all test executions across all agents is 16 minutes, _with_ test splitting implemented, all four partitions would take approximately 4 minutes to run, such that the overall test suite build time would be approximately 4 minutes, or a 6-minute reduction. ##### Increase build reliability with test states bktec uses [test state](/docs/test-engine/glossary#test-state) data from your test suite to _mute_ or _skip_ problematic tests, which [quarantines](/docs/test-engine/glossary#quarantine) them, so that [flaky tests](/docs/test-engine/glossary#flaky-test) don't affect the result of your build. Quarantining reduces build times by helping builds pass the first time, without having to retry jobs with failing tests. A test marked _skip_ within a test suite won't be executed as part of its test run. A test marked _mute_ within a test suite will still be executed, but the result of the test will be ignored. Buildkite recommends muting tests rather than skipping them, as a muted test will still report its result to Test Engine, so if the test's reliability improves over time, it can be re-enabled. ##### Learn more bktec and its test splitting feature are available to all [Pro and Enterprise](https://buildkite.com/pricing/) plan customers, and test state is available for all Enterprise plan customers. If you are on a legacy plan, please contact sales@buildkite.com to gain access to these features and try them out. Learn more about how to install and configure bktec on the respective [installing](/docs/test-engine/bktec/installing-the-client) and [configuring](/docs/test-engine/bktec/configuring) pages.
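The partition arithmetic above can be illustrated with a simple greedy scheduler: take each test, longest first, and assign it to the currently least-loaded partition. This is only a sketch of the idea — the test names and durations below are invented, and bktec's actual splitting is driven by your suite's real historical timing data with continuous re-balancing.

```python
# Illustrative greedy test splitting (longest-duration-first).
# Durations are invented example values, in minutes.
def split_tests(durations, partitions):
    loads = [0.0] * partitions
    assignment = [[] for _ in range(partitions)]
    for name, mins in sorted(durations.items(), key=lambda kv: -kv[1]):
        i = loads.index(min(loads))  # least-loaded partition so far
        assignment[i].append(name)
        loads[i] += mins
    return assignment, loads

durations = {"auth": 4, "billing": 3, "search": 2, "ui": 2, "api": 2, "jobs": 2, "smoke": 1}
groups, loads = split_tests(durations, 4)
print(loads)  # [4.0, 4.0, 4.0, 4.0]
```

With 16 minutes of total test time spread over 4 partitions, the greedy assignment lands each partition on the ideal 16 / 4 = 4 minutes, mirroring the reduction described above.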
--- ### Reduce flaky tests URL: https://buildkite.com/docs/test-engine/reduce-flaky-tests #### Reduce flaky tests Flaky tests are automated tests that produce inconsistent or unreliable results, despite being run on the same code and environment, and can cause frustration, decrease confidence in testing, and waste time while you investigate whether or not the failure is due to a genuine bug. Test Engine allows you to set up a [workflow](/docs/test-engine/workflows) to manage your flaky tests. You can configure the workflow to automatically detect and label flaky tests, and to notify the relevant people when a new flaky test appears. ##### Detecting flaky tests Test Engine's workflows feature has a number of monitors that can be used to detect flaky tests. Choosing the best flaky test monitor for your test suite depends on the shape of your test data and the configuration of your test pipeline. See [Monitors](/docs/test-engine/workflows/monitors) to learn more about the different monitors. If you're unsure which monitor is best suited for your test suite, we recommend using the transition count monitor. Once you've chosen your monitor, add a Workflow action to label detected tests as "flaky". Having the flaky test labelled as such means that Test Engine can drive other automatic behavior from this, and you can easily surface "flaky" tests in the Test Engine UI and on the Tests tab in the Builds page. By default, Test Engine provides a [saved view](/docs/test-engine/test-suites/saved-views) called "Flaky" which shows you all tests with the flaky label. ##### Quarantining flaky tests Optionally, if your test suite has test state enabled, you can quarantine a flaky test by changing its state to "muted" or "skipped". You can do this manually through the Test Engine interface, using the Test Engine API, or by configuring a Workflow action for this to happen automatically.
Once a test has been quarantined, you can speed up your builds by using bktec to ignore quarantined tests as part of your test suite execution. Learn more about quarantining in [Test state and quarantine](/docs/test-engine/test-suites/test-state-and-quarantine). ##### Remediating flaky tests Once the flaky test has been identified, it needs to be fixed or removed so that it stops impacting everyone. Workflows provides a number of actions to surface the flaky test so that it can be remediated. You can set up a Workflow action to automatically: - send a webhook - post a Slack message - create a Linear issue This allows the relevant team(s) to be notified about any newly identified flaky tests and prioritize a fix. With Workflow actions, you can set up a trigger to automatically remove the "flaky" label (and transition its state back to "enabled", if applicable) once an acceptable level of reliability has been reached for the given test. This means you don't have to do any manual monitoring of flaky test fixes. Learn more about setting up workflows in the [Workflows overview](/docs/test-engine/workflows). --- ### Glossary URL: https://buildkite.com/docs/test-engine/glossary #### Test Engine glossary The following terms describe key concepts to help you use Test Engine. ##### Action An action is part of a [workflow](#workflow) and provides a user-defined operation that is triggered automatically when a workflow [monitor](#monitor) enters the [alarm](#alarm) or [recover](#recover) event state for a [test](#test). Actions can perform operations within the Test Engine system (for example, changing a test's [state](#test-state) or [label](/docs/test-engine/test-suites/labels)), or externally to Test Engine (for example, sending a Slack notification about the test). Learn more about actions in [Alarm and recover actions](/docs/test-engine/workflows/actions).
##### Alarm Alarm, along with [recover](#recover), is one of the two types of events that a workflow [monitor](#monitor) can alert on. Alarm events are reported by the monitor when the alarm conditions are met. Depending on the monitor type, these alarm conditions are configurable. Alarm [actions](#action) are performed when the alarm event is reported by the monitor. Repeated occurrences of the test meeting the alarm conditions do not retrigger alarm actions. ##### Dimensions In the context of Test Engine, dimensions are structured data, consisting of [tags](#tag), which can be used to filter or group (that is, aggregate) test [executions](#execution). Dimensions are added to test executions using the tags feature, which you can learn more about in [Tags](/docs/test-engine/test-suites/tags). ##### Execution An execution is an instance of a single test, which is generated as part of a [run](#run). An execution tracks several aspects of a test, including its _result_ (passed, failed, skipped, other), _duration_ (time), and [dimensions](#dimensions) (that is, [tags](#tag)). ##### Flaky test A flaky test is a [test](#test) that produces inconsistent or unreliable results, despite being run on the same code and environment. Flaky tests are identified via [workflows](/docs/test-engine/workflows). Learn more about flaky tests in [Reduce flaky tests](/docs/test-engine/reduce-flaky-tests). ##### Managed test A managed test refers to any [test](#test) (within all test suites of a Buildkite organization) that can be uniquely identified by its combination of [test suite](#test-suite), [scope](#scope), and name of the test. For example, each of the following three tests is a unique managed test: - Test Suite 1 - here.is.scope.one - Login Test name - Test Suite 1 - here.is.another.scope - Login Test name - Test Suite 2 - here.is.scope.one - Login Test name Test Engine uses managed tests to track key areas of [test runs](#run), and for billing purposes.
Learn more about managed tests in [Usage and billing](/docs/test-engine/usage-and-billing). ##### Monitor A monitor is a part of a [workflow](#workflow) and is used to observe [tests](#test) over time. Monitors help to surface valuable qualitative information about the tests in your [test suite](#test-suite), which can be difficult to discern from raw execution data. Monitors can report on special events (for example, a passed on retry event) or produce scores (such as a transition count score). A single monitor watches over all the tests in your test suite (apart from those excluded by filters) and generates individual [alarm](#alarm) and [recover](#recover) events for each test, which then trigger the associated alarm and recover [actions](#action). Learn more about the different monitor types in [Monitors](/docs/test-engine/workflows/monitors). ##### Quarantine Quarantine is a classification applied to a [test](#test) that, based on the [state of the test](#test-state), changes how Test Engine [executes](#execution) that test as part of a [run](#run). When a test is quarantined, and its test state is flagged as: - **muted**, the test is [executed](#execution) as part of the [run](#run), but its failure does not cause the pipeline build to fail, allowing the test's metadata to still be collected. - **skipped**, the test is not [executed](#execution) as part of the [run](#run), which can allow pipeline builds to execute more rapidly and can reduce costs, but no data is recorded from the test. Learn more about quarantining tests in [Test state and quarantine](/docs/test-engine/test-suites/test-state-and-quarantine). ##### Recover Recover, along with [alarm](#alarm), is one of the two types of events that a workflow [monitor](#monitor) can alert on. Recover events are [hysteretic](https://en.wikipedia.org/wiki/Hysteresis), meaning that the recover event can only be reported on a test that has a previous alarm event.
In such a situation, when the monitor detects that the test has met the recover conditions, a recover event is reported. Depending on the monitor type, these recover conditions can be configurable. Recover [actions](#action) are performed when the recover event is reported by the monitor. Repeated occurrences of the test meeting the recover conditions do not retrigger recover actions. ##### Run A run is the [execution](#execution) of one or more tests in a [test suite](#test-suite). A _run_ is sometimes referred to as a _test run_, bearing in mind that a single test run usually involves the [execution](#execution) of multiple [tests](#test). A Test Engine _run_ is analogous to a Pipeline [_build_](/docs/pipelines/glossary#build). ##### Scope A scope is a mechanism that can be implemented to differentiate between two or more identically named tests. For example, the following hypothetical tests have the same name as they both test the process of a user logging into a product platform. However, one of these applies to this test being done on a mobile device, while the other applies to a desktop web setting. Therefore, a scope can be used to differentiate between these two tests. | Name | Scope | Description | | ----- | ---- | ----------- | | User logs into platform | Mobile | A mobile user logs into the platform | | User logs into platform | Web | A web user logs into the platform | A test's scope is used in determining a [managed test](#managed-test)'s uniqueness. ##### Tag A tag is a `key:value` pair containing two parts: - The tag's `key` is the identifier, which can only exist once on each test, and is case sensitive. - The tag's `value` is the specific data or information associated with the `key`. In Test Engine, tags add [dimensions](#dimensions) to test execution metadata, so that [tests](#test) and their [executions](#execution) can be better filtered, aggregated, and compared in Test Engine visualizations. 
Tagging can be used to observe aggregated data points—for example, to observe aggregated performance across several tests, and (optionally) narrow the dataset further based on specific constraints. Learn more about tags in the [Tags](/docs/test-engine/test-suites/tags) topic. ##### Test A test is an individual piece of code that runs as part of an application's or component's (for example, a library's) building process (which can be automated by [Pipelines](/docs/pipelines)), to ensure that a specific area of the application or component functions as expected. ##### Test collection Test collection is the process of collecting test data from a development project. Test collection may consist of one or more [test collectors](#test-collector) configured within a development project, or make use of other methods based on common standards such as JUnit XML or JSON to collect tests. While a development project's [test runners](#test-runner) (such as RSpec or Jest) are typically configured with their respective test collectors, the JUnit XML or JSON test collection mechanisms can be used to collect test data from multiple test runners. ##### Test collector A test collector is a dedicated open source library (developed by Buildkite) that can be implemented into your development project, to collect test data from a [test runner](#test-runner) within your project. Buildkite offers [a number of test collectors](/docs/test-engine/test-collection) for a range of languages and their test runners. ##### Test runner A test runner, also known as a _test framework_, is typically a code library that can be integrated into a development project to facilitate the implementation of [tests](#test) for that project. ##### Test state A test state is a configurable flag that can be applied to a [test](#test) (typically [flaky tests](#flaky-test)), which [quarantines](#quarantine) the test and affects how the test is [executed](#execution) as part of a [run](#run).
When a test is in a trusted state, its test state is flagged as **enabled**, and the following test state flags are supported when a test is being quarantined: - **muted**, the test is [executed](#execution) as part of the [run](#run), but its failure does not cause the pipeline build to fail, allowing the test's metadata to still be collected. - **skipped**, the test is not [executed](#execution) as part of the [run](#run), which can allow pipeline builds to execute more rapidly and can reduce costs, but no data is recorded from the test. Learn more about test states in [Test state and quarantine](/docs/test-engine/test-suites/test-state-and-quarantine). ##### Test suite A test suite is a collection of [tests](#test), which is managed through Buildkite Test Engine. A _test suite_ is sometimes abbreviated to _suite_. In a development project configured with one or more [test runners](#test-runner), it is typical to configure a separate test suite for each of the project's test runners. ##### Workflow A workflow defines a process that's composed of a single [monitor](#monitor) and any number of [actions](#action). A workflow enables a user to define a custom identification and management system for tests of interest in their suite. Flaky test management is a common use case for workflows. Learn more about workflows in the [Workflows overview](/docs/test-engine/workflows). --- ### Overview URL: https://buildkite.com/docs/test-engine/test-suites #### Test suites overview In Test Engine, a _test suite_ (or _suite_) is a collection of [tests](/docs/test-engine/glossary#test). A suite has a _run_, which is the execution of tests in a suite. A pipeline's build may create one or more of these runs. Many organizations set up one suite per test framework, for example one suite for RSpec, and another suite for Jest. Others use a common standard, such as JUnit XML, to combine tests from multiple frameworks to set up custom backend and frontend suites.
Each suite inside Test Engine has a unique API token that you can use to route test information to the correct suite. Pipelines and test suites do not need to have a one-to-one relationship. > 📘 Test suite API token versus user API access token > The test suite API token is only for uploading test results to the test suite. To perform other operations using the [Test Engine REST API](/docs/apis/rest-api/test-engine/suites) (such as listing [suites](/docs/apis/rest-api/test-engine/suites#list-all-suites), [runs](/docs/apis/rest-api/test-engine/runs#list-all-runs), or [tests](/docs/apis/rest-api/test-engine/tests#list-tests)), use a [Buildkite API access token](/docs/apis/managing-api-tokens) for your user account with the appropriate scopes. When [creating a test suite](/docs/test-engine/getting-started#create-a-test-suite) for your development project, you'll need to have configured the appropriate _test collectors_ for your project's test runners before your test suite can fully function and start collecting test data. Learn more about how to do this from the [Test collection](/docs/test-engine/test-collection) section of these docs. To delete a suite, or regenerate its API token, go to suite settings. ##### Tests tab on build pages Test Engine information is available on your test pipeline's build pages, in the [new build view](/docs/pipelines/build-page). You can view the failing tests in a given build, and filter the test executions to analyze and surface trends about your test suite. You can filter by result, state, owner, label, suite, and tag. Filtering by suite is useful when a build has tests from multiple suites (for example, RSpec and Jest), allowing you to focus on the results from specific suites. You can also select **Display** to change the columns displayed on the **Tests** tab, so that other types of aggregate data (for example, average duration) appear. By default, the executions are grouped by test so that retried tests appear together. 
When tests are grouped, select **Expand failures** to expand all failure reasons at once, or select **Collapse failures** to collapse them again. The suite filter on the **Tests** tab supports two operators: - `is`: includes only the selected suite in the results - `is not`: excludes the selected suite from the results You can combine multiple suite filters to include or exclude several suites at once. You can save your filter and display column selections as [saved views](/docs/test-engine/test-suites/saved-views) directly from the **Tests** tab. ##### Parallelized builds In CI/CD, a build's tests can be made to run in parallel using features of your own CI/CD pipeline or workflow tool. Parallelized pipeline/workflow builds typically run and complete faster than builds which are not parallelized. In Buildkite Pipelines, you can run tests in parallel when they are configured as [parallel jobs](/docs/pipelines/tutorials/parallel-builds#parallel-jobs). > 📘 > When tests are run in parallel across multiple agents, they can be grouped into the same run by defining the same `run_env[key]` environment variable. Learn more about this environment variable and others in [CI environments](/docs/test-engine/test-collection/ci-environments). > The best way to coordinate the distribution of tests in a parallelized build is by implementing [test splitting](/docs/test-engine/test-splitting). ##### View by branch All test suites have a _default branch_ so you can track trends for your most important branch, and compare it to results across _all branches_. Organizations typically choose their main production branch as their default, although this is not required. All Test Engine views are filtered automatically to the default branch. In addition to the default branch, you can add any number of additional _stored branches_. Stored branches accept prefix wildcard operators, and are useful for merge queues and other similar naming conventions. 
You can filter Test Engine views by a stored branch, or any branch, by using the branch filter. To configure your branches, go to suite settings. In most cases, branch name is tracked automatically as part of the [core tags](/docs/test-engine/test-suites/tags#core-tags) Test Engine ingests on your behalf. ##### Tracking reliability Test Engine calculates the reliability of both your entire test suite and individual tests as a measure of pass/fail rate over time. _Reliability_ is defined as a percentage calculated by: - Test suite reliability = `passed_runs / (passed_runs + failed_runs) * 100` - Individual test reliability = `passed_test_executions / (passed_test_executions + failed_test_executions) * 100` Other test execution results such as `unknown` and `skipped` are ignored in the test reliability calculation. In Test Engine, a run is marked as `failed` as soon as a test execution fails, regardless of whether it passes on a retry. This helps surface unreliable tests. You can therefore have a situation where a build eventually passes on retry in Pipelines, while the related run is marked as `failed` in Test Engine. ##### Trends and analysis Once your test suite is set up, you'll have many types of information automatically calculated and displayed to help you surface and investigate problems in your test suite. The Summary and Test pages can be filtered by branch, result (e.g. pass, fail), state (e.g. enabled, disabled), owner (e.g. core-team, platform-team), label (e.g. flaky, slow, feature-test) and [tag](/docs/test-engine/test-suites/tags). This allows greater flexibility and deeper analysis into the performance of your test suite. Select any individual test execution to see more trend and deep-dive information. You can also annotate span information to help investigate problems, and see detailed log information inside Test Engine for any failed test or run.
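The reliability formulas described above amount to a few lines of code. The helper below is a hypothetical illustration, not a Buildkite API — it applies the same definition: count only `passed` and `failed` results, and ignore everything else.

```python
# Reliability = passed / (passed + failed) * 100,
# with results such as "skipped" and "unknown" ignored.
def reliability(results):
    passed = results.count("passed")
    failed = results.count("failed")
    if passed + failed == 0:
        return None  # no countable results
    return passed / (passed + failed) * 100

executions = ["passed", "passed", "failed", "skipped", "passed", "unknown"]
print(reliability(executions))  # 75.0
```

Note that the one `failed` execution keeps the result at 75% even if a later retry of that test passes, which is how a retried-then-green build can still show a `failed` run in Test Engine.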
--- ### Test state and quarantine URL: https://buildkite.com/docs/test-engine/test-suites/test-state-and-quarantine #### Test state and quarantine Test Engine's **Test state** management feature provides the [test state](/docs/test-engine/glossary#test-state) flags of **enabled**, **muted**, and **skipped**. [_Quarantine_](/docs/test-engine/glossary#quarantine) refers to the action of moving a test from a trusted state (**enabled**) to one of the untrusted states (**muted** or **skipped**). Tests can be quarantined [automatically](#automatic-quarantine) or [manually](#manual-quarantine). Quarantining [flaky tests](/docs/test-engine/reduce-flaky-tests) and then using [bktec](/docs/test-engine/speed-up-builds-with-bktec#increase-build-reliability-with-test-states) on a pipeline's builds allows the pipeline to be built more rapidly, and run with a higher success rate. > 📘 Pro and Enterprise plan features > The _test state_ management and _automatic quarantining_ features are only available to customers on the [Pro or Enterprise](https://buildkite.com/pricing) plan. ##### Lifecycle states Users with the [**Full Access** permission to a test suite](/docs/test-engine/permissions#manage-teams-and-permissions-test-suite-level-permissions) can enable **Test state** in a test suite's **Settings** by selecting the appropriate test states that quarantining can be based upon. ###### Mute (recommended) Muted tests will still execute as jobs in your pipeline builds, but any failed results of these test jobs are handled as a _soft fail_. A soft fail result does not affect the result of your pipeline build, and allows the pipeline build to pass. However, metadata about the test is still collected by Test Engine. ###### Skip Skipped tests are not run during your pipeline builds. Since these tests are not executed, no data is recorded from them by Test Engine.
To collect metadata about your [flaky tests](/docs/test-engine/reduce-flaky-tests), it is recommended that you only use the **Skip** option when you have a scheduled pipeline that is running skipped tests. ##### Automatic quarantine You can automatically quarantine tests using [workflows](/docs/test-engine/reduce-flaky-tests#quarantining-flaky-tests). To do this, use the [workflow change state action](/docs/test-engine/workflows/actions#change-state), to automatically transition tests into different states. Using [labelling](/docs/test-engine/test-suites/labels) on a test when it is quarantined and removing the label when the test is released from quarantine is also recommended. Learn more about automatic labelling in [workflow label actions](/docs/test-engine/workflows/actions#add-or-remove-label). ##### Manual quarantine You can manually quarantine flaky tests via the dropdown menu within the test's page itself or the test digest. This helps unblock builds affected by unreliable tests in real time. Manually quarantining a test either mutes or skips that test when the pipeline is built on any branch. ##### Configuring builds with quarantine ###### bktec The easiest way to respect test states in your builds is to run the [Buildkite Test Engine Client (bktec)](https://github.com/buildkite/test-engine-client) command in your pipelines. The `bktec` command automatically excludes quarantined tests from your test runs, preventing [flaky tests](/docs/test-engine/reduce-flaky-tests) from causing build failures, leading to faster, more reliable builds, and less need for retries. Currently, bktec supports the following test frameworks for: - muting tests—RSpec, Jest, and Playwright - skipping tests—RSpec only When using a supported test framework, bktec automatically handles quarantined tests, along with providing the benefits of efficient [test splitting](/docs/test-engine/test-splitting) and retry support. 
```yaml - name: "Run tests, excluding quarantined ones, with bktec" command: bktec parallelism: 10 env: BUILDKITE_TEST_ENGINE_TEST_RUNNER: rspec|jest|playwright ``` ###### REST API If you are not using bktec, you can [query the REST API's `tests` endpoint](/docs/apis/rest-api/test-engine/quarantine) for your test suite to retrieve a list of tests that are currently skipped or muted and configure your build scripts accordingly. --- ### Tags URL: https://buildkite.com/docs/test-engine/test-suites/tags #### Tags Tags is a Test Engine feature that adds dimensions to test execution metadata so tests and executions can be better filtered, aggregated, and compared in Test Engine visualizations. Tagging can be used to observe aggregated data points—for example, to observe aggregated performance across several tests, and (optionally) narrow the dataset further based on specific constraints. Tags are `key:value` pairs containing two parts: - The tag's `key` is the identifier, which can only exist once on each test, and is case sensitive. - The tag's `value` is the specific data or information associated with the `key`. ##### Core tags The following core tags are vital to helping you understand and improve the performance of your test suite. These tags are included in the [managed tests](/docs/test-engine/usage-and-billing#managed-tests) price. Where possible, Test Engine will automatically ingest this data on your behalf.

| Tag key | Use case |
| ------- | -------- |
| `build.id` | Filtering and aggregating based on the build identifier. |
| `build.job_id` | Filtering and aggregating based on the job identifier. |
| `build.step_id` | Filtering and aggregating based on the step identifier. |
| `cloud.provider` | Filtering and aggregating based on your cloud provider to compare cloud provider performance and reliability in your test suite. _Example:_ `aws` vs `gcp`. |
| `cloud.region` | Filtering and aggregating based on your cloud region to compare region performance and reliability in your test suite. _Example:_ `us-east-1` vs `us-east-2`. |
| `code.file.path` | Filtering and aggregating based on the file path or subsection of the file path. |
| `collector.name` | Filtering and aggregating based on the Test Engine collector you are using. Useful when onboarding or updating your Test Engine collector. |
| `collector.version` | Filtering and aggregating based on the Test Engine collector version you are using. Useful when onboarding or updating your Test Engine collector. |
| `host.arch` | Filtering and aggregating based on the architecture to compare architecture performance and reliability in your test suite. _Example:_ `arm64` vs `x86_64`. |
| `host.type` | Filtering and aggregating based on the instance type to compare instance performance and reliability in your test suite. _Example:_ `m4.large` vs `m5.large`. |
| `language.name` | Filtering and aggregating based on the programming language to compare language performance and reliability in your test suite. _Example:_ `python` vs `javascript`. |
| `language.version` | Filtering and aggregating based on the language version to compare version performance and reliability in your test suite. _Example:_ `3.0.2` vs `2.5.3`. |
| `scm.branch` | Filtering and aggregating based on source code branch to compare branch performance and reliability. For example, you might be rolling out a new dependency and testing it in a branch. |
| `scm.commit_sha` | Filtering and aggregating based on the commit SHA to compare specific commit performance and reliability. |
| `test.framework.name` | Filtering and aggregating based on testing framework to compare performance and reliability. |
| `test.framework.version` | Filtering and aggregating based on testing framework version to compare performance and reliability. |

##### Custom tags In addition to the [core tags](#core-tags), you can tag executions with your own custom tags. Test Engine customers can tag executions with an additional 10 custom tags beyond the included core tags.
###### Defining tags Test Engine has the following tagging requirements: - Up to 10 tags may be specified at the upload level (applying to all executions), per upload. - Up to 10 tags may be specified on each execution. ###### Tag keys - Must not be blank. - Must begin with a letter, and may contain letters, numbers, underscores, hyphens and periods. - Must be less than 64 bytes of UTF-8 text. - Must not be a dot-separated prefix of another key. If a key like `service.instance.id` exists, you cannot create keys for its prefixes such as `service.instance` or `service`. ###### Tag values - Must not be blank. - Must be less than 128 bytes of UTF-8 text. ###### Tagging methods Tags may be assigned using the following collection methods: - [Java (using JUnit XML import)](/docs/test-engine/test-collection/importing-junit-xml) - [JavaScript (Jest, Cypress, Playwright, Mocha, Jasmine, Vitest)](/docs/test-engine/test-collection/javascript-collectors#upload-custom-tags-for-test-executions) - [Python (PyTest)](/docs/test-engine/test-collection/python-collectors#pytest-collector-upload-custom-tags-for-test-executions) - [Ruby (RSpec, minitest)](/docs/test-engine/test-collection/ruby-collectors#upload-custom-tags-for-test-executions) - [Importing JSON](/docs/test-engine/test-collection/importing-json#json-test-results-data-reference-execution-level-custom-tags) ##### Usage After you have assigned tags at the test collection level, start using them to filter and group your test results. Tags are used in the following areas of the Buildkite Platform. ###### Test execution drawer On the test page, you can open the execution drawer by selecting an execution. This presents all the tags which have been applied to the test execution. ###### Group by tag Grouping by tag on the test page breaks down the test reliability and duration (p50, p95), so that you can compare performance across the tag values.
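The tag key and value requirements listed under "Defining tags" above can be expressed as a small validation helper. This is a hypothetical sketch for local sanity-checking of your own tags before upload, not part of any Buildkite library:

```python
import re

# Keys must begin with a letter and may contain letters, numbers,
# underscores, hyphens, and periods (per the rules above).
KEY_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_.-]*$")

def valid_tag(key, value, existing_keys=()):
    # Key: non-blank, allowed characters, under 64 bytes of UTF-8
    if not KEY_RE.match(key) or len(key.encode("utf-8")) >= 64:
        return False
    # Value: non-blank, under 128 bytes of UTF-8
    if not value.strip() or len(value.encode("utf-8")) >= 128:
        return False
    # A key must not be a dot-separated prefix of another key
    if any(other.startswith(key + ".") for other in existing_keys):
        return False
    return True

print(valid_tag("host.type", "m5.large"))  # True
print(valid_tag("service", "api", existing_keys=["service.instance.id"]))  # False
```

The `existing_keys` check models the dot-separated prefix rule: once `service.instance.id` exists, `service` can no longer be created as a key.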
###### Filter by tag Filtering by tag on the test page constrains the view to all executions of the test that match the filter conditions. Filtering by tag on the test index page constrains the view to all tests that had executions matching the conditions of the filter—for example, all tests that ran on the `t3.large` instance type. You can filter by tag using the **Filter** dropdown. ###### Tests tab To filter tests by tags in [Pipelines](/docs/pipelines), select the **Tests** tab in either the job or build interface and apply your desired filters. --- ### Test ownership URL: https://buildkite.com/docs/test-engine/test-suites/test-ownership #### Test ownership Test ownership is critical in adopting a healthy testing culture at your organization. Defining one or more teams as test owners allows these teams to become accountable for maintaining tests within your test suite, ensuring it is fast and reliable, and providing confidence when you deploy your code. Test ownership can be assigned to [teams](/docs/test-engine/permissions#manage-teams-and-permissions), and is managed through team assignments in a TESTOWNERS file. ##### TESTOWNERS file format A TESTOWNERS file uses Buildkite team slugs instead of user names. Your team slug will be your team name in [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case). You can view your teams in your organization settings, or fetch them from our API: - [List teams from REST API](/docs/apis/rest-api/teams#list-teams) - [List teams from GraphQL API](/docs/apis/graphql/schemas/object/team) ```bash #### Example team name to slug Pipelines => pipelines Test Engine => test-engine 📦 Package Registries => package-registries ``` The following example TESTOWNERS file, which you can copy as a starting point, explains the syntax of this file and how it works: ```bash #### This is a comment. #### Only Buildkite teams can be specified as test owners. #### Teams must have explicit access to the suite the test belongs to.
# Each line is a file pattern followed by one or more team slugs. # The following example teams will be the test owners for all test # location metadata (that is, test files) from your pipeline builds # in this repository. While both these example teams own these # tests, the first team specified in this file pattern is the # default owner for all test files from your pipeline builds and # will be notified about issues with their corresponding tests. # Other teams specified from the second position onwards will also # be identified as owners and appear in reports about the # reliability of these tests. However, unlike the default team # owner, these additional teams will not be notified about test # issues. Any file pattern matches defined later in this file take # precedence and override any file patterns defined further up # this file. * team-slug-1 team-slug-2 # In this example, any test file ending with `_spec.rb` will be # assigned to the `test-engine` team and not `team-slug-1`. *_spec.rb test-engine # This is an inline comment. # In this example, the `pipelines` team owns all `.rb` test files. *.rb pipelines # In this example, the `packages` team owns any test files in the # `spec/packages/` directory at the root of the test location and # in any of its subdirectories. /spec/packages/ packages # In this example, the `spec/features/*` pattern matches test files # like `spec/features/application_spec.rb`, but not any test files # nested in any subdirectories of `spec/features`, such as # `spec/features/pipelines/application_spec.rb`. spec/features/* test-engine # In this example, the `pipelines` team owns any test file in any # `pipelines` directory, anywhere within the test location.
pipelines/ pipelines # In this example, the `test-engine` team owns any test files # within a `test-engine` directory such as `/models/test-engine`, # `/features/test-engine`, and `/models/organizations/test-engine`. # Any test files directly within the `/test-engine` directory itself # will also belong to the `test-engine` team. **/test-engine test-engine # In this example, the `pipelines` team owns any test files in the # `/spec` directory at the root of the test location. However, the # test files contained within the `/spec/models/packages` # subdirectory are owned by the `packages` team. /spec/ pipelines /spec/models/packages packages ``` ###### Permission requirements The teams listed in your TESTOWNERS file must have [permission to access the test suite](/docs/test-engine/permissions#manage-teams-and-permissions-test-suite-level-permissions) _before_ ownership records are created. ##### Setting test ownership You can upload a TESTOWNERS file via this API endpoint: ```bash curl --location 'https://analytics-api.buildkite.com/v1/test-ownerships' \ --header 'Authorization: Bearer <your-suite-API-token>' \ -F 'file=@<path-to-your-TESTOWNERS-file>' ``` You can upload the same TESTOWNERS file to multiple test suites. However, a test suite can only have one active TESTOWNERS file. > 📘 > You can also create a new pipeline to automatically upload your TESTOWNERS file when changes are detected. ##### Viewing test ownership You can view the current test ownership rules for a test suite in your **Test Suite** > **Settings** > **Test ownership** page. ##### Troubleshooting A TESTOWNERS file [follows the same rules as a `.gitignore` or `CODEOWNERS` file](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners#example-of-a-codeowners-file), with the exception of the `.gitignore` rule that allows a file path to have no corresponding team.
```bash # In a regular `.gitignore` or `CODEOWNERS` file, the following # block would set the `test-engine` team as the owner of any # file in the `/specs` directory at the root of your test location # except for the `/specs/features` subdirectory, as its owners are # left empty. # This functionality is not supported in a Buildkite `TESTOWNERS` # file, where `/specs/features` would also be owned by the # `test-engine` team. /specs/ test-engine /specs/features ``` --- ### Labels URL: https://buildkite.com/docs/test-engine/test-suites/labels #### Labels Labels allow you to: - Organize tests to be more meaningful to your team and organization. - Categorize tests, so they can be used to filter tests within Test Engine. Labels are created at the [test suite](/docs/test-engine/glossary#test-suite) level. Therefore, labels belonging to one test suite will not impact the labels associated with other test suites. ##### Label a test Labels may be applied to or removed from tests: - Manually through the [Buildkite interface](#label-a-test-using-the-buildkite-interface). - Automatically through the [workflow](#label-a-test-using-workflows) or [test execution tags](#label-a-test-using-execution-tags) features. - Through the [REST API](#label-a-test-using-the-rest-api). ###### Using the Buildkite interface From the details page of a test (accessible through its test suite's **Tests** page), select **Add labels** and either: - Select a label from the list of existing labels used in the test's suite. - Specify a **New label**, select its **Label color**, and select **Save**. > 📘 > To remove a label from a test, select **Add labels** from the test's details page, and from its dropdown, clear the checkbox next to the label. ###### Using workflows Using [workflows](/docs/test-engine/workflows), you can automate the addition and removal of labels when a workflow [monitor](/docs/test-engine/workflows/monitors) condition is met.
###### Using execution tags A test execution [tag](/docs/test-engine/glossary#tag) value can be applied as a label on a test. When Test Engine detects a change to such a tag's value, the corresponding label on the test is updated to the new value. Test Engine only labels tests from execution tag values when their test suite is configured to do so in the suite's **Settings** > **Test labels** (tab) page. Learn more about test execution tagging in [Tags](/docs/test-engine/test-suites/tags). > 📘 > A label added to a test through a test execution tag is automatically removed when the tag is removed from the test execution. > Labels are only applied from execution tags for tests run on the configured default branch. Tests running on other branches (such as feature branches) will not have labels automatically applied from their execution tags. ###### Using the REST API You can label tests using the [REST API](/docs/apis/rest-api) with the [Tests API](/docs/apis/rest-api/test-engine/tests) endpoint. Learn more about this in [Add/remove labels from a test](/docs/apis/rest-api/test-engine/tests#add-or-remove-labels-from-a-test). ##### Filter tests You can filter tests using labels through the [Buildkite interface](#filter-tests-using-the-buildkite-interface) or [REST API](#filter-tests-using-the-rest-api). ###### Using the Buildkite interface On the test suite's **Tests** page, or a build page's **Tests** tab, either: - Select **Filter**, then **Label**, and select or search for your label. - For any existing test with at least one label applied to it, select the test's label > **Filter by** from its dropdown to filter the test suite for all tests with that label applied to them. ###### Using the REST API You can fetch all tests with a label using the [REST API](/docs/apis/rest-api) with the [Tests API](/docs/apis/rest-api/test-engine/tests) endpoint.
Learn more about this in [List tests](/docs/apis/rest-api/test-engine/tests#list-tests), where you can specify the optional `label` query string parameter. --- ### Saved views URL: https://buildkite.com/docs/test-engine/test-suites/saved-views #### Saved views Saved views let you create, name, and easily access custom test views within Buildkite Test Engine. This is useful for teams who frequently search using the same set of tags or labels. You can create saved views from three locations: - The test suite's **Summary** page in Test Engine - The test suite's **Tests** page in Test Engine - The **Tests** tab on a build page in [Buildkite Pipelines](/docs/pipelines) Saved views created from any of these locations are shared across your Buildkite organization or test suite and are visible to all users. ##### Creating views from the Summary or Tests page 1. On the test suite's **Summary** or **Tests** page, select **Filter**, then select as many filter values as you would like. 1. Select **Display**, then select the columns you would like to appear in your view. 1. Select **Save** in the filter bar. 1. Select one of the following options: * **Save as default view:** Available on the **Summary** page only. Sets the view as the default for the test suite. * **Create a new view:** Available on both the **Summary** and **Tests** pages. Give your view a name, then select **Save view**. ##### Creating views from the build Tests tab You can also create a default saved view directly from the **Tests** tab on a build page. > 📘 > This feature is only available to users who are Buildkite organization administrators. 1. Navigate to a build page and select the **Tests** tab. 1. Select **Filter**, then select your desired filter values. 1. Select **Display**, then select the columns you would like to appear in your view. 1. Select **Save** in the filter bar. 1. Select **Save as default view**.
This view will now be the default **Tests** tab view for all builds in your Buildkite organization. ##### Deleting views Saved views can be deleted from the test suite's settings: 1. Navigate to the test suite's **Settings** > **Saved Views**. 1. On the view to be deleted, select its **Delete** button. --- ### Public test suites URL: https://buildkite.com/docs/test-engine/test-suites/public #### Public test suites If you're working on an open-source project or just want to share your test suite analytics with the world, you can make your test suite public. Making a suite public gives read-only access to all users. This means users who are unauthenticated or belong to another organization can view the following: - All test suite data - Run results - Test analytics - Test executions - Test execution data. For those using Buildkite's Ruby test collector, this includes SQL query data, HTTP request paths, and the execution timeline. - Environment variables recorded for each run: * `commit_sha` * `branch` * `message` * `url` * `number` * `job_id` - Tags - Workflows Before making a suite public, you should verify that runs do not expose sensitive information in their logs or environment variables. This applies to both new and historical runs. ##### Make a test suite public using the UI You can make a test suite public from the suite's **Settings**. By default, only organization administrators have permission to make a test suite public. Admins can extend this permission to all organization members from the **Security** tab in the organization settings.
--- ### Overview URL: https://buildkite.com/docs/test-engine/test-collection #### Test collection overview To allow your [test suite](/docs/test-engine/test-suites) to collect test data from your development project, you need to configure a Buildkite _test collector_ for your project's test runners (for example, RSpec or minitest for Ruby, or Jest or Cypress for JavaScript), or some other mechanism for collecting data from your project's test runners to send to Test Engine. A test collector is a library or plugin that runs inside your test runner to gather the required test data to send back to Buildkite for Test Engine to interpret, analyze, and report on. Test collectors are available for the following languages and their test runners: - [Android](/docs/test-engine/test-collection/android-collectors) - [Elixir (ExUnit)](/docs/test-engine/test-collection/elixir-collectors) - [Go (gotestsum)](/docs/test-engine/test-collection/golang-collectors) - [Java (via JUnit XML import)](/docs/test-engine/test-collection/importing-junit-xml) - [JavaScript (Jest, Cypress, Playwright, Mocha, Jasmine, Vitest)](/docs/test-engine/test-collection/javascript-collectors) - [.NET (xUnit)](/docs/test-engine/test-collection/dotnet-collectors) - [Python (pytest)](/docs/test-engine/test-collection/python-collectors) - [Ruby (RSpec, minitest)](/docs/test-engine/test-collection/ruby-collectors) - [Rust (Cargo test)](/docs/test-engine/test-collection/rust-collectors) - [Swift (XCTest)](/docs/test-engine/test-collection/swift-collectors) - [Other languages or test runners](/docs/test-engine/other-collectors) Note that you can also [create your own test collectors](/docs/test-engine/test-collection/your-own-collectors).
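If you build your own collection mechanism, the core of it is a multipart POST to the `https://analytics-api.buildkite.com/v1/uploads` endpoint with your results file, a `format` field, and `run_env[...]` metadata fields, the same fields shown in the curl upload example on the Go collectors page. The following Python sketch assembles those fields from Buildkite job environment variables; `build_upload_fields` is a hypothetical helper for illustration, not part of any Buildkite library:

```python
import os

UPLOAD_URL = "https://analytics-api.buildkite.com/v1/uploads"

def build_upload_fields(env=os.environ):
    """Map Buildkite job environment variables to the run_env[...] form
    fields accepted by the uploads endpoint."""
    return {
        "format": "junit",
        "run_env[CI]": "buildkite",
        "run_env[key]": env.get("BUILDKITE_BUILD_ID", ""),
        "run_env[number]": env.get("BUILDKITE_BUILD_NUMBER", ""),
        "run_env[job_id]": env.get("BUILDKITE_JOB_ID", ""),
        "run_env[branch]": env.get("BUILDKITE_BRANCH", ""),
        "run_env[commit_sha]": env.get("BUILDKITE_COMMIT", ""),
        "run_env[message]": env.get("BUILDKITE_MESSAGE", ""),
        "run_env[url]": env.get("BUILDKITE_BUILD_URL", ""),
    }

# Sending the upload (requires the third-party `requests` package):
#
#   import requests
#   resp = requests.post(
#       UPLOAD_URL,
#       headers={"Authorization": f'Token token="{os.environ["BUILDKITE_ANALYTICS_TOKEN"]}"'},
#       data=build_upload_fields(),
#       files={"data": open("junit.xml", "rb")},
#   )
```

This mirrors the curl upload used elsewhere in these docs; the dedicated collectors listed above handle the same upload for you.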
If your test runner executions are automated through CI/CD, learn more about the [CI environment variables](/docs/test-engine/test-collection/ci-environments) that test collectors (and other test collection mechanisms) provide to your Buildkite test suites, for reporting in test runs. Once you have configured the appropriate test collectors for your projects, you can proceed to run your tests, and analyze and report on their data through their [test suites](/docs/test-engine/test-suites). --- ### Android URL: https://buildkite.com/docs/test-engine/test-collection/android-collectors #### Android collectors To use Test Engine with your Android projects, use the :github: [`test-collector-android`](https://github.com/buildkite/test-collector-android) package. You can also upload test results by importing [JSON](/docs/test-engine/test-collection/importing-json) or [JUnit XML](/docs/test-engine/test-collection/importing-junit-xml). ##### Android Before you start, make sure your tests run with access to [CI environment variables](/docs/test-engine/test-collection/ci-environments). 1. [Create a test suite](/docs/test-engine) and copy the test suite API token. 1. [Securely](/docs/pipelines/security/secrets/managing) set the `BUILDKITE_ANALYTICS_TOKEN` secret on your CI to the API token from the previous step. This needs to be set on your CI server if you're running the Buildkite collector via CI, or otherwise on your local machine. 1. **Unit Test Collector.** In your top-level `build.gradle.kts` file, add the following to your classpath: ``` buildscript { ... dependencies { ... classpath("com.buildkite.test-collector-android:unit-test-collector-plugin:0.1.0") } } ``` Then, in your app-level `build.gradle.kts`, add the following plugin: ``` plugins { id("com.buildkite.test-collector-android.unit-test-collector-plugin") } ``` That's it! 1.
**Instrumented Test Collector.** In your app-level `build.gradle.kts` file, add the following dependency: ``` androidTestImplementation("com.buildkite.test-collector-android:instrumented-test-collector:0.1.0") ``` Then, in the same file, expose the `BUILDKITE_ANALYTICS_TOKEN` environment variable to your instrumented tests using `buildConfigField`: ``` android { ... defaultConfig { ... buildConfigField( "String", "BUILDKITE_ANALYTICS_TOKEN", "\"${System.getenv("BUILDKITE_ANALYTICS_TOKEN")}\"" ) } } ``` Sync Gradle, and rebuild the project to ensure the `BuildConfig` is generated. Create the following class in your `androidTest` directory, for example, `src/androidTest/java/com/myapp/MyTestCollector.kt`: ``` class MyTestCollector : InstrumentedTestCollector( apiToken = BuildConfig.BUILDKITE_ANALYTICS_TOKEN ) ``` Again, in your app-level `build.gradle.kts` file, instruct Gradle to use your test collector: ``` testInstrumentationRunnerArguments += mapOf( "listener" to "com.mycompany.myapp.MyTestCollector" // Make sure to use the correct package name here ) ``` Note: This test collector uploads test data via the device under test. Make sure your Android device/emulator has network access. 1. Commit and push your changes: ```bash git checkout -b add-buildkite-test-engine git commit -am "Add Buildkite Test Engine" git push origin add-buildkite-test-engine ``` Once you're done, in your Test Engine dashboard, you'll see analytics of test executions on all branches that include this code. If you don't see branch names, build numbers, or commit hashes in Test Engine, then read [CI environments](/docs/test-engine/test-collection/ci-environments) to learn more about exporting your environment to the collector. ###### Debugging To enable debugging output, create and set the `BUILDKITE_ANALYTICS_DEBUG_ENABLED` environment variable to `true` in your test environment (CI server or local machine). For instrumented tests debugging, access the variable using `buildConfigField` and pass it through your `MyTestCollector` class.
Refer to the [example project](https://github.com/buildkite/test-collector-android/tree/main/example) for implementation details. --- ### Elixir URL: https://buildkite.com/docs/test-engine/test-collection/elixir-collectors #### Elixir collectors To use Test Engine with your Elixir projects, use :github: [`test_collector_elixir`](https://github.com/buildkite/test_collector_elixir) with ExUnit. You can also upload test results by importing [JSON](/docs/test-engine/test-collection/importing-json) or [JUnit XML](/docs/test-engine/test-collection/importing-junit-xml). ##### ExUnit [ExUnit](https://hexdocs.pm/ex_unit/) is an Elixir unit test library. Before you start, make sure ExUnit runs with access to [CI environment variables](/docs/test-engine/test-collection/ci-environments). 1. Create a [test suite](/docs/test-engine/test-suites) and copy the API token that it gives you. 1. Add `buildkite_test_collector` to your list of dependencies in `mix.exs`: ```elixir def deps do [ {:buildkite_test_collector, "~> 0.3.1", only: [:test]} ] end ``` 1. Set up your API token: In your `config/test.exs` (or other environment configuration as appropriate), add the Buildkite Test Engine API token. We suggest that you retrieve the token from the environment, and configure your CI environment accordingly (for example, using secrets). ```elixir import Config config :buildkite_test_collector, api_key: System.get_env("BUILDKITE_ANALYTICS_TOKEN") ``` 1. Add `BuildkiteTestCollectorFormatter` to your ExUnit configuration in `test/test_helper.exs`: ```elixir ExUnit.configure formatters: [BuildkiteTestCollector.Formatter, ExUnit.CLIFormatter] ExUnit.start ``` 1. Run your tests like normal. Note that we attempt to detect the presence of several common CI environments; however, if this fails, you can set the `CI` environment variable to any value and it will work. ```sh $ mix test ... Finished in 0.01 seconds (0.003s on load, 0.004s on tests) 3 tests, 0 failures Randomized with seed 12345 ``` 1.
Verify that it works. If all is well, you should see the test run analytics on the Buildkite Test Engine dashboard. --- ### Go URL: https://buildkite.com/docs/test-engine/test-collection/golang-collectors #### Configuring Go with Test Engine To use Test Engine with your [Go](https://go.dev/) language projects, use [gotestsum](https://github.com/gotestyourself/gotestsum) to generate JUnit XML files, then [upload the JUnit XML files](/docs/test-engine/test-collection/importing-junit-xml) to Test Engine. 1. Install [gotestsum](https://github.com/gotestyourself/gotestsum): ```sh go install gotest.tools/gotestsum@latest ``` 1. Use gotestsum to run your tests and output JUnit XML, by replacing `go test` with `gotestsum`, for example: ```sh gotestsum --junitfile junit.xml ./... ``` 1. Upload the JUnit XML file to Buildkite: ```sh curl \ -X POST \ --fail-with-body \ -H "Authorization: Token token=\"$BUILDKITE_ANALYTICS_TOKEN\"" \ -F "data=@junit.xml" \ -F "format=junit" \ -F "run_env[CI]=buildkite" \ -F "run_env[key]=$BUILDKITE_BUILD_ID" \ -F "run_env[number]=$BUILDKITE_BUILD_NUMBER" \ -F "run_env[job_id]=$BUILDKITE_JOB_ID" \ -F "run_env[branch]=$BUILDKITE_BRANCH" \ -F "run_env[commit_sha]=$BUILDKITE_COMMIT" \ -F "run_env[message]=$BUILDKITE_MESSAGE" \ -F "run_env[url]=$BUILDKITE_BUILD_URL" \ https://analytics-api.buildkite.com/v1/uploads ``` See [gotestsum](https://github.com/gotestyourself/gotestsum) for full documentation of its features and command-line flags. --- ### JavaScript URL: https://buildkite.com/docs/test-engine/test-collection/javascript-collectors #### JavaScript collectors To use Test Engine with your JavaScript (npm) projects, use the :github: [`test-collector-javascript`](https://github.com/buildkite/test-collector-javascript) package with a supported test framework.
Test Engine supports the following test frameworks: - [Jest](https://jestjs.io/) - [Jasmine](https://jasmine.github.io/) - [Mocha](https://mochajs.org/) - [Cypress](https://www.cypress.io) - [Playwright](https://playwright.dev) - [Vitest](https://vitest.dev/) You can also upload test results by importing [JSON](/docs/test-engine/test-collection/importing-json) or [JUnit XML](/docs/test-engine/test-collection/importing-junit-xml). ##### Add the test collector package Whichever test framework you use, you first need to add and authenticate the [`buildkite-test-collector`](https://www.npmjs.com/package/buildkite-test-collector). To add the test collector package: 1. In your CI environment, set the `BUILDKITE_ANALYTICS_TOKEN` environment variable to your Test Engine API token. To learn how to set environment variables securely in Pipelines, see [Managing pipeline secrets](/docs/pipelines/security/secrets/managing). 1. On the command line, create a new branch by running: ``` git checkout -b install-buildkite-test-engine ``` 1. Install [`buildkite-test-collector`](https://www.npmjs.com/package/buildkite-test-collector) using your package manager. For npm, run: ```shell npm install --save-dev buildkite-test-collector ``` For yarn, run: ```shell yarn add --dev buildkite-test-collector ``` ##### Configure the test framework With the test collector installed, you need to configure it in the test framework. ###### Jest If you're already using Jest, you can add `buildkite-test-collector/jest/reporter` to the list of reporters to collect test results into your Test Engine dashboard. To configure Jest: 1. Make sure Jest runs with access to [CI environment variables](/docs/test-engine/test-collection/ci-environments). 1. 
Add `"buildkite-test-collector/jest/reporter"` to [Jest's `reporters` configuration array](https://jestjs.io/docs/configuration#reporters-arraymodulename--modulename-options) (typically found in `jest.config.js`, `jest.config.ts`, or `package.json`): ```json { "reporters": ["default", "buildkite-test-collector/jest/reporter"], "testLocationInResults": true } ``` **Note:** The `"testLocationInResults": true` setting enables column and line capture for Test Engine. ###### Jasmine To configure Jasmine: 1. Follow the [Jasmine docs](https://jasmine.github.io/setup/nodejs.html#reporters) to add the Buildkite reporter. For example: ```js // SpecHelper.js var BuildkiteReporter = require("buildkite-test-collector/jasmine/reporter"); var buildkiteReporter = new BuildkiteReporter(); jasmine.getEnv().addReporter(buildkiteReporter); ``` 1. (Optional) To pass in the API token using a custom environment variable, use the following report options: ```js // SpecHelper.js var buildkiteReporter = new BuildkiteReporter(undefined, { token: process.env.CUSTOM_ENV_VAR, }); ``` ###### Mocha To configure Mocha: 1. Install the [mocha-multi-reporters](https://github.com/stanleyhlng/mocha-multi-reporters) package in your project by running: ``` npm install mocha-multi-reporters --save-dev ``` 1. Configure it to run your desired reporter and the Buildkite reporter: ```js // config.json { "reporterEnabled": "spec, buildkite-test-collector/mocha/reporter" } ``` 1. Update your test script to use the Buildkite reporter via mocha-multi-reporters: ```js // package.json "scripts": { "test": "mocha --reporter mocha-multi-reporters --reporter-options configFile=config.json" }, ``` 1. (Optional) To pass in the API token using a custom environment variable, use the report options.
Since the reporter options are passed in as a JSON file, we recommend you put the environment variable name as a string value in the `config.json`, which is retrieved using [dotenv](https://github.com/motdotla/dotenv) in the mocha reporter. ```js // config.json { "reporterEnabled": "spec, buildkite-test-collector/mocha/reporter", "buildkiteTestCollectorMochaReporterReporterOptions": { "token_name": "CUSTOM_ENV_VAR_NAME" } } ``` ###### Cypress To configure Cypress: 1. Make sure Cypress runs with access to [CI environment variables](/docs/test-engine/test-collection/ci-environments). 1. Update your [Cypress configuration](https://docs.cypress.io/guides/references/configuration). ```js // cypress.config.js // Send results to Test Engine reporter: "buildkite-test-collector/cypress/reporter", ``` **Note:** To pass in the API token using a custom environment variable, add the `reporterOptions` option to your Cypress configuration: ```js // cypress.config.js // Send results to Test Engine reporterOptions: { token_name: "CUSTOM_ENV_VAR_NAME" } ``` ###### Playwright If you're already using Playwright, you can add `buildkite-test-collector/playwright/reporter` to the list of reporters to collect test results into your Test Engine dashboard. To configure Playwright: 1. Make sure Playwright runs with access to [CI environment variables](/docs/test-engine/test-collection/ci-environments). 1. Add `"buildkite-test-collector/playwright/reporter"` to [Playwright's `reporter` configuration array](https://playwright.dev/docs/test-reporters#multiple-reporters) (typically found in `playwright.config.js`): ```js // playwright.config.js { "reporter": [ ["line"], ["buildkite-test-collector/playwright/reporter"] ] } ``` ###### Vitest If you are already using Vitest, you can add `buildkite-test-collector/vitest/reporter` to the list of reporters to collect test results in your Test Engine dashboard. 
To configure Vitest: Update your [Vitest configuration](https://vitest.dev/config/): ```js // vitest.config.js OR vite.config.js OR vitest.workspace.js test: { // Send results to Test Engine reporters: [ 'default', 'buildkite-test-collector/vitest/reporter' ], // Enable column + line capture for Test Engine includeTaskLocation: true, } ``` If you would like to pass in the API token using a custom environment variable, you can do so using the report options. ```js // vitest.config.js OR vite.config.js OR vitest.workspace.js test: { // Send results to Test Engine reporters: [ 'default', [ "buildkite-test-collector/vitest/reporter", { token: process.env.CUSTOM_ENV_VAR }, ], ], } ``` ##### Save the changes When your collector is installed, commit and push your changes: 1. Add the changes to the staging area by running: ```shell git add . ``` 1. Commit the changes by running: ```shell git commit -m "Install and set up Buildkite Test Engine" ``` 1. Push the changes by running: ```shell git push ``` ##### View the results After completing these steps, you'll see the analytics of test executions on all branches that include this code in the Test Engine dashboard. If you don't see branch names, build numbers, or commit hashes in the Test Engine dashboard, see [CI environments](/docs/test-engine/test-collection/ci-environments) to learn more about exporting your environment. ##### Upload custom tags for test executions You can group test executions using custom tags to compare metrics across different dimensions, such as: - Language versions - Cloud providers - Instance types - Team ownership - and more ###### Upload-level tags Tags configured on the collector will be included in each upload batch, and will be applied server-side to every execution therein. This is an efficient way to tag every execution with values that don't vary within one configuration, e.g. cloud environment details, language/framework versions. 
Upload-level tags may be overwritten by execution-level tags. ```js // Jest -- jest.config.js reporters: [ 'default', ['buildkite-test-collector/jest/reporter', { tags: { hello: "jest" } }] ], // Cypress -- cypress.config.js reporterOptions: { tags: { "hello": "cypress" }, }, // Mocha -- config.js "buildkiteTestCollectorMochaReporterReporterOptions": { "tags": { "hello": "mocha" } } // Playwright -- playwright.config.js reporter: [ ['line'], ['buildkite-test-collector/playwright/reporter', { tags: { "hello": "playwright" } }] ], ``` ##### Troubleshooting missing test executions and --forceExit Using the [`--forceExit`](https://jestjs.io/docs/cli#--forceexit) option when running Jest could result in missing test executions from Test Engine. `--forceExit` could potentially terminate any ongoing processes that are attempting to send test executions to Buildkite. We recommend using [`--detectOpenHandles`](https://jestjs.io/docs/cli#--detectopenhandles) to track down open handles which are preventing Jest from exiting cleanly. --- ### .NET URL: https://buildkite.com/docs/test-engine/test-collection/dotnet-collectors #### .NET collector To use Test Engine with your .NET projects, use the :github: [`test-collector-dotnet`](https://github.com/buildkite/test-collector-dotnet) package with xUnit. You can also upload test results by importing [JSON](/docs/test-engine/test-collection/importing-json) or [JUnit XML](/docs/test-engine/test-collection/importing-junit-xml). Before you start, make sure .NET runs with access to [CI environment variables](/docs/test-engine/test-collection/ci-environments). 1. Create a [test suite](/docs/test-engine/test-suites) and copy the API token that it gives you. 1. Add `Buildkite.TestAnalytics.Xunit` to your list of dependencies in your xUnit test project: ```sh $ dotnet add package Buildkite.TestAnalytics.Xunit ``` 1.
Set up your API token Add the `BUILDKITE_ANALYTICS_TOKEN` environment variable to your build system's environment. 1. Run your tests Run your tests like normal. Note that we attempt to detect the presence of several common CI environments; however, if this fails, you can set the `CI` environment variable to any value and it will work. ```sh $ dotnet test Buildkite.TestAnalytics.Tests ``` 1. Verify that it works If all is well, you should see the test run analytics on the Buildkite Test Engine dashboard. --- ### Python URL: https://buildkite.com/docs/test-engine/test-collection/python-collectors #### Python collectors To use Test Engine with your Python projects, use the [`buildkite-test-collector`](https://pypi.org/project/buildkite-test-collector/) package with pytest. You can also upload test results by importing [JSON](/docs/test-engine/test-collection/importing-json) or [JUnit XML](/docs/test-engine/test-collection/importing-junit-xml). ##### pytest collector pytest is a testing framework for Python. If you're already using pytest, then you can install `buildkite-test-collector` to collect test results into your Test Engine dashboard. Before you start, make sure pytest runs with access to [CI environment variables](/docs/test-engine/test-collection/ci-environments). To get started with `buildkite-test-collector`: 1. In your CI environment, set the `BUILDKITE_ANALYTICS_TOKEN` environment variable [securely](/docs/pipelines/security/secrets/managing) to your Buildkite Test Engine API token. 1. Add `buildkite-test-collector` to your list of dependencies. Some examples: If you're using a `requirements.txt` file, add `buildkite-test-collector` on a new line. If you're using a `setup.py` file, add `buildkite-test-collector` to the `extras_require` argument, like this: `extras_require={"dev": ["pytest", "buildkite-test-collector"]}` If you're using Pipenv, run `pipenv install --dev buildkite-test-collector`.
If you're using another tool, see your dependency management system's documentation for help. 1. Commit and push your changes: ```shell $ git add . $ git commit -m "Install and set up Buildkite Test Engine" $ git push ``` Once you're done, in your Test Engine dashboard, you'll see analytics of test executions on all branches that include this code. If you don't see branch names, build numbers, or commit hashes in Test Engine, then read [CI environments](/docs/test-engine/test-collection/ci-environments) to learn more about exporting your environment to the collector. ###### Upload custom tags for test executions You can group test executions using custom tags to compare metrics across different dimensions, such as: - Language versions - Cloud providers - Instance types - Team ownership - and more We offer a tagging solution based on [pytest custom markers](https://docs.pytest.org/en/stable/example/markers.html). ###### Upload-level tags In your `conftest.py` file, you can use a pytest global hook to tag all your test executions in a centralized way. ```python import pytest import sys def pytest_itemcollected(item): # add execution tag to all tests item.add_marker(pytest.mark.execution_tag("test.framework.name", "pytest")) item.add_marker(pytest.mark.execution_tag("test.framework.version", pytest.__version__)) item.add_marker(pytest.mark.execution_tag("cloud.provider", "aws")) item.add_marker(pytest.mark.execution_tag("language.version", sys.version)) ``` ###### Execution-level tags For more granular control, you can programmatically or statically add tags to target individual tests.
To do it statically, targeting a single test or module: ```python import pytest @pytest.mark.execution_tag("team", "frontend") def test_add(): assert 1 + 1 == 2 ``` To do it programmatically, for example: ```python import pytest import sys def pytest_itemcollected(item): # You can use the rich data provided by pytest to selectively add execution tags to tests if "e2e" in item.location[0]: item.add_marker(pytest.mark.execution_tag("type", "browser")) ``` --- ### Ruby URL: https://buildkite.com/docs/test-engine/test-collection/ruby-collectors #### Ruby collectors To use Test Engine with your [Ruby](https://www.ruby-lang.org/) projects, use the :github: [`test-collector-ruby`](https://github.com/buildkite/test-collector-ruby) gem with RSpec or minitest. You can also upload test results by importing [JSON](/docs/test-engine/test-collection/importing-json) or [JUnit XML](/docs/test-engine/test-collection/importing-junit-xml). ##### RSpec collector [RSpec](https://rspec.info/) is a behavior-driven development library for Ruby. If you're already using RSpec for your tests, add the `buildkite-test_collector` gem to your code to collect your test results into your Test Engine dashboard. Before you start, make sure RSpec runs with access to [CI environment variables](/docs/test-engine/test-collection/ci-environments). 1. Create a new branch: ``` git checkout -b install-buildkite-test-engine ``` 2. Add `buildkite-test_collector` to your `Gemfile` in the `:test` group: ```rb group :test do gem "buildkite-test_collector" end ``` 3. Run `bundle` to install the gem and update your `Gemfile.lock`: ```sh bundle ``` 4. Add the Test Engine code to your application in `spec/spec_helper.rb`, and set the `BUILDKITE_ANALYTICS_TOKEN` [securely](/docs/pipelines/security/secrets/managing) on your agent or agents.
Please ensure gems that patch `Net::HTTP`, like [httplog](https://github.com/trusche/httplog) and [sniffer](https://github.com/aderyabin/sniffer), are required before `buildkite/test_collector` to avoid conflicts. ```rb require "buildkite/test_collector" Buildkite::TestCollector.configure(hook: :rspec) ``` 5. Commit and push your changes: ```sh $ git add . $ git commit -m "Install and set up Buildkite Test Engine" $ git push ``` Once you're done, in your Test Engine dashboard, you'll see analytics of test executions on all branches that include this code. If you don't see branch names, build numbers, or commit hashes in the Test Engine UI, then see [CI environments](/docs/test-engine/test-collection/ci-environments) to learn more about exporting your environment to the collector. > 🚧 > Test Engine identifies tests using their descriptions and example group descriptions. To avoid test identity conflicts, ensure all test descriptions are unique. You can enforce uniqueness by using the RuboCop cops [RSpec/RepeatedDescription](https://docs.rubocop.org/rubocop-rspec/latest/cops_rspec.html#rspecrepeateddescription) and [RSpec/RepeatedExampleGroupDescription](https://docs.rubocop.org/rubocop-rspec/latest/cops_rspec.html#rspecrepeatedexamplegroupdescription), where [RuboCop](https://github.com/rubocop/rubocop) is a static code analyzer for Ruby. ###### Troubleshooting allow_any_instance_of errors If you're using RSpec and seeing errors related to `allow_any_instance_of` that look like this: ```ruby Failure/Error: allow_any_instance_of(Object).to receive(:sleep) Using `any_instance` to stub a method (sleep) that has been defined on a prepended module (Buildkite::TestCollector::Object::CustomObjectSleep) is not supported. ``` You can fix them by being more specific in your stubbing: replace `allow_any_instance_of(Object).to receive(:sleep)` with `allow_any_instance_of(TheClassUnderTest).to receive(:sleep)`.
###### Troubleshooting test grouping issues RSpec supports anonymous test cases: tests that are automatically named from the subject or the inputs to the expectations within the test. These generated names can be unstable across runs because they incorporate elements such as object IDs, database IDs, and timestamps. As a consequence, Test Engine assigns the test a new identity on each run, which makes historical data difficult to track and analyze. To avoid this, provide an explicit, stable description for each test case in your RSpec test suite, so that its identity stays consistent across runs and you can track and analyze test performance over time. ##### minitest collector [minitest](https://github.com/minitest/minitest) provides a complete suite of testing facilities supporting TDD, BDD, mocking, and benchmarking. If you're already using minitest for your tests, add the `buildkite-test_collector` gem to your code to collect your test results into your Test Engine dashboard. 1. Create a new branch: ``` git checkout -b install-buildkite-collector ``` 2. Add `buildkite-test_collector` to your `Gemfile` in the `:test` group: ```rb group :test do gem "buildkite-test_collector" end ``` 3. Run `bundle` to install the gem and update your `Gemfile.lock`: ```sh bundle ``` 4. Add the Test Engine code to your application in `test/test_helper.rb`, and set the `BUILDKITE_ANALYTICS_TOKEN` [securely](/docs/pipelines/security/secrets/managing) on your agent or agents. Please ensure gems that patch `Net::HTTP`, like [httplog](https://github.com/trusche/httplog) and [sniffer](https://github.com/aderyabin/sniffer), are required before `buildkite/test_collector` to avoid conflicts.
```rb require "buildkite/test_collector" Buildkite::TestCollector.configure(hook: :minitest) ``` 5. Commit and push your changes: ```sh git add . git commit -m "Install and set up Buildkite Test Engine" git push ``` Once you're done, in your Test Engine dashboard, you'll see analytics of test executions on all branches that include this code. If you don't see branch names, build numbers, or commit hashes in the Test Engine UI, then see [CI environments](/docs/test-engine/test-collection/ci-environments) to learn more about exporting your environment to the minitest collector. ##### Adding annotation spans This gem allows adding custom annotations to the span data sent to Buildkite using the [`.annotate`](https://github.com/buildkite/test-collector-ruby/blob/d9fe11341e4aa470e766febee38124b644572360/lib/buildkite/test_collector.rb#L64) method. For example: ```ruby Buildkite::TestCollector.annotate("Visiting login") ``` This is particularly useful for tests that generate a lot of span data, such as system/feature tests. You can find all _annotations_ under **Span timeline** at the bottom of every test execution page. ##### Upload custom tags for test executions You can group test executions using custom tags to compare metrics across different dimensions, such as: - Language versions - Cloud providers - Instance types - Team ownership - and more ###### Upload-level tags Tags configured on the collector will be included in each upload batch, and will be applied server-side to every execution therein. This is an efficient way to tag every execution with values that don't vary within one configuration, e.g. cloud environment details, language/framework versions. Upload-level tags may be overwritten by execution-level tags.
```rb require "buildkite/test_collector" Buildkite::TestCollector.configure( tags: { "cloud.provider" => "aws", "host.type" => "m5.4xlarge", "language.version" => RUBY_VERSION, } ) ``` ###### Execution-level tags For more granular control, you can programmatically add tags during individual test executions using the `.tag_execution` method. For example, with RSpec: ```rb RSpec.configuration.before(:each) do |example| Buildkite::TestCollector.tag_execution("team", example.metadata[:team]) Buildkite::TestCollector.tag_execution("feature", example.metadata[:feature]) end ``` ##### VCR If your test suites use [VCR](https://github.com/vcr/vcr) to stub network requests, you'll need to modify the config to allow actual network requests to Test Engine. ```ruby VCR.configure do |c| c.ignore_hosts "analytics-api.buildkite.com" end ``` --- ### Rust URL: https://buildkite.com/docs/test-engine/test-collection/rust-collectors #### Rust collector To use Test Engine with your [Rust](https://www.rust-lang.org/) projects, use the :github: [`test-collector-rust`](https://github.com/buildkite/test-collector-rust) package with `cargo test`. You can also upload test results by importing [JSON](/docs/test-engine/test-collection/importing-json) or [JUnit XML](/docs/test-engine/test-collection/importing-junit-xml). Before you start, make sure Rust runs with access to [CI environment variables](/docs/test-engine/test-collection/ci-environments). 1. Create a [test suite](/docs/test-engine/test-suites) and copy the API token that it gives you. 1. Install the `buildkite-test-collector` crate: ```sh $ cargo install buildkite-test-collector # or $ cargo install --git https://github.com/buildkite/test-collector-rust buildkite-test-collector ``` 1. Configure your environment: Set the `BUILDKITE_ANALYTICS_TOKEN` environment variable to contain the token provided by the analytics project settings. The collector tries to detect several common CI environments based on the environment variables that are present.
If this detection fails, the application will crash with an error. To force the use of a "generic CI environment", set the `CI` environment variable to any non-empty value. 1. Change your test output to JSON format: In your CI environment, you will need to change your output format to JSON and add `--report-time` to include execution times in the output. Unfortunately, these are currently unstable options for Rust, so some extra command line options are needed. Once you have the JSON output, you can pipe it through the `buildkite-test-collector` binary; the input JSON is echoed back to STDOUT so that you can still operate on it if needed. ```sh $ cargo test -- -Z unstable-options --format json --report-time | buildkite-test-collector ``` 1. Confirm correct operation. Verify that the run is visible in the Buildkite Test Engine dashboard. --- ### Swift URL: https://buildkite.com/docs/test-engine/test-collection/swift-collectors #### Swift collectors To use Test Engine with your Swift projects, use the :github: [`test-collector-swift`](https://github.com/buildkite/test-collector-swift) package with XCTest. You can also upload test results by importing [JSON](/docs/test-engine/test-collection/importing-json) or [JUnit XML](/docs/test-engine/test-collection/importing-junit-xml). ##### XCTest [XCTest](https://developer.apple.com/documentation/xctest) is a test framework to write unit tests for your Xcode projects. Before you start, make sure XCTest runs with access to [CI environment variables](/docs/test-engine/test-collection/ci-environments). 1. [Create a test suite](/docs/test-engine) and copy the test suite API token. 1. [Securely](/docs/pipelines/security/secrets/managing) set the `BUILDKITE_ANALYTICS_TOKEN` secret on your CI to the API token from the previous step. If you're testing an Xcode project, note that Xcode doesn't automatically pass environment variables to the test runner, so you need to add them manually.
In your test scheme or test plan, go to the **Environment Variables** section and add the following key-value pair: ```yaml BUILDKITE_ANALYTICS_TOKEN: $(BUILDKITE_ANALYTICS_TOKEN) ``` 1. In the `Package.swift` file, add `https://github.com/buildkite/test-collector-swift` to the dependencies and add `BuildkiteTestCollector` to any test target requiring analytics: ```swift let package = Package( name: "MyProject", dependencies: [ .package(url: "https://github.com/buildkite/test-collector-swift", from: "0.3.0") ], targets: [ .target(name: "MyProject"), .testTarget( name: "MyProjectTests", dependencies: [ "MyProject", .product(name: "BuildkiteTestCollector", package: "test-collector-swift") ] ) ] ) ``` 1. Commit and push your changes: ```bash git checkout -b add-buildkite-test-engine git commit -am "Add Buildkite Test Engine" git push origin add-buildkite-test-engine ``` Once you're done, in your Test Engine dashboard, you'll see analytics of test executions on all branches that include this code. If you don't see branch names, build numbers, or commit hashes in Test Engine, then read [CI environments](/docs/test-engine/test-collection/ci-environments) to learn more about exporting your environment to the collector. ###### Debugging To enable debugging output, set the `BUILDKITE_ANALYTICS_DEBUG_ENABLED` environment variable to `true`. --- ### Overview URL: https://buildkite.com/docs/test-engine/test-collection/other-collectors #### Collecting test data from other test runners If a native Buildkite test collector is not available for your language or test runner, you can instead use any of the following mechanisms to integrate your particular test runner with Test Engine: - Importing your test run data from: * [JUnit XML](/docs/test-engine/test-collection/importing-junit-xml) * [JSON](/docs/test-engine/test-collection/importing-json) - [Writing your own test collector](/docs/test-engine/test-collection/your-own-collectors). 
--- ### Importing JUnit XML URL: https://buildkite.com/docs/test-engine/test-collection/importing-junit-xml #### Importing JUnit XML While most test frameworks have a built-in JUnit XML export feature, these JUnit reports do not provide detailed span information. Therefore, features in Test Engine that depend on span information aren't available when using JUnit as a data source. If you need span information, consider using the [JSON import](/docs/test-engine/test-collection/importing-json) API instead. ##### Mandatory JUnit XML attributes The following attributes are mandatory for the `<testcase>` element: - `classname`: full class name for the class the test method is in. - `name`: name of the test method. To learn more about the JUnit XML file format, see [Common JUnit XML format & examples](https://github.com/testmoapp/junitxml). ##### How to import JUnit XML in Buildkite It's possible to import XML-formatted JUnit (or [JSON](/docs/test-engine/test-collection/importing-json#how-to-import-json-in-buildkite)) test results to Buildkite Test Engine with or without the help of a plugin. ###### Using a plugin To import XML-formatted JUnit test results to Test Engine using the [Test Collector plugin](https://github.com/buildkite-plugins/test-collector-buildkite-plugin) from a build step: ```yml steps: - label: "🔨 Test" command: "make test" plugins: - test-collector#v1.0.0: files: "test/junit-*.xml" format: "junit" ``` See more configuration information in the [Test Collector plugin README](https://github.com/buildkite-plugins/test-collector-buildkite-plugin). Using the plugin is the recommended approach, as it makes debugging easier if an issue occurs. ###### Not using a plugin If for some reason you cannot or do not want to use the [Test Collector plugin](https://github.com/buildkite-plugins/test-collector-buildkite-plugin), or if you are looking to implement your own integration, another approach is possible.
To import XML-formatted JUnit test results to Test Engine, make a `POST` request to `https://analytics-api.buildkite.com/v1/uploads` with a `multipart/form-data` body. For example, to import the contents of a `junit.xml` file in a Buildkite pipeline: 1. Securely [set the Test Engine token environment variable](/docs/pipelines/security/secrets/managing) (`BUILDKITE_ANALYTICS_TOKEN`). 1. Run the following `curl` command: ```sh curl \ -X POST \ -H "Authorization: Token token=\"$BUILDKITE_ANALYTICS_TOKEN\"" \ -F "data=@junit.xml" \ -F "format=junit" \ -F "run_env[CI]=buildkite" \ -F "run_env[key]=$BUILDKITE_BUILD_ID" \ -F "run_env[url]=$BUILDKITE_BUILD_URL" \ -F "run_env[branch]=$BUILDKITE_BRANCH" \ -F "run_env[commit_sha]=$BUILDKITE_COMMIT" \ -F "run_env[number]=$BUILDKITE_BUILD_NUMBER" \ -F "run_env[job_id]=$BUILDKITE_JOB_ID" \ -F "run_env[message]=$BUILDKITE_MESSAGE" \ https://analytics-api.buildkite.com/v1/uploads ``` To learn more about passing through environment variables to `run_env`-prefixed fields, see the [Buildkite](/docs/test-engine/test-collection/ci-environments#buildkite) or [Other CI providers](/docs/test-engine/test-collection/ci-environments#other-ci-providers) (including manual setup) sections on the [CI environments](/docs/test-engine/test-collection/ci-environments) page. Note that when a payload is processed, Buildkite validates and queues each test execution result in a loop. For that reason, it is possible for some to be queued and others to be skipped. Even when some or all test executions are skipped, the REST API responds with `202 Accepted` because the upload and the run were created in the database, but the skipped test execution results were not ingested. Currently, the errors returned contain no information on individual records that failed the validation. This may complicate the process of fixing and retrying the request. A single file can have a maximum of 5000 test results; if that limit is exceeded, the upload request fails.
To upload more than 5000 test results for a single run, upload multiple smaller files with the same `run_env[key]`. ###### Upload level custom tags You can configure custom tags at the upload level. These tags will be applied server-side to all test executions in the upload. This is an efficient way to tag every execution with values that don't vary within one configuration, for example, cloud environment details or language/framework versions. ```sh curl \ -X POST \ ... \ -F "tags[team]=frontend" \ -F "tags[feature]=alchemy" \ https://analytics-api.buildkite.com/v1/uploads ``` If you need to import per-execution level custom tags, consider using [JSON import](/docs/test-engine/test-collection/importing-json). ##### How to import JUnit XML in CircleCI To import XML-formatted JUnit test results, make a `POST` request to `https://analytics-api.buildkite.com/v1/uploads` with a `multipart/form-data` body. For example, to import the contents of a `junit.xml` file in a CircleCI pipeline: 1. Securely [set the Test Engine token environment variable](/docs/pipelines/security/secrets/managing) (`BUILDKITE_ANALYTICS_TOKEN`). 1. Run the following `curl` command: ```sh curl \ -X POST \ -H "Authorization: Token token=\"$BUILDKITE_ANALYTICS_TOKEN\"" \ -F "data=@junit.xml" \ -F "format=junit" \ -F "run_env[CI]=circleci" \ -F "run_env[key]=$CIRCLE_WORKFLOW_ID-$CIRCLE_BUILD_NUM" \ -F "run_env[number]=$CIRCLE_BUILD_NUM" \ -F "run_env[branch]=$CIRCLE_BRANCH" \ -F "run_env[commit_sha]=$CIRCLE_SHA1" \ -F "run_env[url]=$CIRCLE_BUILD_URL" \ https://analytics-api.buildkite.com/v1/uploads ``` To learn more about passing through environment variables to `run_env`-prefixed fields, see the [CI environments > CircleCI](/docs/test-engine/test-collection/ci-environments#circleci) section. Note that when a payload is processed, Buildkite validates and queues each test execution result in a loop. For that reason, it is possible for some to be queued and others to be skipped.
Even when some or all test executions are skipped, the REST API responds with `202 Accepted` because the upload and the run were created in the database, but the skipped test execution results were not ingested. Currently, the errors returned contain no information on individual records that failed the validation. This may complicate the process of fixing and retrying the request. A single file can have a maximum of 5000 test results; if that limit is exceeded, the upload request fails. To upload more than 5000 test results for a single run, upload multiple smaller files with the same `run_env[key]`. ##### How to import JUnit XML in GitHub Actions To import XML-formatted JUnit test results, make a `POST` request to `https://analytics-api.buildkite.com/v1/uploads` with a `multipart/form-data` body. For example, to import the contents of a `junit.xml` file in a GitHub Actions pipeline: 1. Securely [set the Test Engine token environment variable](/docs/pipelines/security/secrets/managing) (`BUILDKITE_ANALYTICS_TOKEN`). 1. Run the following `curl` command: ```sh curl \ -X POST \ --fail-with-body \ -H "Authorization: Token token=\"$BUILDKITE_ANALYTICS_TOKEN\"" \ -F "data=@junit.xml" \ -F "format=junit" \ -F "run_env[CI]=github_actions" \ -F "run_env[key]=$GITHUB_ACTION-$GITHUB_RUN_NUMBER-$GITHUB_RUN_ATTEMPT" \ -F "run_env[number]=$GITHUB_RUN_NUMBER" \ -F "run_env[branch]=$GITHUB_REF" \ -F "run_env[commit_sha]=$GITHUB_SHA" \ -F "run_env[url]=https://github.com/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID" \ https://analytics-api.buildkite.com/v1/uploads ``` To learn more about passing through environment variables to `run_env`-prefixed fields, see the [CI environments > GitHub Actions](/docs/test-engine/test-collection/ci-environments#github-actions) section. Note that when a payload is processed, Buildkite validates and queues each test execution result in a loop. For that reason, it is possible for some to be queued and others to be skipped.
Even when some or all test executions are skipped, the REST API responds with `202 Accepted` because the upload and the run were created in the database, but the skipped test execution results were not ingested. Currently, the errors returned contain no information on individual records that failed the validation. This may complicate the process of fixing and retrying the request. A single file can have a maximum of 5000 test results; if that limit is exceeded, the upload request fails. To upload more than 5000 test results for a single run, upload multiple smaller files with the same `run_env[key]`. --- ### Importing JSON URL: https://buildkite.com/docs/test-engine/test-collection/importing-json #### Importing JSON If a test collector is not available for your test framework, you can upload test results directly to the Test Engine API or [write your own test collector](/docs/test-engine/test-collection/your-own-collectors). You can upload JSON-formatted test results (described on this page) or [JUnit XML](/docs/test-engine/test-collection/importing-junit-xml). ##### How to import JSON in Buildkite It's possible to import JSON (or [JUnit](/docs/test-engine/test-collection/importing-junit-xml#how-to-import-junit-xml-in-buildkite) files) to Buildkite Test Engine with or without the help of a plugin. ###### Using a plugin To import [JSON-formatted test results](#json-test-results-data-reference) to Test Engine using the [Test Collector plugin](https://github.com/buildkite-plugins/test-collector-buildkite-plugin) from a build step: ```yml steps: - label: "🔨 Test" command: "make test" plugins: - test-collector#v1.0.0: files: "test-data-*.json" format: "json" ``` See more configuration information in the [Test Collector plugin README](https://github.com/buildkite-plugins/test-collector-buildkite-plugin). Using the plugin is the recommended approach, as it makes debugging easier if an issue occurs.
###### Without a plugin If for some reason you cannot or do not want to use the [Test Collector plugin](https://github.com/buildkite-plugins/test-collector-buildkite-plugin), or if you are looking to implement your own integration, another approach is possible. To import [JSON-formatted test results](#json-test-results-data-reference) in Buildkite, make a `POST` request to `https://analytics-api.buildkite.com/v1/uploads` with a `multipart/form-data` body. For example, to import the contents of a [JSON-formatted test results](#json-test-results-data-reference) file (`test-results.json`): 1. Securely [set the Test Engine token environment variable](/docs/pipelines/security/secrets/managing) (`BUILDKITE_ANALYTICS_TOKEN`). 2. Run the following `curl` command: ```sh curl \ -X POST \ -H "Authorization: Token token=\"$BUILDKITE_ANALYTICS_TOKEN\"" \ -F "data=@test-results.json" \ -F "format=json" \ -F "run_env[CI]=buildkite" \ -F "run_env[key]=$BUILDKITE_BUILD_ID" \ -F "run_env[url]=$BUILDKITE_BUILD_URL" \ -F "run_env[branch]=$BUILDKITE_BRANCH" \ -F "run_env[commit_sha]=$BUILDKITE_COMMIT" \ -F "run_env[number]=$BUILDKITE_BUILD_NUMBER" \ -F "run_env[job_id]=$BUILDKITE_JOB_ID" \ -F "run_env[message]=$BUILDKITE_MESSAGE" \ https://analytics-api.buildkite.com/v1/uploads ``` To learn more about passing through environment variables to `run_env`-prefixed fields, see the [Buildkite](/docs/test-engine/test-collection/ci-environments#buildkite) or [Other CI providers](/docs/test-engine/test-collection/ci-environments#other-ci-providers) (including manual setup) sections on the [CI environments](/docs/test-engine/test-collection/ci-environments) page. A single file can have a maximum of 5000 test results; if that limit is exceeded, the upload request fails. To upload more than 5000 test results for a single run, upload multiple smaller files with the same `run_env[key]`.
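Splitting a large result set can be scripted before uploading. The following is a minimal sketch, assuming your results are already collected as a JSON array; the `chunk_results` and `write_chunks` helper names and the `test-results` file prefix are illustrative, not part of the Test Engine API:

```python
import json

MAX_RESULTS_PER_UPLOAD = 5000  # documented per-file limit

def chunk_results(results, limit=MAX_RESULTS_PER_UPLOAD):
    """Split a list of test result objects into upload-sized chunks."""
    return [results[i:i + limit] for i in range(0, len(results), limit)]

def write_chunks(results, prefix="test-results"):
    """Write each chunk to its own file (test-results-0.json,
    test-results-1.json, ...) and return the file paths. Upload each
    file with the same run_env[key] so they count as one run."""
    paths = []
    for n, chunk in enumerate(chunk_results(results)):
        path = f"{prefix}-{n}.json"
        with open(path, "w") as f:
            json.dump(chunk, f)
        paths.append(path)
    return paths
```

Each generated file can then be uploaded with the same `curl` command shown above, keeping `run_env[key]` identical across requests.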
###### Upload level custom tags You can configure custom tags at the upload level; they are applied server-side to every execution in the upload. This is an efficient way to tag every execution with values that don't vary within one configuration, e.g. cloud environment details, language/framework versions. ```sh curl \ -X POST \ ... \ -F "tags[team]=frontend" \ -F "tags[feature]=alchemy" \ https://analytics-api.buildkite.com/v1/uploads ``` Upload-level tags may be overwritten by execution-level tags; see [Execution level custom tags](/docs/test-engine/test-collection/importing-json#json-test-results-data-reference-execution-level-custom-tags). ##### How to import JSON in CircleCI To import [JSON-formatted test results](#json-test-results-data-reference), make a `POST` request to `https://analytics-api.buildkite.com/v1/uploads` with a `multipart/form-data` body, including as many of the `run_env` fields shown below as possible. For example, to import the contents of a `test-results.json` file in a CircleCI pipeline: 1. Securely [set the Test Engine token environment variable](/docs/pipelines/security/secrets/managing) (`BUILDKITE_ANALYTICS_TOKEN`). 2. Run the following `curl` command: ```sh curl \ -X POST \ -H "Authorization: Token token=\"$BUILDKITE_ANALYTICS_TOKEN\"" \ -F "data=@test-results.json" \ -F "format=json" \ -F "run_env[CI]=circleci" \ -F "run_env[key]=$CIRCLE_WORKFLOW_ID-$CIRCLE_BUILD_NUM" \ -F "run_env[number]=$CIRCLE_BUILD_NUM" \ -F "run_env[branch]=$CIRCLE_BRANCH" \ -F "run_env[commit_sha]=$CIRCLE_SHA1" \ -F "run_env[url]=$CIRCLE_BUILD_URL" \ https://analytics-api.buildkite.com/v1/uploads ``` To learn more about passing through environment variables to `run_env`-prefixed fields, see the [CI environments > CircleCI](/docs/test-engine/test-collection/ci-environments#circleci) section. A single file can have a maximum of 5000 test results; if that limit is exceeded, the upload request fails.
To upload more than 5000 test results for a single run, upload multiple smaller files with the same `run_env[key]`. ##### How to import JSON in GitHub Actions To import [JSON-formatted test results](#json-test-results-data-reference), make a `POST` request to `https://analytics-api.buildkite.com/v1/uploads` with a `multipart/form-data` body, including as many of the `run_env` fields shown below as possible. For example, to import the contents of a `test-results.json` file in a GitHub Actions pipeline run: 1. Securely [set the Test Engine token environment variable](/docs/pipelines/security/secrets/managing) (`BUILDKITE_ANALYTICS_TOKEN`). 2. Run the following `curl` command: ```sh curl \ -X POST \ -H "Authorization: Token token=\"$BUILDKITE_ANALYTICS_TOKEN\"" \ -F "data=@test-results.json" \ -F "format=json" \ -F "run_env[CI]=github_actions" \ -F "run_env[key]=$GITHUB_ACTION-$GITHUB_RUN_NUMBER-$GITHUB_RUN_ATTEMPT" \ -F "run_env[number]=$GITHUB_RUN_NUMBER" \ -F "run_env[branch]=$GITHUB_REF" \ -F "run_env[commit_sha]=$GITHUB_SHA" \ -F "run_env[url]=https://github.com/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID" \ https://analytics-api.buildkite.com/v1/uploads ``` To learn more about passing through environment variables to `run_env`-prefixed fields, see the [CI environments > GitHub Actions](/docs/test-engine/test-collection/ci-environments#github-actions) section. A single file can have a maximum of 5000 test results; if that limit is exceeded, the upload request fails. To upload more than 5000 test results for a single run, upload multiple smaller files with the same `run_env[key]`. ##### JSON test results data reference JSON test results data is made up of an array of one or more "test result" objects. A test result object contains an overall result and metadata. It also contains a `history` object, which is a summary of the duration of the test run.
Within the history object, detailed `span` objects record the highest resolution details of the test run. Schematically, the JSON test results data is like this: - [Test results](#json-test-results-data-reference-test-result-objects) + [History](#json-test-results-data-reference-history-objects) - [Spans](#json-test-results-data-reference-span-objects) + [Detail](#json-test-results-data-reference-detail-objects) Or in a simplified code view: ```js [ { /* Test result object */ "history": { /* history object */ "children": [ /* span objects */ ] } }, { /* Test result object */ }, ] ``` ###### Test result objects A test result object represents a single test run. **Example:** ```js { "id": "95f7e024-9e0a-450f-bc64-9edb62d43fa9", "scope": "Analytics::Upload associations", "name": "fails", "location": "./spec/models/analytics/upload_spec.rb:24", "file_name": "./spec/models/analytics/upload_spec.rb", "result": "failed", "failure_reason": "Failure/Error: expect(true).to eq false", "failure_expanded": [ /* failure_expanded object */ ], "history": { /* history object */ } } ``` ###### Failure expanded objects A failure expanded array contains extra details about the failed test. **Example:** ```js { "expanded": [ "  expected: false", "       got: true", "", "  (compared using ==)", "", "  Diff:", "  @@ -1 +1 @@", "  -false", "  +true" ], "backtrace": [ "./spec/models/analytics/upload_spec.rb:25:in `block (3 levels) in '", "./spec/support/log.rb:17:in `run'", "./spec/support/log.rb:66:in `block (2 levels) in '", "./spec/support/database.rb:19:in `block (2 levels) in '", "/Users/abc/Documents/rspec-buildkite-analytics/lib/rspec/buildkite/analytics/uploader.rb:153:in `block (2 levels) in configure'", "-e:1:in `'" ] } ``` ###### History objects A history object represents the overall duration of the test run and contains detailed span data, more finely recording the test run.
**Example:** ```js { "start_at": 347611.724809, "end_at": 347612.451041, "duration": 0.726232000044547, "children": [ /* span objects */ ] } ``` ###### Execution level custom tags You can add arbitrary tags to your test executions to enable custom grouping and filtering of test metrics. **Example:** ```json { "team": "frontend", "feature": "a-great-feature" } ``` ###### Span objects A span object represents the finest duration resolution of a test run, for example, the duration of an individual database query within a test. **Example:** ```js { "section": "sql", "start_at": 347611.734956, "end_at": 347611.735647, "duration": 0.0006910000229254365, "detail": { ... } } ``` ###### Detail objects Detail objects contain additional information about the span. **HTTP Example:** ```js { "detail": { "method": "POST", "url": "https://example.com", "lib": "curl" } } ``` **SQL Example:** ```js { "detail": { "query": "SELECT * FROM ..." } } ``` **Annotation Example:** ```js { "detail": { "content": "Visiting login" } } ``` ###### Test result format The following JSON code block shows an example of how your JSON test results should be formatted, so that these results can be successfully uploaded to Test Engine.
```json [ { "id": "95f7e024-9e0a-450f-bc64-9edb62d43fa10", "scope": "Analytics::Upload associations", "name": "fails", "location": "./spec/models/analytics/upload_spec.rb:24", "file_name": "./spec/models/analytics/upload_spec.rb", "result": "failed", "failure_reason": "Failure/Error: expect(true).to eq false", "failure_expanded": [], "history": { "start_at": 347611.724809, "end_at": 347612.451041, "duration": 0.726232000044547, "children": [ { "section": "http", "start_at": 347611.734956, "end_at": 347611.735647, "duration": 0.0006910000229254365, "detail": { "method": "POST", "url": "https://example.com", "lib": "curl" } } ] } }, { "id": "56f6e013-8e9a-340f-bc53-8edb51d32fa09", "scope": "Analytics::Upload associations", "name": "passes", "location": "./spec/models/analytics/upload_spec.rb:56", "file_name": "./spec/models/analytics/upload_spec.rb", "result": "passed", "history": { "start_at": 347611.724809, "end_at": 347612.451041, "duration": 0.726232000044547, "children": [ { "section": "http", "start_at": 347611.734956, "end_at": 347611.735647, "duration": 0.0006910000229254365, "detail": { "method": "GET", "url": "https://example.com", "lib": "curl" } } ] } } ] ``` --- ### Writing your own collectors URL: https://buildkite.com/docs/test-engine/test-collection/your-own-collectors #### Your own collectors Test Engine integrates directly with your test runner to provide in-depth information about your tests (including spans) in real time. If you're interested in developing your own fully-integrated Buildkite test collector for specific test runners, have a look at the source code for Buildkite's own [Ruby test collector](https://github.com/buildkite/test-collector-ruby) on GitHub, which can collect test data from RSpec and minitest test runners. The source code for this test collector provides details on how test data is packaged and sent to Test Engine. 
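If you'd rather script a one-off upload than write a full collector, test results in the JSON format above can be posted to the Test Engine upload API. The following Ruby sketch builds such a request. The endpoint is the `https://analytics-api.buildkite.com/v1/uploads` URL referenced in the CI environments documentation; the payload keys (`format`, `run_env`, `data`) and the token header shown here are assumptions based on the [Importing JSON](/docs/test-engine/test-collection/importing-json) documentation, not guaranteed details:

```ruby
require "json"
require "net/http"
require "uri"

# Sketch only: builds (but does not send) an upload request for JSON test
# results. The payload keys and Authorization header format are assumptions;
# see the Importing JSON docs for the authoritative request shape.
def build_upload_request(token:, run_env:, results:)
  uri = URI("https://analytics-api.buildkite.com/v1/uploads")
  req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
  req["Authorization"] = %(Token token="#{token}")
  req.body = JSON.generate(
    format: "json",   # declares the payload as JSON test results
    run_env: run_env, # run environment keys, e.g. { "key" => ..., "branch" => ... }
    data: results     # array of test result objects in the format above
  )
  req
end

# To send the request:
#   uri = req.uri
#   Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
```

This keeps request construction separate from sending, so you can inspect or log the payload before anything leaves your machine.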
--- ### CI environment variables URL: https://buildkite.com/docs/test-engine/test-collection/ci-environments #### CI environments Buildkite Test Engine collectors automatically detect common continuous integration (CI) environments. If available, test collectors gather information about your test runs, such as branch names and build IDs. Test collectors gather information from the following CI environments: - [Buildkite](/docs/test-engine/test-collection/ci-environments#buildkite) - [CircleCI](/docs/test-engine/test-collection/ci-environments#circleci) - [GitHub Actions](/docs/test-engine/test-collection/ci-environments#github-actions) If you run test collectors inside [containers](/docs/test-engine/test-collection/ci-environments#containers-and-test-collectors) or use another CI system, you must set variables to report your CI details to Buildkite. If you're not using a test collector, see [Importing JSON](/docs/test-engine/test-collection/importing-json) and [Importing JUnit XML](/docs/test-engine/test-collection/importing-junit-xml) to learn how to provide run environment data. ##### Run environment ###### Required - `run_env[key]`: The identifier of a run, which may be the same across multiple uploads; often the build ID. ###### Recommended If you're manually providing environment variables, we strongly recommend setting the following variables: - `run_env[branch]`: Sends the branch or reference for this build, enabling you to filter data by branch. - `run_env[commit_sha]`: Sends the commit hash for the head of the branch, enabling automatic flaky test detection in your builds. - `run_env[message]`: Forwards the commit message for the head of the branch, helping you identify different runs more easily. - `run_env[url]`: Provides the URL for the build on your CI provider, giving you a handy link back to the CI build. 
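Put together, the required and recommended keys above form a `run_env` object like the following (all values are illustrative):

```json
{
  "run_env": {
    "key": "f62e68f3-0882-4ee1-93b6-805eff2441e4",
    "branch": "main",
    "commit_sha": "83d5d07b9b4f2c6c11a35dfd9e5b59b2b2a9a1c3",
    "message": "Fix login redirect",
    "url": "https://ci.example.com/builds/4242"
  }
}
```

Because `run_env[key]` identifies the run and may be repeated across multiple uploads, a value that stays stable for the whole run, such as a build ID, works well.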
##### Containers and test collectors

If you're using containers within your CI system, the environment variables used by test collectors may not be exposed to those containers by default. Make sure to export your CI environment's variables and your Buildkite API token to your containerized builds and tests.

For example, by default Docker does not receive the host's environment variables. To pass them through to the Docker container, use the `--env` option:

```
docker run \
  --env BUILDKITE_ANALYTICS_TOKEN \
  --env BUILDKITE_BUILD_ID \
  --env BUILDKITE_BUILD_NUMBER \
  --env BUILDKITE_JOB_ID \
  --env BUILDKITE_BRANCH \
  --env BUILDKITE_COMMIT \
  --env BUILDKITE_MESSAGE \
  --env BUILDKITE_BUILD_URL \
  bundle exec rspec
```

Review the following sections for the environment variables expected by test collectors.

##### Buildkite

During Buildkite pipeline runs, test collectors upload information from Buildkite's built-in environment variables, such as `BUILDKITE_BUILD_ID`, `BUILDKITE_BUILD_NUMBER`, `BUILDKITE_JOB_ID`, `BUILDKITE_BRANCH`, `BUILDKITE_COMMIT`, `BUILDKITE_MESSAGE`, and `BUILDKITE_BUILD_URL`, and test importers use the corresponding field names.

##### CircleCI

During CircleCI workflow runs, test collectors upload information from CircleCI's built-in environment variables, and test importers use the corresponding field names.

##### GitHub Actions

During GitHub Actions workflow runs, test collectors upload information from GitHub Actions' built-in environment variables, and test importers use the corresponding field names.

##### Other CI providers

If you're using other CI providers (or [containers](#containers-and-test-collectors)), set environment variables for test collectors to gather information about your builds and tests. If you don't set these environment variables, Test Engine lacks the details needed to produce useful reports. Each environment variable corresponds to a `run_env` key in the payload sent to `https://analytics-api.buildkite.com/v1/uploads`.
Read [Importing JSON](/docs/test-engine/test-collection/importing-json) to learn how these keys are used to make API calls.

---

### Getting started with a Ruby project

URL: https://buildkite.com/docs/test-engine/tutorials/setting-up-a-ruby-project

#### Setting up a Ruby project for Test Engine

This tutorial helps you set up a Ruby project with Buildkite Test Engine, guiding you through creating a new Test Engine [test suite](/docs/test-engine/test-suites) and configuring a simple Ruby project, which you'll clone and run to generate test results that are collected and reported through this test suite. Note that Buildkite Test Engine supports [other languages and test runners](/docs/test-engine/test-collection) too.

##### Before you start

To complete this tutorial, you'll need:

- A Buildkite account—if you don't have one already, [create a free personal account](https://buildkite.com/signup).
- [Git](https://git-scm.com/downloads), to clone the Ruby project example.
- [Ruby](https://www.ruby-lang.org/en/downloads)—macOS users can also install Ruby with [Homebrew](https://formulae.brew.sh/formula/ruby).

    * Once Ruby is installed, open a terminal or command prompt, and install the [RSpec testing framework](https://github.com/rspec/rspec-core?tab=readme-ov-file#rspec-core--) using the following command:

        ```bash
        gem install rspec
        ```

##### Create a test suite

To begin creating a new test suite:

1. Select **Test Suites** in the global navigation to access the **Test Suites** page.
1. Select **New test suite**.
1. On the **Identify, track and fix problematic tests** page, enter an optional **Application name**, for example, `RSpec test suites`.
1. Enter a mandatory **Test suite name**, for example, `My RSpec example test suite`.
1.
Enter the **Default branch name**, which is the default branch that Test Engine shows trends for (and can be changed at any time), for example (and usually), `main`.
1. Enter an optional **Suite emoji** using [emoji syntax](/docs/pipelines/emojis), for example, `:ruby:` for a ruby emoji representing the Ruby language.
1. Select **Set up suite**.
1. If your Buildkite organization has the [teams feature](/docs/test-engine/permissions) enabled, select the relevant **Teams** to be granted access to this test suite, followed by **Continue**.

The new test suite's **Complete test suite setup** page is displayed, prompting you to configure your test collector within your development project. Keep this web page open.

##### Clone the Ruby example test suite project

Then, clone the Ruby example test suite project:

1. Open a terminal or command prompt, and run the following command:

    ```bash
    git clone git@github.com:buildkite/ruby-example-test-suite.git
    ```

1. Change directory (`cd`) into the `ruby-example-test-suite` directory.
1. (Optional) Run the following `rspec` command to check that the RSpec test runner executes successfully:

    ```bash
    rspec
    ```

    After about a minute, the command output should display something similar to:

    ```bash
    disabled tests
    ......................

    Finished in 1 minute 4.06 seconds (files took 0.29045 seconds to load)
    22 examples, 0 failures
    ```

##### Configure your Ruby project with its test collector

Next, configure your Ruby project's RSpec test runner with its Buildkite test collector:

1. Install the [`buildkite-test_collector`](https://rubygems.org/gems/buildkite-test_collector) gem by running the following `gem` command:

    ```bash
    gem install buildkite-test_collector
    ```

1.
Add the following lines of code to your project's `spec_helper.rb` file:

    ```ruby
    require 'buildkite/test_collector'

    Buildkite::TestCollector.configure(hook: :rspec)
    ```

    The top of this file should look similar to:

    ```ruby
    require 'yaml'
    require 'json'
    require 'buildkite/test_collector'

    Buildkite::TestCollector.configure(hook: :rspec)

    begin
      skip_data = File.read('skipped.json')
      skip = JSON.parse(skip_data)
    rescue
      skip = []
    end

    ...
    ```

##### Run RSpec (again) to send your test data to Test Engine

1. Back on the **Complete test suite setup** page, copy the **Test Suite API token** value.
1. At your terminal/command prompt, run the following `rspec` command (with additional environment variables) to execute the RSpec test runner and send its execution data back to your Test Engine test suite:

    ```bash
    BUILDKITE_ANALYTICS_TOKEN=<token-value> BUILDKITE_ANALYTICS_MESSAGE="My first test run" rspec
    ```

    where:

    * `<token-value>` is the **Test Suite API token** value you copied in the previous step. This value can typically be pasted without any quotation marks.
    * `BUILDKITE_ANALYTICS_MESSAGE` is an environment variable that is usually used for a source control (Git) commit message, and is presented in a run of your Buildkite test suite. In this scenario, however, the variable and its value are being used to describe the test run (or build). Learn more about [these types of environment variables](/docs/test-engine/test-collection/ci-environments#other-ci-providers), which are available to _other CI/CD providers_ (that is, those other than [Buildkite Pipelines](/docs/test-engine/test-collection/ci-environments#buildkite), [CircleCI](/docs/test-engine/test-collection/ci-environments#circleci) or [GitHub Actions](/docs/test-engine/test-collection/ci-environments#github-actions)), as well as [containers](/docs/test-engine/test-collection/ci-environments#containers-and-test-collectors), and manually run builds such as the `rspec` command above.
The command output should display something similar to: ```bash disabled tests ...................... Finished in 1 minute 4.06 seconds (files took 0.25227 seconds to load) 22 examples, 0 failures ``` 1. Back in Test Engine, your test suite should now be displayed, showing its **Runs** tab, with a summary of details from the last execution of the RSpec test runner in the previous step. The final result should indicate **My first test run** (obtained from the value of `BUILDKITE_ANALYTICS_MESSAGE` in the previous step) with a status of **PASSED**. If this page indicates **Still processing data** after a while, refresh your browser page to display the results. If the status indicates **PENDING**, wait a little longer until the final result appears. ##### Next steps That's it! You've successfully created a test suite, configured your Ruby project with a test collector, and executed the project's test runner to send its test data to your test suite. 🎉 Learn more about: - How to configure [test collection](/docs/test-engine/test-collection) for other test runners. - [CI environment variables](/docs/test-engine/test-collection/ci-environments) that test collectors (and other test collection mechanisms) provide to your Buildkite test suites, when your test runs are automated through CI/CD. - How to work with [test suites](/docs/test-engine/test-suites) in Buildkite Test Engine. --- ### Overview URL: https://buildkite.com/docs/test-engine/workflows #### Workflows overview Workflows surface valuable qualitative information about the [tests](/docs/test-engine/glossary#test) in your [test suite](/docs/test-engine/test-suites), which can be difficult to surmise from raw execution data. A workflow defines a process which allows you to create custom mappings between _observations_ that Test Engine makes about your test suite, and the [_actions_](/docs/test-engine/glossary#action) you'd like to take from them. 
This means that observations about the health and performance of your tests (for example, a test is flaky) can generate automatic actions (for example, label the test as flaky, send a notification).

Workflows are composed of a single [monitor](/docs/test-engine/workflows/monitors), optional [tag filters](/docs/test-engine/workflows/monitors#tag-filters), and a number of different [actions](/docs/test-engine/workflows/actions). The actionable insights generated by workflows can be used to improve your test suite, for example by [reducing the number of flaky tests](/docs/test-engine/reduce-flaky-tests).

##### How they work

A single monitor watches over all the tests in your test suite (except for those excluded by filters) and generates individual _alarm_ and _recover_ events for each test, which then trigger the associated [alarm and recover actions](/docs/test-engine/workflows/actions).

_Alarm_ events are reported by the monitor when the alarm conditions are met for a given test. This could be an observation of a single special occurrence (for example, a test reports both a pass and a fail result on the same commit SHA), or a cumulative score tracked over time exceeding a threshold (for example, the transition count score for a test exceeds 0.05).

_Recover_ events are [hysteretic](https://en.wikipedia.org/wiki/Hysteresis), meaning that a recover event can only be reported on a test that has a previous alarm event. In such a situation, when the monitor detects that the test has met the recover conditions, a recover event is reported. Depending on the monitor type, the alarm and recover conditions can be configured.

Actions are performed when the recover or alarm event is reported.
Actions are user-defined operations that are triggered automatically, and can operate within the Test Engine system (that is, changing a test's [state](/docs/test-engine/glossary#test-state) or [label](/docs/test-engine/test-suites/labels)) or externally to Test Engine (for example, sending a Slack notification about the test). Repeated occurrences of the test meeting the alarm/recover conditions do not retrigger the corresponding actions.

##### When workflows run

Workflow monitors are _event-driven_, not scheduled. They perform evaluations each time new test execution data is ingested into Test Engine—there is no cron or periodic schedule involved. When a test run is completed and its results are uploaded, the relevant monitors evaluate the incoming data against their configured conditions and thresholds.

This means:

- **Alarm and recover events are generated in response to test executions.** If no tests are running, no workflow events are produced.
- **Threshold changes take effect on the next test execution.** For example, if you adjust a [probabilistic flakiness](/docs/test-engine/workflows/monitors#probabilistic-flakiness) threshold or a [transition count](/docs/test-engine/workflows/monitors#transition-count) alarm value, the updated threshold is applied the next time test data is ingested for a matching test—not at a scheduled interval.
- **The frequency of workflow evaluation matches the frequency of your test runs.** Workflows for a test suite that runs on every commit will perform evaluations more often than one that runs nightly.

##### Rate limit

Each workflow monitor has a rate limit of 500 events per minute across alarm and recover events. If a workflow exceeds this limit within a one-minute window, no new alarm or recover events will trigger their configured [actions](/docs/test-engine/workflows/actions) for the remainder of that minute. Event processing resumes in the following minute when usage falls below the limit.
To avoid hitting the limit, you can refine your workflow using [tag filters](/docs/test-engine/workflows/monitors#tag-filters) or adjust monitor thresholds.

> 🚧
> Currently, there is no indicator when a workflow monitor is rate limited. To check if your workflow is triggering events as expected, go to your test suite and select **Workflows**. In the **Events** section of the workflow, select **view** to see the list of triggered events.

---

### Monitors

URL: https://buildkite.com/docs/test-engine/workflows/monitors

#### Monitors

A workflow is configured with a _monitor_, which is a specialized type of observer of your [test suite](/docs/test-engine/test-suites). A monitor observes test [executions](/docs/test-engine/glossary#execution), and surfaces information and trends about each test's performance and reliability over time.

Workflows are subject to a rate limit. See [Rate limit](/docs/test-engine/workflows#rate-limit) for more information.

Test Engine supports the following types of monitors:

- [Transition count](#transition-count)
- [Passed on retry](#passed-on-retry)
- [Probabilistic flakiness](#probabilistic-flakiness)
- [New test](#new-test)
- [Duration threshold](#duration-threshold)

You can reduce the number of test executions that a monitor receives using [tag filters](#tag-filters).

##### Transition count

A transition is a change from passing to failing, or failing to passing, in a sequence of results for a test over time. The transition count monitor keeps track of how many times the result changes over the configured window, and calculates a score based on this. A low transition score means that the test is either consistently passing or consistently failing. A high transition count for a test indicates flakiness, as the test result is changing very frequently between **pass** and **fail**.

For example:

- Over a window of 5, a test result pattern of `FFFFF` will have a score of 0.
- Over a window of 5, a test result pattern of `PPFFF` will have a score of 0.2.
- Over a window of 5, a test result pattern of `PFPFF` will have a score of 0.4.
- Over a window of 5, a test result pattern of `PFFFFF` will have a score of 0 (the oldest result `P` has fallen outside the evaluation window, and so is ignored).
- Over a window of 5, a test result pattern of `PF` will have a score of 0.2 (the score is always calculated based on the window, not the number of results).

In addition to the window, the transition counts that cause the [_alarm_ and _recover_ actions](/docs/test-engine/workflows/actions) to be triggered are configurable.

A branch must be configured for the transition count monitor, and it is recommended to set this to the value of the main branch (for example, `main`, `master`, `trunk`). Configuring a branch is necessary so that transitions from feature branches are ignored in the accumulation of the transition count, as failures and passes on feature branches are a byproduct of a standard development workflow, and do not indicate test instability.

If you're unsure what the most suitable monitor is for your test suite, use the transition count monitor on your test suite's default branch. This monitor will likely work without any pipeline configuration changes (for example, setting up job retries), and is more resilient to "real world" events (for example, infrastructure-related events) that affect test results.

##### Passed on retry

_Passed on retry_ refers to a test that both passes and fails on the same git commit SHA. When this occurs, the _alarm_ [actions](/docs/test-engine/workflows/actions) are triggered. If the monitor then does not encounter passed on retry events over the next seven days or 100 executions of the given test (whichever is reached first), the _recover_ actions for your workflow are triggered.
Because this monitor relies on inconsistent results on the same commit SHA, you'll need to set up automatic retries on your test pipeline. You can do this with Buildkite Pipelines [retry jobs](/docs/pipelines/configure/retry) or by setting the [retry count environment variable](/docs/test-engine/bktec/configuring#BUILDKITE_TEST_ENGINE_RETRY_COUNT) in Buildkite Test Engine Client.

The order and number of pass and fail results don't change the reporting of the passed on retry event, as long as there is at least one of each. Other test results (for example, null, skipped, pending) are ignored in the detection of passed on retry events.

> 📘
> This monitor is created by default for all test suites.

##### Probabilistic flakiness

This monitor tracks the [probabilistic flakiness score](https://engineering.fb.com/2020/12/10/developer-tools/probabilistic-flakiness/) (PFS) of each test. The PFS was developed by [Meta](https://www.meta.com/), and uses a Bayesian statistical model to derive the probability that a test will become flaky on its next execution. The PFS model takes into account the current result of the test and the test's historical execution results.

> 📘
> The probabilistic flakiness monitor is only available on [Enterprise](https://buildkite.com/pricing) plans.

The probabilistic flakiness monitor is best suited to large and complex test suites, where the volume and noise of test data prevent a simpler flaky test monitor from being successful. As the PFS is a continuous metric, these scores provide a smarter prioritization metric for larger organizations.

##### New test

> 📘
> The new test monitor is in beta. The _recover_ actions are not yet available.

The new test monitor triggers when a test executes for the first time and becomes a [managed test](/docs/test-engine/glossary#managed-test). A test is considered new only if the combination of its scope and name is unique and has not previously existed.
If a test executes for the first time but its scope and name match an existing managed test, the monitor does not trigger. You can configure the new test monitor to trigger actions that help track and manage the performance and reliability of new tests to prevent flaky tests from being introduced to your test suite. ##### Duration threshold The duration threshold monitor tracks how long individual tests take to run. The monitor triggers when the aggregated duration over a sliding window crosses a configured threshold. Use this monitor to identify tests that exceed an acceptable runtime. The monitor maintains a rolling window of recent [execution](/docs/test-engine/glossary#execution) durations for each test. The monitor calculates the aggregated duration across the window and compares it to the configured thresholds: - When the aggregated duration is greater than or equal to the **alarm threshold**, an _alarm_ [action](/docs/test-engine/workflows/actions) is triggered. - When the aggregated duration is less than or equal to the **recover threshold**, a _recover_ action is triggered. ###### Configuration Configure the following when setting up a duration threshold monitor: - **Aggregation function**: The function used to aggregate execution durations within the window. Only `p50` (median) is currently supported. - **Evaluation window**: The number of most recent executions used to calculate the aggregated duration. Accepts integers up to 100. Default: 5. - **Alarm threshold**: The duration, in seconds, that triggers an _alarm_ action. - **Recover threshold**: The duration, in seconds, that triggers a _recover_ action. Must be lower than the **alarm threshold**. Like the [transition count](#transition-count) monitor, the duration threshold monitor is most useful when pointed at a single branch. Configure a branch [tag filter](#tag-filters) set to your default branch so that variance from feature branch executions does not affect the monitor. 
Without a branch filter, longer-running executions on feature branches can inflate the aggregated duration and cause spurious alarms.

##### Tag filters

Tag filters reduce the set of [execution](/docs/test-engine/glossary#execution) data that goes into a monitor, so that you can ignore lower-relevancy data and produce better insights, or take different [actions](/docs/test-engine/workflows/actions) based on different types of test executions. This means that you can set up custom actions and monitors based on tag values, for example, sending different notifications based on different team tag values, or using tags to segment the different types of tests (for example, feature, unit) and monitor on different thresholds.

Tag filters are optional, and you can configure up to four of them per workflow. Tag filter values support the following matching operators:

- **is**
- **is not**
- **starts with**

If you haven't set up tags for test execution, see [Tags](/docs/test-engine/test-suites/tags) in the [Test suites](/docs/test-engine/test-suites) documentation for details.

###### Default branch filter

By default, a filter for `scm.branch` is added, whose value is set to your default branch. This means that test instability on feature or development branches, or both, does not affect the reliability of your test suite.

You may want to modify or remove this default branch filter if your organization meets any of the following criteria:

- Your organization is interested in test results on a specific branch that is not your default branch. For example, your organization uses test selection and full test builds are run on a specific branch.
- Your organization uses merge queues, and is interested in branches following the merge queue naming convention.
- Your organization is interested in monitoring all branches.

> 📘
> Remove the branch filter if you want to monitor all branches. The branch filter must be set to a value if you're using the transition count monitor.
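All of the monitor types above share the alarm/recover lifecycle described in [How they work](/docs/test-engine/workflows#how-they-work): an alarm event fires when the alarm conditions are first met, a recover event can only follow a prior alarm, and repeated occurrences don't retrigger actions. That behavior can be sketched as a small state machine (an illustration only, not Buildkite's implementation):

```ruby
# Illustrative sketch: per-test alarm/recover hysteresis. A monitor reports
# an alarm event when its alarm conditions are first met, reports a recover
# event only for a test with a previous alarm event, and does not retrigger
# actions on repeated occurrences of the same condition.
class MonitorState
  def initialize
    @alarmed = false
  end

  # Returns :alarm, :recover, or nil for each new evaluation.
  def evaluate(alarm_met:, recover_met:)
    if alarm_met && !@alarmed
      @alarmed = true
      :alarm
    elsif recover_met && @alarmed
      @alarmed = false
      :recover
    end
  end
end
```

For example, a test whose score crosses the alarm threshold reports one alarm event; further evaluations above the threshold report nothing until the recover conditions are met.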
---

### Actions

URL: https://buildkite.com/docs/test-engine/workflows/actions

#### Alarm and recover actions

When conditions from a [monitor](/docs/test-engine/workflows/monitors) in your test suite's [workflow](/docs/test-engine/workflows) trigger an [_alarm_ or _recover_ action event](/docs/test-engine/workflows#how-they-work), there are several automatic actions that Test Engine can perform.

Workflows are subject to a rate limit. See [Rate limit](/docs/test-engine/workflows#rate-limit) for more information.

##### Add or remove label

The **Add label** or **Remove label** action lets you respectively add a label to, or remove a label from, a test. These two actions are often set as a pair, for example, an alarm action will label a test **flaky**, and the corresponding recover action will remove the **flaky** label.

##### Change state

The **Change state** action lets you [change the state](/docs/test-engine/test-suites/test-state-and-quarantine#lifecycle-states) (enabled, muted, skipped) of a test. For example, you can set the alarm action to change the state of a test to "muted", and the recover action to change the state of a test to "enabled", which will allow you to [run builds more reliably](/docs/test-engine/speed-up-builds-with-bktec#increase-build-reliability-with-test-states).

##### Send webhook notification

The **Send webhook notification** action lets you send JSON payloads through HTTP requests to specific URL endpoints of third-party applications, letting these applications react to activities on your Test Engine workflows as they happen. The content of the payloads differs depending on the workflow monitor configured.
###### Transition count ```json { "subject": { "type": "test", "test_id": "08b99391-8caa-88c7-8d45-98c6fd3f94b7", "test_full_name": "Enumerated spec 1", "test_location": "./spec/enumerated_spec.rb:22", "test_url": "http://buildkite.localhost/organizations/buildkite/analytics/suites/te-sample/tests/08b99391-8caa-88c7-8d45-98c6fd3f94b7" }, "workflow_id": "0198a11d-9486-7ac5-a87a-d55d2642cd3f", "workflow_url": "http://buildkite.localhost/organizations/buildkite/analytics/suites/te-sample/workflows/0198a11d-9486-7ac5-a87a-d55d2642cd3f", "event": "workflow.alarm", "workflow_event": { "type": "transition_count" } } ``` ###### Passed on retry ```json { "type": "passed_on_retry", "execution_id": "01996fcb-3dba-72dd-bdc0-9bf9e06bf4fe", "timestamp": "2025-09-22T05:00:14.650Z", "execution_run_id": "01996fcb-3dba-7b9a-9c53-b1ffb0960f31", "execution_commit_sha": "abc123", "execution_branch": "main", "window_days": 7, "window_executions": 100, "commit_count_threshold_high": 2, "commit_count_threshold_low": 0, "executions_in_window": 42, "days_in_window": 6, "commit_count": 1 } ``` ###### Probabilistic flakiness ```json { "type": "probabilistic_flakiness", "timestamp": "2025-09-22T05:00:16.506Z", "branch_filter": "main", "score": 0.2, "score_threshold_high": 0.1, "score_threshold_low": 0.08 } ``` ##### Send Slack notification The **Send Slack notification** action lets you send a Slack notification about a test. The Slack notification will be sent to a specified Slack channel in a connected Slack workspace, and the message supports the use of [mrkdwn](https://docs.slack.dev/messaging/formatting-message-text/#formatting) as a presentation format. The message also supports interpolation of workflow event information used in [webhook notifications](#send-webhook-notification). A full variable list is also available in the Test Engine **Workflows** interface for your test suite. 
To use this feature, ensure your Buildkite organization administrator has [connected your Buildkite organization to your Slack workspace](/docs/platform/integrations/slack-workspace).

##### Creating a Linear issue

The **Create Linear issue** action lets you create a Linear issue about a test. The issue will be created for the specified Linear team, with a custom title and description. Linear issues created for a test are visible in the **Issues** tab on the individual test view page, and the test's status is synchronized with Linear so that you know if someone is working on the issue.

The title and description fields support [Linear-flavoured Markdown](https://linear.app/docs/editor#text-styling). The message also supports interpolation of workflow event information used in [webhook notifications](#send-webhook-notification), and a full variable list is available in Test Engine's **Workflows** interface for your test suite.

To use this feature, ensure your Buildkite organization administrator has [connected your Buildkite organization to your Linear account](/docs/test-engine/integrations/linear).

---

### Installing and using the client

URL: https://buildkite.com/docs/test-engine/bktec/installing-and-using-the-client

#### Installing and using the client

This page provides instructions on how to install the Test Engine Client ([bktec](https://github.com/buildkite/test-engine-client)) using [installers](#installation) provided by Buildkite, as well as how to [configure and use bktec](#using-bktec).

##### Installation

bktec is supported on Linux ([Debian](#installation-debian) and [Red Hat](#installation-red-hat)) and [macOS](#installation-macos), as well as in [Docker](#installation-docker), for 64-bit ARM and AMD architectures. You can install the client using the following installers.
If you need to install this tool on a system without an installer listed below, you'll need to perform a manual installation using one of the binaries from [Test Engine Client's releases page](https://github.com/buildkite/test-engine-client/releases/latest). Once you have the binary, make it executable in your pipeline. ###### Debian 1. Ensure you have curl and gpg installed first: ```shell apt update && apt install curl gpg -y ``` 1. Install the registry signing key: ```shell curl -fsSL "https://packages.buildkite.com/buildkite/test-engine-client-deb/gpgkey" | gpg --dearmor -o /etc/apt/keyrings/buildkite_test-engine-client-deb-archive-keyring.gpg ``` 1. Configure the registry: ```shell echo -e "deb [signed-by=/etc/apt/keyrings/buildkite_test-engine-client-deb-archive-keyring.gpg] https://packages.buildkite.com/buildkite/test-engine-client-deb/any/ any main\ndeb-src [signed-by=/etc/apt/keyrings/buildkite_test-engine-client-deb-archive-keyring.gpg] https://packages.buildkite.com/buildkite/test-engine-client-deb/any/ any main" > /etc/apt/sources.list.d/buildkite-buildkite-test-engine-client-deb.list ``` 1. Install the package: ```shell apt update && apt install bktec ``` ###### Red Hat 1. Configure the registry: ```shell echo -e "[test-engine-client-rpm]\nname=Test Engine Client - rpm\nbaseurl=https://packages.buildkite.com/buildkite/test-engine-client-rpm/rpm_any/rpm_any/\$basearch\nenabled=1\nrepo_gpgcheck=1\ngpgcheck=0\ngpgkey=https://packages.buildkite.com/buildkite/test-engine-client-rpm/gpgkey\npriority=1" > /etc/yum.repos.d/test-engine-client-rpm.repo ``` 2. Install the package: ```shell dnf install -y bktec ``` ###### macOS The Test Engine Client can be installed using [Homebrew](https://brew.sh) with [Buildkite tap formulae](https://github.com/buildkite/homebrew-buildkite). 
To install, run: ```shell brew tap buildkite/buildkite && brew install buildkite/buildkite/bktec ``` ###### Docker You can run the Test Engine Client inside a Docker container using the official image in [Docker Hub](https://hub.docker.com/r/buildkite/test-engine-client/tags). To run the client using Docker: ```shell docker run buildkite/test-engine-client ``` Or, to add the Test Engine Client binary to your Docker image, include the following in your Dockerfile: ```dockerfile COPY --from=buildkite/test-engine-client /usr/local/bin/bktec /usr/local/bin/bktec ``` ##### Dependencies bktec relies on execution timing data captured by the test collectors from previous builds to partition your tests evenly across your agents. Therefore, you will need to configure the [test collector](/docs/test-engine/test-collection) for your test framework. ##### Using bktec Buildkite maintains its open source Test Engine Client ([bktec](https://github.com/buildkite/test-engine-client)) tool. Currently, the bktec tool supports the [RSpec](/docs/test-engine/test-collection/ruby-collectors#rspec-collector), [Jest](/docs/test-engine/test-collection/javascript-collectors#configure-the-test-framework-jest), [Cypress](/docs/test-engine/test-collection/javascript-collectors#configure-the-test-framework-cypress), [Playwright](/docs/test-engine/test-collection/javascript-collectors#configure-the-test-framework-playwright), [Pytest](/docs/test-engine/test-collection/python-collectors#pytest-collector), pytest-pants, [Go](/docs/test-engine/test-collection/golang-collectors), and Cucumber testing frameworks. If your testing framework is not supported, get in touch through support@buildkite.com or submit a pull request. Once you have [installed the bktec binary](#installation) and it is executable in your pipeline, you'll need to [configure some additional environment variables](#using-bktec-configure-environment-variables) for bktec to function.
You can then [update your pipeline step](#using-bktec-update-the-pipeline-step) to call `bktec run` instead of calling RSpec to run your tests. ###### Configure environment variables bktec uses a number of [predefined](#predefined-environment-variables) and [mandatory](#mandatory-environment-variables) environment variables, as well as several optional ones for either [RSpec](#optional-rspec-environment-variables) or [Jest](#optional-jest-environment-variables). ###### Predefined environment variables By default, the predefined environment variables are available to your testing environment and do not need any further configuration. If, however, you use Docker or some other type of containerization tool to run your tests, and you wish to use these predefined environment variables in these tests, you may need to expose these environment variables to your containers. ###### Mandatory environment variables bktec requires several mandatory environment variables to be set. ###### Optional RSpec environment variables A number of optional RSpec environment variables can also be used to configure bktec's behavior. ###### Optional Jest environment variables A number of optional Jest environment variables can also be used to configure bktec's behavior. ###### Update the pipeline step With the environment variables configured, you can now update your pipeline step to run bktec instead of running RSpec or Jest directly. The following example pipeline step demonstrates how to partition your RSpec test suite across 10 nodes.
```yaml steps: - name: "RSpec" command: bktec run parallelism: 10 env: BUILDKITE_TEST_ENGINE_API_ACCESS_TOKEN: your-secret-token BUILDKITE_TEST_ENGINE_RESULT_PATH: tmp/rspec-result.json BUILDKITE_TEST_ENGINE_SUITE_SLUG: my-suite BUILDKITE_TEST_ENGINE_TEST_RUNNER: rspec ``` ##### API rate limits There is a limit on the number of API requests that bktec can make to the server: 10,000 requests per minute per Buildkite organization. When this limit is reached, bktec pauses and waits until the next minute before retrying the request. This rate limit is independent of the [REST API rate limits](/docs/apis/rest-api/limits), and only applies to the Test Engine Client's interactions with the Test Splitting API. ##### Dynamic parallelism Usually the `parallelism` value is hard-coded in the bktec pipeline step. However, from version 2.0.0, it is possible to run bktec with a dynamic `parallelism` value based on a target time for the test run. A common use case for this is test selection, where feature branch builds only run a subset of tests relevant to the changes being made. Dynamic parallelism is supported using the `bktec plan` command. When used with the `--max-parallelism` and `--target-time` flags (see the list of [bktec plan flags](#dynamic-parallelism-bktec-plan-flags) for more information), bktec generates a test plan and estimates the `parallelism` required to achieve the specified target build time. bktec then [uploads a dynamic pipeline](/docs/agent/cli/reference/pipeline) using the specified pipeline template. In the following example, the `test-selection.sh` script is assumed to generate a list of test files, one per line, relevant to the changes in a feature branch.
```yaml steps: - name: "Test selection" command: test-selection.sh > selected-files.txt - wait: ~ - name: "Dynamic pipeline" key: "dynamic-pipeline" command: bktec plan --max-parallelism 10 --target-time 2m --files selected-files.txt --pipeline-upload .buildkite/dynamic-pipeline-template.yml ``` In this example pipeline, bktec uploads a dynamic pipeline using `.buildkite/dynamic-pipeline-template.yml` by invoking `buildkite-agent pipeline upload`. Learn more about the [bktec plan additional environment variables](#dynamic-parallelism-bktec-plan-additional-environment-variables) generated during pipeline uploads. These variables can be used in the template file provided to the `--pipeline-upload` flag, where you can use [environment variable substitution](/docs/agent/cli/reference/pipeline#environment-variable-substitution) to obtain their values. ```yaml steps: - command: "bktec run --plan-identifier ${BUILDKITE_TEST_ENGINE_PLAN_IDENTIFIER}" name: "bktec run" depends_on: "dynamic-pipeline" parallelism: ${BUILDKITE_TEST_ENGINE_PARALLELISM} ``` ###### bktec plan flags The `bktec plan` command supports the following flags, which control the behavior of the dynamic parallelism test plan. Each flag's value can alternatively be supplied using an environment variable. | `--max-parallelism` | The maximum allowed parallelism for a dynamic parallelism test plan. **Environment variable:** `$BUILDKITE_TEST_ENGINE_MAX_PARALLELISM` | `--target-time` | Target duration for each node, for example, `2m30s`. The test planner will attempt to split the test plan into buckets of equal duration and calculate the optimum parallelism to achieve this, up to the value supplied to `--max-parallelism`. **Environment variable:** `$BUILDKITE_TEST_ENGINE_TARGET_TIME` | `--files` | Path to a file containing a newline-separated list of test file names to be executed.
**Environment variable:** `$BUILDKITE_TEST_ENGINE_FILES` ###### bktec plan additional environment variables The `bktec plan` command generates the following additional environment variables when uploading the pipeline. | `BUILDKITE_TEST_ENGINE_PLAN_IDENTIFIER` | The identifier of the test plan generated by `bktec plan`. | `BUILDKITE_TEST_ENGINE_PARALLELISM` | The parallelism estimated by the test planner to achieve the requested target build time. --- ### Email URL: https://buildkite.com/docs/test-engine/notifications/email #### Weekly flaky test summary You can schedule a weekly email summary of the flakiest tests owned by your teams. Visit the **Suite settings** page to create new notifications, or manage existing ones. --- ### Linear URL: https://buildkite.com/docs/test-engine/integrations/linear #### Linear The Linear integration lets you synchronize issues between [Linear](https://linear.app) and Buildkite Test Engine. This integration supports the creation of Linear issues based on [Test Engine workflows](/docs/test-engine/workflows/actions#creating-a-linear-issue). > 📘 > Setting up a workspace requires Buildkite organization administrator permissions. When adding a Linear integration through the [**Add Linear Notification** page](https://buildkite.com/organizations/-/services/linear/new), access for your entire Linear workspace will be authorized, along with all the teams contained within this workspace. You only need to set up this integration once per Linear workspace, after which you can configure actions for any Linear team. ##### Connect Linear 1. Select **Settings** in the global navigation and select **Notification Services** in the left sidebar. 1. Select the **Add** button on **Linear**. 1. Select the **Add to Linear** button: This action redirects you to Linear. 1. Log in to Linear and grant Buildkite permission to access your Linear workspace. 1.
After granting access, you can then configure the [Test Engine workflow Linear action](/docs/test-engine/workflows/actions#creating-a-linear-issue). ##### Privacy policy For details on how Buildkite handles your information, please see Buildkite's [Privacy Policy](https://buildkite.com/about/legal/privacy-policy/). --- ### Usage and billing URL: https://buildkite.com/docs/test-engine/usage-and-billing #### Usage and billing Test Engine is designed to optimize your test suites through the management of your tests. ##### Managed tests Buildkite bills Test Engine customers by the number of _managed tests_. See the [Buildkite Pricing](https://buildkite.com/pricing/) page for plan-level details. Every test that can be uniquely identified by its combination of test suite, scope, and name is a _managed test_. For example, each of the following three tests is a unique managed test: - Test Suite 1 - here.is.scope.one - Login Test name - Test Suite 1 - here.is.another.scope - Login Test name - Test Suite 2 - here.is.scope.one - Login Test name Test Engine performs the following for each managed test: - Tracks its history - Maintains its state (for example, [Enterprise plan](https://buildkite.com/pricing) customers can quarantine tests by disabling them under certain conditions) - Attributes [ownership by team](/docs/test-engine/test-suites/test-ownership) For billing purposes, Buildkite measures usage by calculating the number of managed tests that have executed (run) at least once each day, and then bills based on the 90th percentile of this usage for the month. This billing method ensures that occasional spikes in usage, such as those caused by refactoring, don't result in excessive charges. > 📘 Executed managed tests are only charged once per day > If a specific managed test has run multiple times on a specific day, then it only counts once towards the usage measurement for that day.
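The 90th-percentile calculation described above can be sketched with standard shell tools. This is an illustrative sketch only: the daily counts are made-up numbers, and a nearest-rank percentile is assumed (Buildkite's exact internal method may differ).

```shell
# Hypothetical daily counts of unique managed tests executed over ten days.
daily_counts="1200 1180 1500 1210 1195 1225 1400 1190 1185 1205"

# Nearest-rank 90th percentile: sort the counts and take the value at
# rank ceil(0.9 * N). One-day spikes above that rank don't set the figure.
p90=$(echo "$daily_counts" | tr ' ' '\n' | sort -n |
  awk '{ a[NR] = $1 } END { idx = int(0.9 * NR); if (idx < 0.9 * NR) idx++; print a[idx] }')
echo "$p90"
```

Sorted, the sample counts run from 1180 to 1500; the nearest-rank p90 is the 9th of the 10 values (1400), so the single spike day of 1500 does not drive the bill.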
##### Test executions > 📘 Personal and legacy plans only > This section is only applicable to Buildkite Test Engine customers on the [Personal](https://buildkite.com/pricing/) and paid _legacy_ plans. If you are on the Personal plan, your first 50,000 test executions are free, after which you will need to upgrade to the [Pro or Enterprise](https://buildkite.com/pricing/) plan to continue using Test Engine. For customers on the Pro or Enterprise plan, usage is billed per [managed test](#managed-tests). Customers on legacy paid plans may still be billed per individual test execution, that is, by the _total number of times_ a test was executed (its test execution count). However, this approach is no longer used on current and new Buildkite [Pro or Enterprise](https://buildkite.com/pricing/) plans. Instead, see [Managed tests](#managed-tests) for details about the current billing approach for these plans. You can find the test execution details for a run at the top of the run page, and your organization's [total usage](#usage-page) in Settings. ##### Usage page The [Usage page](https://buildkite.com/organizations/~/usage?product=test_engine) is available on every Buildkite plan, and shows a breakdown of all billable usage for your organization, including managed tests and test executions. The [managed tests usage page](https://buildkite.com/organizations/~/usage/test_engine_managed_tests) graphs the maximum number of unique tests per day over the organization's billing periods. This page includes a breakdown of usage by suite and a CSV download of usage over the period. The [test executions usage page](https://buildkite.com/organizations/~/usage/test_executions) graphs the total executions over the organization's billing periods. This page includes a breakdown of usage by suite and a CSV download of usage over the period.
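The CSV downloads mentioned above lend themselves to quick scripted rollups. The column layout used here (`date,suite,managed_tests`) is hypothetical, so check the header row of your actual export before reusing this:

```shell
# Hypothetical export layout; the real CSV's columns may differ.
cat > usage.csv <<'EOF'
date,suite,managed_tests
2025-09-01,web,1200
2025-09-01,api,800
2025-09-02,web,1250
2025-09-02,api,820
EOF

# Skip the header row, then total the managed-test counts per suite.
awk -F, 'NR > 1 { sum[$2] += $3 } END { for (s in sum) print s "," sum[s] }' usage.csv | sort
```

With the sample data this prints one `suite,total` line per suite, which is handy for spotting which suites dominate a billing period.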
--- ### Permissions URL: https://buildkite.com/docs/test-engine/permissions #### User, team, and test suite permissions The [_teams_ feature](#manage-teams-and-permissions) allows you to apply access permissions and functionality controls for one or more groups of users (that is, _teams_) on each test suite throughout your organization. Enterprise plan customers can configure test suite permissions and security features for all users across their Buildkite organization through the **Security** page. Learn more about this feature in [Manage organization security for test suites](#manage-organization-security-for-test-suites). ##### Manage teams and permissions To manage teams across the Buildkite Test Engine application, a _Buildkite organization administrator_ first needs to enable this feature across their organization. Learn more about how to do this in the [Manage teams and permissions section of Platform documentation](/docs/platform/team-management/permissions#manage-teams-and-permissions). Once the _teams_ feature is enabled, you can see the teams that you're a member of from the **User** page, which: - As a Buildkite organization administrator, you can access by selecting **Settings** in the global navigation > [**Users**](https://buildkite.com/organizations/~/users/). - As any other user, you can access by selecting **Teams** in the global navigation > [**Users**](https://buildkite.com/organizations/~/users/). ###### Organization-level permissions Learn more about what a _Buildkite organization administrator_ can do in the [Organization-level permissions in the Platform documentation](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions). 
As an organization administrator, you can access the [**Organization Settings** page](https://buildkite.com/organizations/~/settings) by selecting **Settings** in the global navigation, where you can do the following: - Add new teams or edit existing ones in the [**Team** section](https://buildkite.com/organizations/~/teams). - After selecting a team, you can view and administer the member-, [pipeline-](/docs/pipelines/security/permissions#manage-teams-and-permissions-pipeline-level-permissions), [test suite-](#manage-teams-and-permissions-test-suite-level-permissions), [registry-](/docs/package-registries/security/permissions#manage-teams-and-permissions-registry-level-permissions) and [team-](/docs/platform/team-management/permissions#manage-teams-and-permissions-team-level-permissions)level settings for that team. **Note:** Registry-level settings are only available once [Buildkite Package Registries has been enabled](/docs/package-registries/security/permissions#enabling-buildkite-packages). ###### Team-level permissions Learn more about what _team members_ are and what _team maintainers_ can do in the [Team-level permissions in the Platform documentation](/docs/platform/team-management/permissions#manage-teams-and-permissions-team-level-permissions). ###### Test suite-level permissions When the [teams feature is enabled](#manage-teams-and-permissions), any user can create a new test suite, as long as this user is a member of at least one team within the Buildkite organization, and this team has the **Create test suites** [team member permission](/docs/platform/team-management/permissions#manage-teams-and-permissions-team-level-permissions). When you create a new test suite in Buildkite: - You are automatically granted the **Full Access** permission to this test suite. - Any members of teams to which you provide access to this test suite are also granted the **Full Access** permission. **Full Access** on a test suite allows you to: - View test data. 
- Edit the test suite's settings. - Delete the test suite. - Provide access to other users, by adding the test suite to other teams that you are a [team maintainer](#manage-teams-and-permissions-team-level-permissions) on. - Configure test splitting. - Create and edit workflows. Any user with **Full Access** permission to a test suite can change its permission to **Read Only**, which allows users to view test runs only, but _not_: - Edit the test suite's settings. - Delete the test suite. - Create and edit workflows. - Provide access to other users. A user who is a member of at least one team with **Full Access** permission to a test suite can change the permissions on this test suite. However, once this user loses this **Full Access** through their last team with this permission on this test suite, the user loses the ability to change the test suite's permission in any team they are a member of. Another user with **Full Access** to this test suite or a [Buildkite organization administrator](#manage-teams-and-permissions-organization-level-permissions) is required to change the test suite's permission back to **Full Access** again. ##### Manage organization security for test suites Buildkite customers on the [Enterprise plan](https://buildkite.com/pricing/) can configure test suite action permissions for all users across their Buildkite organization. These features can be used either with or without the [teams feature enabled](#manage-teams-and-permissions). These user-level permissions and security features are managed by _Buildkite organization administrators_. To access this feature: 1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. Select [**Security** > **Test Engine** tab](https://buildkite.com/organizations/~/security/test-analytics) to access your organization's Test Engine security page.
From this page, you can configure the following permissions for all users across your Buildkite organization: - **Create test suites**—if the [teams feature](#manage-teams-and-permissions) is enabled, this permission is controlled at the [team level](#manage-teams-and-permissions-team-level-permissions) and therefore this option is unavailable on this page. - **Delete test suites** - **Change test suite visibility**—Make test suites publicly available. --- ## Package Registries ### Package Registries URL: https://buildkite.com/docs/package-registries #### Buildkite Package Registries Scale out asset management for faster builds and deployments across any ecosystem with _Buildkite Package Registries_. Secure your supply chain and avoid the bottlenecks of poorly managed and insecure dependencies. Package Registries allows you to: - Manage artifacts and packages from [Buildkite Pipelines](/docs/pipelines), as well as other CI/CD applications that require artifact management. - Provide registries to store your [packages and other package-like file formats](/docs/package-registries/background) such as container images and Terraform modules. As well as storing a collection of packages, a registry also surfaces metadata or attributes associated with a package, such as the package's description, version, contents (files and directories), checksum details, distribution type, dependencies, and so on. > 📘 > Customers on legacy Buildkite plans can enable [Package Registries](https://buildkite.com/platform/package-registries) through the [**Organization Settings** page](/docs/package-registries/security/permissions#enabling-buildkite-packages). ##### Get started Run through the [Getting started](/docs/package-registries/getting-started) tutorial for a step-by-step guide on how to use Buildkite Package Registries.
If you're familiar with the basics, explore how to use registries for each of Buildkite Package Registries' supported package ecosystems: ##### Core features ##### API & references Learn more about: - Package Registries' APIs through the: * [REST API documentation](/docs/apis/rest-api), and related endpoints, starting with [registries](/docs/apis/rest-api/package-registries/registries). * [GraphQL documentation](/docs/apis/graphql-api) and its [registries](/docs/apis/graphql/cookbooks/registries)-related queries, as well as [portals](/docs/apis/graphql/portals). - Package Registries' [webhooks](/docs/apis/webhooks/package-registries). --- ### Overview URL: https://buildkite.com/docs/package-registries --- ### Background URL: https://buildkite.com/docs/package-registries/background #### Background to packages A _package_ is a combination of _metadata_, _configuration_, and _software_ that is prepared in a way that a package management tool can use to properly and reliably install software and related configuration data on a computer. Some examples of package management tools include: - [apt](https://help.ubuntu.com/community/Repositories/CommandLine) on Ubuntu - [yum](https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/5/html/Deployment_Guide/c1-yum.html) on Red Hat Enterprise Linux (RHEL) - [pip](https://pip.pypa.io/) for Python packages - [gem](http://guides.rubygems.org/) for RubyGems Packages are useful because their: - Version information helps keep software up to date. - Metadata offers visibility into what's installed to which locations and why. - Software installations are reproducible in different environments. ##### Package creation tools There are many tools for creating packages. Some of these tools are provided directly by Linux distributions, while many other third-party packaging tools are also available. Some popular package creation tools include: - [rpmbuild](http://wiki.centos.org/HowTos/SetupRpmBuildEnvironment) (CentOS) for RPM packages.
Also, refer to the [Packaging Tutorial: GNU Hello](https://docs.fedoraproject.org/en-US/package-maintainers/Packaging_Tutorial_GNU_Hello/) for Fedora. - [debuild](https://wiki.debian.org/Packaging/Intro) for deb packages. - [distutils](https://docs.python.org/2/distutils/builtdist.html) for Python packages. - [gem](http://guides.rubygems.org/make-your-own-gem/) for RubyGems packages. Some advanced package creation tools include: - [Mock](https://rpm-software-management.github.io/mock/), a chroot-based system for building RPM packages in a clean room environment. - [pbuilder](https://wiki.ubuntu.com/PbuilderHowto), a chroot-based system for building deb packages in a clean room environment. Useful tips about pbuilder can also be found in [manuals page](https://manpages.ubuntu.com/manpages/jammy/man8/pbuilder.8.html) for pbuilder. - [git-buildpackage](http://honk.sigxcpu.org/projects/git-buildpackage/manual-html/gbp.html), a set of scripts that can be used to build deb packages directly from git repositories. - [fpm](https://github.com/jordansissel/fpm), a third-party tool that allows users to quickly and easily make a variety of packages (including RPM and deb packages). - [PackPack](https://github.com/packpack/packpack), a simple tool to build RPM and Debian packages from git repositories. ##### Next steps Learn more about how: - Buildkite Package Registries works through this step-by-step [Getting started](/docs/package-registries/getting-started) tutorial. - To work with registries in [Manage registries](/docs/package-registries/registries/manage). - To manage access to your registries in [Access controls](/docs/package-registries/security/permissions). - To configure your own private storage for Buildkite Package Registries in [Private storage](/docs/package-registries/registries/private-storage-link). 
--- ### Getting started URL: https://buildkite.com/docs/package-registries/getting-started #### Getting started with Package Registries 👋 Welcome to Buildkite Package Registries! You can use Package Registries to house your [packages](/docs/package-registries/background#package-creation-tools) built through [Buildkite Pipelines](/docs/pipelines) or another CI/CD application, and manage them through dedicated registries. This getting started page is a tutorial that helps you understand Package Registries' fundamentals, by guiding you through the creation of a new JavaScript _source_ registry, cloning, running and packaging a simple example Node.js project locally, and uploading the package to this new registry. Note that Buildkite Package Registries supports [other package ecosystems](/docs/package-registries/ecosystems) too. ##### Before you start To complete this tutorial, you'll need: - A Buildkite account. If you don't have one already, [create a free personal account](https://buildkite.com/signup). - [Git](https://git-scm.com/downloads), to clone the Node.js package example. - [Node.js](https://nodejs.org/en/download)—macOS users can also install Node.js with [Homebrew](https://formulae.brew.sh/formula/node). ##### Create a source registry First, create a new JavaScript source registry: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select **New registry** > **Source Registry**. 1. On the **New Registry** page, enter the mandatory **Name** for your source registry. For example, `My JavaScript registry`. 1. Enter an optional **Description** for the source registry, which will appear under the name of the registry item listed on the **Registries** page. For example, `This is an example of a JavaScript registry`. 1. Select the required registry **Ecosystem** of **JavaScript (npm)**. 1. 
If your Buildkite organization has the [teams feature](/docs/package-registries/security/permissions) enabled, select the relevant **Teams** to be granted access to the new JavaScript registry. 1. Select **Create Registry**. The new JavaScript source registry's **Releases** page is displayed. Selecting **Package Registries** in the global navigation opens the **Registries** page, where your new source registry will be listed. ##### Clone the Node.js example package project Then, clone the Node.js example package project: 1. Open a terminal or command prompt, and run the following command: ```bash git clone git@github.com:buildkite/nodejs-example-package.git ``` 1. Change directory (`cd`) into the `nodejs-example-package` directory. 1. (Optional) Run the following `npm` command to test that the package executes successfully: ```bash npm run main ``` The command output should display `Hello world!`. ##### Configure your Node.js environment and project Next, configure your Node.js environment to publish Node.js packages to [the JavaScript registry you created above](#create-a-source-registry): 1. Access your JavaScript registry's publishing instructions page. To do this, select **Package Registries** in the global navigation > your JavaScript source registry (for example, **My JavaScript registry**) from the list on the **Registries** page. 1. Select the **Publish Instructions** tab. 1. On the resulting page, copy the `npm` command in the first code box and run it to configure your npm config settings file (`.npmrc`). This configuration allows you to publish packages to your JavaScript registry. The `npm` command has the following format: ```bash npm set "//packages.buildkite.com/{org.slug}/{registry.slug}/npm/:_authToken" registry-write-token ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. 
- `{registry.slug}` is the slug of your JavaScript source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your JavaScript source registry from the **Registries** page. - `registry-write-token` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload packages to your JavaScript source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization. **Note:** * If your `.npmrc` file doesn't exist, this command will automatically create it for you. * This step only needs to be performed once for the life of your JavaScript source registry. 1. Copy the `publishConfig` field and its value in the second code box and paste it to the end of your Node.js package's `package.json` file. Alternatively, select and copy the line of code beginning `"publishConfig": ...`. For example: ```json { "name": "nodejs-example-package", "version": "1.0.1", "description": "An example Node.js package for Buildkite Package Registries", "main": "index.js", "scripts": { "main": "node index.js" }, "author": "A Person", "license": "MIT", "publishConfig": {"registry": "https://packages.buildkite.com/{org.slug}/{registry.slug}/npm/"} } ``` **Note:** Don't forget to add the separating comma between `"publishConfig": ...` and the previous field, that is, `"license": ...` in this case. ##### Publish the package Last, in the `nodejs-example-package` directory, publish your Node.js package to your JavaScript registry by running the following `npm` commands: ```bash npm pack npm publish ``` Your Node.js package is published to your Buildkite JavaScript registry in `.tgz` format. 
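For reference, the registry URL used in your `.npmrc` and `publishConfig` follows a predictable shape built from the two slugs described above. The slug values below are hypothetical placeholders, so substitute your own:

```shell
# Hypothetical slugs; replace with your own organization and registry slugs.
org_slug="my-org"
registry_slug="my-javascript-registry"

# A Buildkite source registry's npm endpoint (the same URL that appears in
# the publishConfig field) follows this shape:
registry_url="https://packages.buildkite.com/${org_slug}/${registry_slug}/npm/"
echo "$registry_url"
```

The same URL also serves installs, for example `npm install nodejs-example-package --registry "$registry_url"`, once a token with read access is configured in your `.npmrc`.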
##### Check the end result To confirm that your Node.js package was successfully published to your Buildkite JavaScript registry: 1. View your JavaScript registry's details page, refreshing the page if necessary. To access this page, select **Package Registries** in the global navigation > your JavaScript registry from the list on the **Registries** page. The package name (for example, **nodejs-example-package-1.0.1.tgz**) should appear under **Packages**. 1. Select the package name to access its details, and note the following: * **Instructions**: this section of the **Installation** tab provides command line instructions for installing the package you just published. * **Details** tab: provides various checksum values for this published package. * **About this version**: obtained from the `description` field value of the `package.json` file. * **Details**, which lists the following (where any field values are also obtained from the `package.json` file): - The name of the package, obtained from the `name` field value. - The package version, obtained from the `version` field value. - The registry the package is located in. - The package's visibility (**Private** by default), based on its registry's visibility. - The distribution name/version (just **node.js** in this case). - The package's license, obtained from the `license` field value. * **Download**: select this to download the package locally. To return to your JavaScript registry's details page (listing all packages published to this registry), select the registry's name at the top of the page. ###### Publish a new version As an optional extra, try incrementing the version number in your `package.json` file, [re-publishing the package to your JavaScript registry](#publish-the-package), and [checking the end result](#check-the-end-result). Your JavaScript registry's details page should show your new package with the incremented version number. ##### Next steps That's it!
You've created a new Buildkite registry, configured your Node.js environment and project to publish to your new JavaScript registry, and published a Node.js package to this registry. 🎉 Learn more about how to work with Buildkite Package Registries in [Manage registries](/docs/package-registries/registries/manage), and [how to create them](/docs/package-registries/registries/manage#create-a-source-registry). --- ### Manage registries URL: https://buildkite.com/docs/package-registries/registries/manage #### Manage registries This page provides details on how to manage registries within your Buildkite organization. Buildkite Package Registries allows you to create [source registries](#create-a-source-registry). ##### Create a source registry A _source_ registry is a basic type of registry used for publishing and installing packages. A source registry stores package files, which are either hosted by Buildkite or in your own [private storage](#update-a-source-registry-configure-registry-storage). New source registries can be created through the **Registries** page of the Buildkite / Package Registries interface. To create a new source registry: 1. Select **Package Registries** in the global navigation to access the [**Registries**](https://buildkite.com/organizations/~/packages) page. **Note:** Any previously created registries are listed and can be accessed from this page. 1. Select **New registry** > **Source Registry**. 1. On the **New Registry** page, enter the mandatory **Name** for your registry. 1. Enter an optional **Description** for the registry. This description appears under the name of the registry item on the **Registries** page. 1. Select the required registry **Ecosystem** based on the [package ecosystem](#manage-packages-in-a-source-registry) for this new registry. 1. If your Buildkite organization has the [teams feature](/docs/package-registries/security/permissions) enabled, select the relevant **Teams** to be granted access to the new registry. 1. 
Select **Create Registry**. The new registry's details page is displayed. Selecting **Package Registries** in the global navigation opens the **Registries** page, where your new registry will be listed. ##### Manage packages in a source registry Once a [source registry has been created](#create-a-source-registry), packages can then be uploaded to it. Learn more about how to manage packages for your registry's relevant language and package ecosystem: - [Alpine (apk)](/docs/package-registries/ecosystems/alpine) - [OCI (Docker)](/docs/package-registries/ecosystems/oci) images - [Debian/Ubuntu (deb)](/docs/package-registries/ecosystems/debian) - [Files (generic)](/docs/package-registries/ecosystems/files) - Helm ([OCI](/docs/package-registries/ecosystems/helm-oci) or [Standard](/docs/package-registries/ecosystems/helm)) charts - [Hugging Face](/docs/package-registries/ecosystems/hugging-face) models - Java ([Maven](/docs/package-registries/ecosystems/maven) or Gradle using [Kotlin](/docs/package-registries/ecosystems/gradle-kotlin) or [Groovy](/docs/package-registries/ecosystems/gradle-groovy)) - [JavaScript (npm)](/docs/package-registries/ecosystems/javascript) - [NuGet](/docs/package-registries/ecosystems/nuget) - [Python (PyPI)](/docs/package-registries/ecosystems/python) - [Red Hat (RPM)](/docs/package-registries/ecosystems/red-hat) - [Ruby (RubyGems)](/docs/package-registries/ecosystems/ruby) - [Terraform](/docs/package-registries/ecosystems/terraform) modules ##### Update a source registry Source registries can be updated using the **Registries** page of the Buildkite / Package Registries interface, which lists all previously created [source registries](#create-a-source-registry). The following aspects of a source registry can be updated: - **Name**: be aware that changing this value will also change the URL, which in turn will break any existing installations that use this registry. 
- **Description** - **Emoji**: to change the emoji of the registry from the default provided when the [source](#create-a-source-registry) registry was created. The emoji appears next to the registry's name. - **Color**: the background color for the emoji - **Registry Management**: the privacy settings for the registry—private (the initial default state for all newly created registries) or public. - **OIDC Policy**: one or more [policies defining which OpenID Connect (OIDC) tokens](/docs/package-registries/security/oidc), from the [Buildkite agent](/docs/agent/cli/reference/oidc) or another third-party system, can be used to either publish/upload packages to the registry, or download/install packages from the registry. - **Tokens** (private registries only): one or more [registry tokens](#configure-registry-tokens), which are an alternative to API access tokens. - **Storage**: choose your [registry storage](#update-a-source-registry-configure-registry-storage), selecting from **Buildkite-hosted storage** (the initial default storage system) or [your own private AWS S3 bucket](/docs/package-registries/registries/private-storage-link) to store packages for this registry. A source registry's ecosystem type cannot be changed once the [registry is created](#create-a-source-registry). To update a source registry: 1. Select **Package Registries** in the global navigation to access the [**Registries**](https://buildkite.com/organizations/~/packages) page. 1. Select the source registry to update on this page. 1.
Select **Settings** and on the **General (Settings)** page, update the following fields as required: * **Name**: being aware of the consequences described above * **Description**: appears under the name of the registry item on the **Registries** page, and on the registry's details page * **Emoji** and **Color**: the emoji appears next to the registry's name and the color (in hex code syntax, for example, `#FFE0F1`) provides the background color for this emoji * **Registry Management** > **Make registry public** or **Make registry private**: select either of these buttons to make the registry public or revert it back to its private state—the existing wording on this button indicates the current state, and if the registry is public, the word **Public** is indicated explicitly next to the registry's name in the Buildkite interface. 1. Select **Update Registry** to save your changes. The registry's updates will appear on the **Registries** page, as well as the registry's details page. 1. If the registry's _OIDC policy_ needs to be configured, learn more about this in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). 1. If the registry is _private_ and _registry tokens_ (an alternative to API access tokens) need to be configured, learn more about this in [Configure registry tokens](#configure-registry-tokens). 1. If [_private storage_](/docs/package-registries/registries/private-storage-link) has been configured and linked to your Buildkite organization, the storage location for the registry can be changed. Learn more about this in [Configure registry storage](#update-a-source-registry-configure-registry-storage). ###### Configure registry storage When a [new source registry is created](#create-a-source-registry), it automatically uses the [default Buildkite Package Registries storage](/docs/package-registries/registries/private-storage-link#set-the-default-buildkite-package-registries-storage) location. 
However, your new source registry's default storage location can be overridden to use another configured storage location. Learn more about configuring private storage in [Private storage links](/docs/package-registries/registries/private-storage-link). To configure/change your source registry's current storage: 1. Select **Package Registries** in the global navigation to access the [**Registries**](https://buildkite.com/organizations/~/packages) page. 1. Select the source registry whose storage requires configuring. 1. Select **Settings** > **Storage** to access the source registry's **Storage** page. 1. Select **Change** to switch from using **Buildkite-hosted storage** (or a previously configured private storage link such as **s3://…** or **gs://…**) to your new private storage link. If this setting is currently configured to use a previously configured private storage link, the storage location can also be reverted back to using **Buildkite-hosted storage**. > 📘 > All subsequent packages published to this source registry will be stored in your newly configured storage location. Bear in mind that all existing packages in this registry will remain in their original storage location. ##### Configure registry tokens _Registry tokens_ are long-lived _read only_ tokens configurable for a [private source registry](#update-a-source-registry), which allow you to download and install packages from that registry, acting as an alternative to (and without having to use) a user account-based [API access token](https://buildkite.com/user/api-access-tokens) with the **Read Packages** REST API scope. To configure registry tokens for a private source registry: 1. Select **Package Registries** in the global navigation to access the [**Registries**](https://buildkite.com/organizations/~/packages) page. 1. Select the private source registry whose registry tokens require configuring. 1.
Select **Settings** > **Tokens** to access the registry's **Tokens** page, where you can: * Create a new registry token. To do this: 1. Select **Create Registry Token**. 1. Enter a **Description** for this new token. 1. Select **Create**. * Select the copy, view, **Edit description** or **Delete token** button associated with any existing token on this page to perform that action on the token. Unlike other tokens generated elsewhere in Buildkite, registry tokens can continue to be viewed and copied in their entirety on multiple occasions after their creation. This registry tokens feature (the **Tokens** page) is not accessible while a registry is public. However, any registry tokens that were created before a registry is made public will become accessible again when the registry is made private. ##### Delete a registry Any type of registry can be deleted using the **Registries** page of the Buildkite / Package Registries interface, which lists all previously created [source registries](#create-a-source-registry). Deleting a source registry permanently deletes all packages contained within it. To delete a registry: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select the registry to delete on this page. 1. Select **Settings** to open the **General (Settings)** page. 1. Under **Registry Management**, select **Delete Registry**. 1. In the confirmation dialog, enter the name of the registry, exactly as it is presented, and select **Delete Registry**. ##### Audit logging All events performed through Buildkite Package Registries are logged through the Buildkite organization's [**Audit Log** feature](/docs/platform/audit-log). --- ### Overview URL: https://buildkite.com/docs/package-registries/registries/private-storage-link #### Private storage link This page provides an overview of Buildkite Package Registries' _private storage link_ feature.
By default, Buildkite Package Registries provides its own storage (known as _Buildkite storage_) to house any packages, container images and modules stored in source registries. However, as a [Buildkite organization administrator](/docs/package-registries/security/permissions#manage-teams-and-permissions-organization-level-permissions), you can also link your own private storage to Buildkite Package Registries (known as a _private storage link_) to house these files. A private storage link allows you to: - Manage Buildkite registry packages, container images and modules stored within your private storage, which has the following advantages: * Locating your private storage closer to your geographical location may provide faster registry access. * Mitigates network transmission costs. * Reduces latency when closer to your workloads. * Allows full control over bucket-level security and lifecycle policies. - Use Buildkite Package Registries' management and metadata-handling features to manage these files in registries within your private storage. While packages are stored in your own private storage, Buildkite still handles the indexing and management of these packages. - Maintain control, ownership and sovereignty over the packages, container images and modules stored within your source registries managed by Buildkite Package Registries. Regardless of whether you choose to manage your packages in Buildkite storage or in your own storage through a private storage link: - Both storage and bandwidth are metered in the same manner, with no differences in additional costs. - Package management, indexing, and access are all routed through the Buildkite API. The following diagram shows how your private storage link interfaces between the Buildkite Package Registries software-as-a-service (SaaS) control plane (which constitutes the Buildkite Platform), and your teams' infrastructure, operating in environments you can control. 
Abbreviations: - CDN—content delivery network - API/CLI—application programming interface/command line interface - SSO & RBAC—single sign-on & role-based access control ##### Link your private storage to Buildkite Package Registries The following steps provide a high-level overview of how to link your private storage (offered through a cloud-based storage provider) to Package Registries: 1. Provide bucket details. 1. Authorize Buildkite to access the bucket. 1. Run a diagnostic to confirm Buildkite can access/modify/sign objects in that bucket. 1. Activate the link. Learn more about how to configure Package Registries to use your private storage with the following supported cloud-based storage providers: - [Amazon S3 storage](/docs/package-registries/registries/private-storage-link/amazon-s3) - [Google Cloud Storage](/docs/package-registries/registries/private-storage-link/google-cloud-storage) ##### Set the default Buildkite Package Registries storage By default, your Buildkite organization uses storage provided by Buildkite (indicated as **Buildkite-hosted storage**). The _default storage_ is the storage used when a [new source registry is created](/docs/package-registries/registries/manage#create-a-source-registry). Once you have [configured at least one private storage link](#link-your-private-storage-to-buildkite-package-registries), you can change the default storage to one of the configured private storage links. To do this: 1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. In the **Packages** section, select **Private Storage Link** to open its page. 1. Select **Change** to switch from using **Buildkite-hosted storage** (or a previously configured private storage link such as **s3://…** or **gs://…**) to your new private storage link.
If this setting is currently configured to use a previously configured private storage link, the default storage can also be reverted back to using **Buildkite-hosted storage**. All [newly created source registries](/docs/package-registries/registries/manage#create-a-source-registry) will automatically use the default private storage location to house packages. --- ### Amazon S3 URL: https://buildkite.com/docs/package-registries/registries/private-storage-link/amazon-s3 #### Amazon S3 storage This page provides details on how to link your own Amazon Web Services (AWS) Simple Storage Service (S3) bucket (or simply _Amazon S3_ bucket) to Buildkite Package Registries, through a [private storage link](/docs/package-registries/registries/private-storage-link). By default, Buildkite Package Registries provides its own storage (known as _Buildkite storage_). However, linking your own Amazon S3 bucket to Package Registries lets you: - Keep packages and artifacts close to your geographical region for faster downloads. - Retain full sovereignty over your packages and artifacts, while Buildkite continues to manage their metadata and indexing. Buildkite Package Registries uses [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) to provision its services within your private Amazon S3 storage. ##### Before you start Before you can start linking your private Amazon S3 storage to Buildkite Package Registries, you will need to have created your own empty Amazon S3 bucket. Learn more about: - Amazon S3 from the main [Amazon S3](https://aws.amazon.com/s3/) page, as well as the [Amazon S3 documentation](https://docs.aws.amazon.com/s3/). - How to create an Amazon S3 bucket from Amazon's [Getting started with Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html) guide. ##### Link your private Amazon S3 bucket to Buildkite Package Registries To link your private Amazon S3 bucket to Package Registries: 1. 
As a [Buildkite organization administrator](/docs/package-registries/security/permissions#manage-teams-and-permissions-organization-level-permissions), select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. In the **Packages** section, select **Private Storage Link** to open its page. 1. Select **Add private storage link** to start the private storage configuration process. 1. On the **Provide your storage's details** (page) > **Step 1: Select your storage provider**, select **AWS**. 1. In **Step 2: Create or locate your AWS S3 bucket**, select **Open AWS** to open the list of Amazon S3 buckets in your AWS account, to either retrieve your existing empty S3 bucket, or create a new one if you [haven't already done so](#before-you-start). **Note:** If you are not already signed in to your AWS account, you may need to navigate to the area listing your S3 buckets. 1. Back on the Buildkite interface, in **Step 3: Enter your AWS S3 bucket details**, specify the **Region** (for example, `us-east-1`) and **Bucket name** for your Amazon S3 bucket, then select **Continue**. 1. On the next **Authorize Buildkite in AWS** page, select **Launch Stack** to open the **Quick create stack** page in the AWS CloudFormation interface. 1. Ensure the following fields are populated with the correct information: * **Template URL**—should be based on: `https://packages-public-assets.s3.amazonaws.com/cf-templates/byo-storage-bucket-policy-yyyymmdd.yml` * **Stack name**—`buildkitePackagesProvisioning` by default, but can be changed if another CloudFormation stack of the same name exists in your AWS account. * **BucketName**—the name of your Amazon S3 bucket (specified on the previous **Provide your bucket's details** page in Buildkite). * **KeyPrefix**—`{org.uuid}/`, where `{org.uuid}` is the UUID of your Buildkite organization.
* **IAM role - optional**—specify any **IAM role** **name** or **ARN** to restrict the actions that can be performed on your CloudFormation stack in your S3 bucket. 1. Select **Create stack** to begin creating the CloudFormation stack for your Amazon S3 bucket. 1. Once the stack is created, return to the Buildkite interface and select **Run diagnostic** to verify that Buildkite Package Registries can do the following with packages in your Amazon S3 private storage: * publish (`PUT`) * download (`GET`) * tag (`PUT`) * delete (`DELETE`) 1. Once the **Diagnostic Result** page indicates a **Pass** for each of these diagnostic tests, select **Create Private Storage Link** to complete this linking process. You are returned to the **Private Storage Link** page, where you can: - [Set the default Buildkite Package Registries storage for your Buildkite organization](/docs/package-registries/registries/private-storage-link#set-the-default-buildkite-package-registries-storage). - [Set the storage independently for each of your Buildkite registries](/docs/package-registries/registries/manage#update-a-source-registry-configure-registry-storage). ##### Deleting packages When deleting a package, Buildkite Package Registries does not delete the associated objects from your storage. Instead, Package Registries marks them for deletion using Amazon S3 _object tags_. Learn more about these object tags in Amazon's [Categorizing your storage using tags](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-tagging.html) documentation. An object tagged for deletion by Package Registries has the following key value pair: | Key | Value | | ------------------- | ------------------ | | `buildkite:deleted` | ISO 8601 timestamp | Set the expiration on objects from your Amazon S3 bucket by adding an [S3 Lifecycle configuration](https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html) that filters on these object tags. 
For example, to remove objects 30 days after they're tagged, you can implement the following rule: ```json { "Rules": [ { "ID": "BuildkiteDeleteExpired", "Status": "Enabled", "Filter": { "Tag": { "Key": "buildkite:deleted", "Value": "*" } }, "Expiration": { "Days": 30 } } ] } ``` Learn more about filter syntax in these lifecycle rules from Amazon's [Lifecycle configuration elements > Filter element](https://docs.aws.amazon.com/AmazonS3/latest/userguide/intro-lifecycle-rules.html#intro-lifecycle-rules-filter) documentation. --- ### Google Cloud Storage URL: https://buildkite.com/docs/package-registries/registries/private-storage-link/google-cloud-storage #### Google Cloud Storage This page provides details on how to link your own Google Cloud Storage (GCS) bucket to Buildkite Package Registries, through a [private storage link](/docs/package-registries/registries/private-storage-link). By default, Buildkite Package Registries provides its own storage (called *Buildkite storage*). However, linking your own GCS bucket to Package Registries lets you: - Keep packages and artifacts close to your geographical region for faster downloads. - Retain full sovereignty over your packages and artifacts, while Buildkite continues to manage their metadata and indexing. ##### Google Cloud's Workload Identity Federation feature Each time Buildkite Package Registries uploads, downloads, or signs an object, the Buildkite platform presents its own OIDC token to Google Cloud's [Security Token Service](https://cloud.google.com/iam/docs/reference/sts/rest) (STS). The STS swaps this OIDC token for a short-lived access token representing your configured Google Cloud service account. This exchange is part of Google Cloud's [Workload Identity Federation](https://cloud.google.com/iam/docs/workload-identity-federation) feature, and allows Package Registries to perform its required actions inside your GCS bucket without storing any long-lived credentials.
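For readers curious about the mechanics, the token exchange described above can be sketched as a call to the STS `token` endpoint. This is illustrative only — Buildkite performs the exchange internally and no manual call is required; the pool/provider resource name and project number below are hypothetical placeholders:

```bash
# Sketch of an OIDC-for-access-token exchange against Google Cloud STS.
# All identifiers are hypothetical; Buildkite does this for you at runtime.
curl -s -X POST "https://sts.googleapis.com/v1/token" \
  -d "grant_type=urn:ietf:params:oauth:grant-type:token-exchange" \
  -d "audience=//iam.googleapis.com/projects/123456789012/locations/global/workloadIdentityPools/bk-pool/providers/buildkite" \
  -d "subject_token_type=urn:ietf:params:oauth:token-type:jwt" \
  -d "requested_token_type=urn:ietf:params:oauth:token-type:access_token" \
  -d "scope=https://www.googleapis.com/auth/cloud-platform" \
  -d "subject_token=${BUILDKITE_OIDC_TOKEN}"
```

The short-lived access token returned here is what is then used (via service account impersonation) to read and write objects in your bucket.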
Key security benefits of Workload Identity Federation: - No long-lived [service account keys](https://cloud.google.com/iam/docs/service-account-creds#key-types)—tokens are generated when they're required and expire automatically. - Least-privilege access—Buildkite Package Registries only requires these Google Cloud Identity and Access Management (IAM) roles: * `roles/storage.bucketViewer` on the GCS bucket, to allow Package Registries to read the bucket's metadata. * `roles/storage.objectUser` on the GCS bucket to allow Package Registries to create, read, update, delete, and tag objects. * `roles/iam.serviceAccountTokenCreator` on the Google Cloud service account so it can create signed URLs. You can audit or revoke these at any time. - Full audit history—every token exchange and object access is recorded in Google Cloud's audit logs. ##### Before you start Before you begin, you'll need a GCS bucket in a Google Cloud project that Buildkite can access. Learn more about: - Google Cloud Storage from its main [Cloud Storage](https://cloud.google.com/storage) page. - How to create a GCS bucket from Google's [Create a bucket](https://cloud.google.com/storage/docs/creating-buckets) guide. * As part of creating your GCS bucket, learn more about the [requirements for naming your bucket](https://cloud.google.com/storage/docs/naming-buckets), especially for creating ones with globally-unique names. - How to keep your GCS bucket private by default from Google's [Public access prevention](https://cloud.google.com/storage/docs/public-access-prevention) guide. - How to protect against accidental object deletion from Google's [Object Versioning](https://cloud.google.com/storage/docs/object-versioning) guide. Once you have a bucket, the Buildkite wizard will guide you through: 1. **Creating (or selecting) a service account** for Buildkite to impersonate. 1. **Creating a Workload Identity _Pool_ and _Provider_**. 1. **Granting IAM roles** to wire everything together.
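The role grants in the final step can be sketched with `gcloud` as follows. Treat this as a hedged outline only — the Buildkite wizard generates the exact commands for you, and every name here (service account, project, bucket) is a hypothetical placeholder:

```bash
# Hypothetical service account email and bucket name — substitute your own.
SA="buildkite-storage-link@my-project.iam.gserviceaccount.com"

# Allow reading bucket metadata:
gcloud storage buckets add-iam-policy-binding gs://my-gcs-bucket \
  --member="serviceAccount:${SA}" --role="roles/storage.bucketViewer"

# Allow creating, reading, updating, deleting, and tagging objects:
gcloud storage buckets add-iam-policy-binding gs://my-gcs-bucket \
  --member="serviceAccount:${SA}" --role="roles/storage.objectUser"

# Allow the service account to sign blobs (for signed URLs):
gcloud iam service-accounts add-iam-policy-binding "${SA}" \
  --member="serviceAccount:${SA}" --role="roles/iam.serviceAccountTokenCreator"
```

These correspond to the three IAM roles listed under the security benefits above; the wizard additionally wires the Workload Identity Pool and Provider to the service account.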
##### Link your private GCS bucket to Buildkite Package Registries To link your private Google Cloud Storage (GCS) bucket to Package Registries: 1. As a [Buildkite organization administrator](/docs/package-registries/security/permissions#manage-teams-and-permissions-organization-level-permissions), select **Settings** in the global navigation to open the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. In the **Packages** section, select **Private Storage Link** to open its page. 1. Select **Add private storage link** to start the private storage configuration process. 1. On the **Provide your storage's details** (page) > **Step 1: Select your storage provider**, select **GCS**. 1. In **Step 2: Create or locate your Google Cloud bucket**, select **Open Google Cloud** to open the list of GCS buckets in your Google Cloud account, to either retrieve your existing empty GCS bucket, or create a new one if you [haven't already done so](#before-you-start). **Notes:** * If you are already familiar with using Google Cloud Storage and need to create a new GCS bucket, expand the **Create a new bucket** section for quick instructions to start this process. * Ensure you are in the correct Google Cloud _organization_ and _project_ in which to create your GCS bucket. * For the fastest outcome, you can also copy the command line interface (CLI) code snippet and modify its following values before pasting the modified code snippet into your [Cloud Shell Terminal](https://cloud.google.com/storage/docs/discover-object-storage-gcloud) and submitting it: - `BUCKET`: the name of your new GCS bucket, for example, `my-gcs-bucket`. - `--location`: A location that's geographically closest to your current location, or the location closest to where this bucket's packages will most frequently be accessed from. 1. 
Back on the Buildkite interface, in **Step 3: Enter your Google Cloud bucket details**, specify the **Bucket name** (for example, `my-gcs-bucket`) for your GCS bucket configured in the previous step, then select **Continue**. 1. On the **Connect Buildkite to Google** page, you'll be configuring a [Google Cloud (GC) service account](https://cloud.google.com/iam/docs/service-account-overview) and [Workload Identity Pool and Provider (WIPP)](https://cloud.google.com/iam/docs/workload-identity-federation#providers) using the CLI code snippets on this page, which you can modify if required and paste into your [Cloud Shell Terminal](https://cloud.google.com/storage/docs/discover-object-storage-gcloud). * To create a new GC service account and WIPP, copy the **Create New** CLI code snippet and if required, modify its following **Setup** values before pasting the modified code snippet into your Cloud Shell Terminal and submitting it: - `SERVICE_ACCOUNT_NAME`: The name of your GC service account, which appears before the `@` symbol of your resulting GC service account's email address. - `POOL_ID`: The ID for your [workload identity pool](https://cloud.google.com/iam/docs/workload-identity-federation#pools), which must be a unique value for both active and deleted pools. - `PROVIDER_ID`: The ID for your [workload identity pool provider](https://cloud.google.com/iam/docs/workload-identity-federation#providers). * To find an existing GC service account and WIPP: 1. Scroll down the page and expand the **Find Existing** section. 1. Copy this CLI code snippet and if necessary, modify its `POOL_ID` and `PROVIDER_ID` values to those for the WIPP you want to use. 1. Paste this modified code snippet into your Cloud Shell Terminal and submit it. 1. From your Cloud Shell Terminal output: * If you created a new GC service account and WIPP: 1. 
Copy the **Service account created** value (from the Cloud Shell Terminal output, for example, `buildkite-storage-link@my-google-cloud-project.iam.gserviceaccount.com`), and paste it into the **Service account email** field on the **Connect Buildkite to Google** page of the Buildkite interface. This email address has the format `service-account-name@google-cloud-project-name.iam.gserviceaccount.com`. 1. Copy the **Workload Identity Provider** value (from the Cloud Shell Terminal output, for example, `projects/123456789012/locations/global/workloadIdentityPools/bk-pool/providers/buildkite`), and paste it into the **Workload Identity Provider (full resource name)** field on the **Connect Buildkite to Google** page of the Buildkite interface. This resource name has the format `projects/project-id/locations/global/workloadIdentityPools/pool-id/providers/provider-id`. * If you are using an existing GC service account and WIPP: 1. Copy the relevant **EMAIL** value (from the Cloud Shell Terminal output, for example `buildkite-storage-link@my-google-cloud-project.iam.gserviceaccount.com`), and paste it into the **Service account email** field on the **Connect Buildkite to Google** page of the Buildkite interface. 1. Copy the relevant **Workload Identity Provider resource name** value (from the Cloud Shell Terminal output, for example, `projects/123456789012/locations/global/workloadIdentityPools/bk-pool/providers/buildkite`), and paste it into the **Workload Identity Provider (full resource name)** field on the **Connect Buildkite to Google** page of the Buildkite interface. 1. Select **Next**. 1. On the next **Connect Buildkite to Google** page's **Allow Buildkite to impersonate service account** section of the Buildkite interface, copy and paste this CLI code snippet into your [Cloud Shell Terminal](https://cloud.google.com/storage/docs/discover-object-storage-gcloud) and submit it. This allows the Buildkite platform to impersonate your GC service account. 1. 
In the next **Grant bucket access to the service account** section of the Buildkite interface, copy and paste this CLI code snippet into your [Cloud Shell Terminal](https://cloud.google.com/storage/docs/discover-object-storage-gcloud) and submit it. This grants your GC service account the `roles/storage.objectUser` and `roles/storage.bucketViewer` roles on your bucket so the Buildkite platform can manage package objects and read bucket metadata.
1. Select **Run diagnostic**. Buildkite uploads, downloads, and tags a test object to confirm it can:
    * publish (`PUT`)
    * download (`GET`)
    * generate a signed URL (`signBlob`)
    * tag with metadata (to allow lifecycle rules to delete)
    * delete (`DELETE`)
1. When all tests pass, select **Create Private Storage Link** to finish. You're returned to the **Private Storage Link** page, where you can:
    - [Set the default Buildkite Package Registries storage for your organization](/docs/package-registries/registries/private-storage-link#set-the-default-buildkite-package-registries-storage).
    - [Choose storage per registry](/docs/package-registries/registries/manage#update-a-source-registry-configure-registry-storage).

##### Deleting packages

Buildkite does **not** delete objects immediately. When a package is removed, Buildkite:

- Adds a `buildkite:deleted` metadata tag whose value is the deletion timestamp.
- Sets the object's `customTime` field to the same timestamp.

| Metadata key | Value (UTC) |
|--------------------|-------------|
| `buildkite:deleted`| ISO 8601 timestamp |

Create a [Lifecycle rule](https://cloud.google.com/storage/docs/lifecycle) that removes objects a set number of days after their `customTime`.
For example, to purge objects 30 days after deletion:

```json
{
  "rule": [
    {
      "action": { "type": "Delete" },
      "condition": { "daysSinceCustomTime": 30 }
    }
  ]
}
```

---

### Overview

URL: https://buildkite.com/docs/package-registries/migration

#### Migrating to Buildkite Package Registries

This section of the documentation provides comprehensive guidance to help you export your packages, images, and other files from an existing registry or repository provider, and import them to Buildkite Package Registries.

##### Before you start

Ensure the following are ready before commencing the migration process:

- The packages, images, or other relevant files from your existing registry or repository provider are ready to be exported and downloaded locally.
- A new Buildkite registry whose package ecosystem matches your existing registry or repository provider. Learn more about this process in [Create a registry](/docs/package-registries/registries/manage#create-a-source-registry).
- An [API access token](https://buildkite.com/user/api-access-tokens) with the appropriate [package and registry scopes](/docs/apis/managing-api-tokens#token-scopes) to manage your packages.

##### Begin migrating

To get started, choose the guide that corresponds to the registry or repository provider you are migrating from:

- [Export from JFrog Artifactory](/docs/package-registries/migration/from-jfrog-artifactory)
- [Export from Cloudsmith](/docs/package-registries/migration/from-cloudsmith)
- [Export from Packagecloud](/docs/package-registries/migration/from-packagecloud)

Once you have downloaded your exported packages, you can then [import them into Buildkite Package Registries](/docs/package-registries/migration/import-to-package-registries).

If you need further assistance or have any questions, please don't hesitate to reach out to support at support@buildkite.com.
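Before you begin a migration, it can help to confirm that your API access token carries the scopes you expect. A minimal sketch using the REST API's access-token lookup endpoint, assuming `BUILDKITE_API_TOKEN` holds your token and `jq` is installed:

```shell
# Look up the current API access token and print the scopes it carries.
# Assumes BUILDKITE_API_TOKEN is set; the call is skipped if it isn't.
if [ -n "${BUILDKITE_API_TOKEN:-}" ]; then
  curl -s -H "Authorization: Bearer $BUILDKITE_API_TOKEN" \
    "https://api.buildkite.com/v2/access-token" | jq '.scopes'
fi
```

If the output is missing a scope you need (such as `write_packages`), create a new token with the appropriate scopes before continuing.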
---

### Export from JFrog

URL: https://buildkite.com/docs/package-registries/migration/from-jfrog-artifactory

#### Export from JFrog Artifactory

To migrate your packages from JFrog Artifactory to Buildkite Package Registries, you'll need to export and download packages from a JFrog Artifactory repository before importing them to your Buildkite registry.

##### Download packages using the JFrog Artifactory interface

You can download a complete folder of packages or a specific package version from a JFrog Artifactory repository through its interface:

- To download a complete folder of packages, follow JFrog's [Download a Folder](https://jfrog.com/help/r/jfrog-artifactory-documentation/download-a-folder) guide. You might need to enable folder download in the administrator settings.
- To download a specific version of a package, follow JFrog's [Downloading Package Versions](https://docs.jfrog.com/artifactory/docs/upload-and-download-packages-using-artifactory#downloading-package-versions) guidance.

##### Download packages using the JFrog CLI

The [JFrog CLI](https://jfrog.com/help/r/jfrog-applications-and-cli-documentation/jfrog-cli) provides a command line interface (CLI) with more options for downloading packages from JFrog Artifactory repositories than are typically available through the JFrog Artifactory interface. Learn more about this from the **Downloading Files** section of the [Generic Files](https://jfrog.com/help/r/jfrog-applications-and-cli-documentation/generic-files) page of the JFrog CLI documentation.

###### Setting up the JFrog CLI

1. First, [download and install the JFrog CLI](https://jfrog.com/help/r/jfrog-applications-and-cli-documentation/download-and-install-the-jfrog-cli). You can install the latest version of the JFrog CLI from JFrog's [Install the Latest Version of JFrog CLI](https://jfrog.com/getcli/) page on their website.
1.
Use the `jf c add` command to authenticate with your JFrog Artifactory login credentials so you can access the repository whose packages need to be downloaded. Learn more about how to do this from the [Authentication page of the JFrog CLI](https://jfrog.com/help/r/jfrog-applications-and-cli-documentation/authentication) documentation.
1. Use the `jfrog rt dl` command to download the required packages from your JFrog Artifactory repository. Learn more about this from the **Downloading Files** section of the [Generic Files](https://jfrog.com/help/r/jfrog-applications-and-cli-documentation/generic-files) page of the JFrog CLI documentation.

###### Example JFrog CLI download commands

The following JFrog CLI download command examples can help get you started.

To download all packages from a particular JFrog Artifactory repository, use the `--flat` option to download all of these packages into the same folder:

```bash
jfrog rt dl {repo-name} --flat
```

Following on from this, to download a particular package type from all JFrog Artifactory repositories that your API access token provides access to, specify a wildcard package name with a file type extension, such as the following example for `.deb` files:

```bash
jfrog rt dl "*/*.deb" --flat
```

##### Next step

Once you have downloaded your packages from your JFrog Artifactory repositories, learn how to [import them into your Buildkite registry](/docs/package-registries/migration/import-to-package-registries).

---

### Export from Cloudsmith

URL: https://buildkite.com/docs/package-registries/migration/from-cloudsmith

#### Export from Cloudsmith

To migrate your packages from Cloudsmith to Buildkite Package Registries, you'll need to export and download packages from a Cloudsmith repository before importing them to your Buildkite registry.
##### Download packages using the Cloudsmith interface

Cloudsmith offers two options to download specific packages from a Cloudsmith repository through its interface, one of which also involves command execution through a command line interface (CLI):

- To download individual packages, follow the [Download via Cloudsmith web app](https://help.cloudsmith.io/docs/download-a-package#download-via-cloudsmith-web-app) guide for either [public](https://help.cloudsmith.io/docs/download-a-package#public-repositories) or [private](https://help.cloudsmith.io/docs/download-a-package#private-repositories) repositories.
- To download packages using native package management tools (for example, `npm` or `gem`), follow Cloudsmith's [Downloading via Native Package Manager](https://help.cloudsmith.io/docs/download-a-package#download-via-native-package-manager) guide. This guide explains how to use the Cloudsmith interface to access specific instructions for each native package management tool. These instructions then cover using the relevant package manager's own CLI tools to download packages from Cloudsmith.

> 📘
> Cloudsmith does not provide a mechanism to download packages in bulk from a repository through its interface. However, scripting-based methods (using the [Cloudsmith CLI](https://help.cloudsmith.io/docs/cli) tool) are available to [download packages in bulk](#download-packages-in-bulk).

##### Download packages using the Cloudsmith REST API or CLI tool

Cloudsmith does not support downloading packages directly using its [REST API](https://help.cloudsmith.io/reference/introduction) or [CLI](https://help.cloudsmith.io/docs/cli) tool. However, download URLs can be obtained using the Cloudsmith REST API or its command line interface (CLI), and these URLs can then be used to download packages from a Cloudsmith repository.
> 📘
> If you are using the Cloudsmith CLI to download packages, ensure that your [Cloudsmith API key](https://help.cloudsmith.io/docs/cli#getting-your-api-key) has been set up correctly.

###### Retrieving download URLs using the REST API

To retrieve the download URLs for one or more packages in a Cloudsmith repository [using the Cloudsmith API](https://help.cloudsmith.io/reference/packages_list):

```bash
curl -X GET "https://api.cloudsmith.io/v1/packages/{owner}/{repository}/" \
  -H "X-Api-Key: $CLOUDSMITH_API_KEY" \
  -H 'accept: application/json' | jq '.[].cdn_url'
```

where:

- `{owner}` is your Cloudsmith account or organization name.
- `{repository}` is your Cloudsmith repository name/slug.
- `$CLOUDSMITH_API_KEY` is your [Cloudsmith API key](https://help.cloudsmith.io/docs/api-key).

The `jq '.[].cdn_url'` command transforms the JSON response from this Cloudsmith REST API query to list the URLs for individual packages, which can then be used to download them from this repository.

###### Retrieving download URLs using the CLI

To retrieve the download URLs for one or more packages in a Cloudsmith repository [using the Cloudsmith CLI](https://help.cloudsmith.io/docs/search-packages#searching-packages-via-the-cloudsmith-cli):

```bash
cloudsmith list packages {owner}/{repository} -F json | jq -r '.data[].cdn_url'
```

where:

- `{owner}` is your Cloudsmith account or organization name.
- `{repository}` is your Cloudsmith repository name/slug.

The `jq -r '.data[].cdn_url'` command transforms the JSON-formatted response from this Cloudsmith CLI command to list the URLs for individual packages, which can then be used to download them from this repository.

> 📘
> The command `cloudsmith list packages` can also be contracted to `cloudsmith ls pkgs`.
> Note that the [Cloudsmith CLI](https://help.cloudsmith.io/docs/cli) tool can also be used to [download packages in bulk](#download-packages-in-bulk).
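The two steps (listing CDN URLs, then downloading them) can be combined into a single pipeline. A minimal sketch for a public repository, assuming the Cloudsmith CLI, `jq`, and `wget` are installed, and with `OWNER` and `REPOSITORY` as placeholder values:

```shell
# Placeholders: replace with your Cloudsmith account/organization and
# repository slugs.
OWNER="my-account"
REPOSITORY="my-repository"

# List every package's CDN URL, then fetch each one into the current
# directory. For a private repository, add
# --header="X-Api-Key: $CLOUDSMITH_API_KEY" to the wget invocation.
if command -v cloudsmith > /dev/null; then
  cloudsmith list packages "${OWNER}/${REPOSITORY}" -F json \
    | jq -r '.data[].cdn_url' \
    | xargs -n 1 wget
fi
```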
###### Download a package

Once you have obtained the relevant download URLs (using the [REST API](#download-packages-using-the-cloudsmith-rest-api-or-cli-tool-retrieving-download-urls-using-the-rest-api) or [CLI](#download-packages-using-the-cloudsmith-rest-api-or-cli-tool-retrieving-download-urls-using-the-cli)) for packages from your Cloudsmith registry, you can use the `wget` command to download them.

To download a package from a _public_ repository:

```bash
wget {cdn_url}
```

where `{cdn_url}` is the URL of your package to be downloaded.

To download a package from a _private_ repository:

```bash
wget -d --header="X-Api-Key: $CLOUDSMITH_API_KEY" {cdn_url}
```

where `$CLOUDSMITH_API_KEY` is your [Cloudsmith API key](https://help.cloudsmith.io/docs/api-key).

Or:

```bash
wget --http-user=$account --http-password=$token {cdn_url}
```

where:

- `$account` is your Cloudsmith account or organization name.
- `$token` is an appropriate [Cloudsmith entitlement token](https://help.cloudsmith.io/docs/entitlements).

##### Download packages in bulk

Packages can be downloaded in bulk from a Cloudsmith repository using the [Cloudsmith CLI](https://help.cloudsmith.io/docs/cli) tool, along with some scripting. Learn more about how to do this from the [Bulk Package Download](https://help.cloudsmith.io/docs/download-a-package#bulk-package-download) section of Cloudsmith's documentation, which provides scripting examples for Linux (bash) and Windows (PowerShell).

##### Next step

Once you have downloaded your packages from your Cloudsmith repositories, learn how to [import them into your Buildkite registry](/docs/package-registries/migration/import-to-package-registries).
--- ### Export from Packagecloud URL: https://buildkite.com/docs/package-registries/migration/from-packagecloud #### Export from Packagecloud To migrate your packages from Packagecloud to Buildkite Package Registries, you'll need to export and download packages from a Packagecloud repository using the Packagecloud REST API before importing them to your Buildkite registry. ##### Before you start To export the packages, you'll need: - A Packagecloud account with access to the repository you want to export - Your Packagecloud API token - `curl` installed on your system - `jq` installed for JSON processing (install using `brew install jq` on macOS or `apt install jq` on Debian/Ubuntu) - Sufficient disk space for your packages ##### Get your Packagecloud API token 1. Log in to [packagecloud.io](https://packagecloud.io). 1. Navigate to [packagecloud.io/api_token](https://packagecloud.io/api_token). 1. Copy your API token and store it securely. ##### Export all packages from a repository The following shell script exports all packages from a Packagecloud repository to a local directory. It handles pagination automatically and preserves the original filenames. 
To use the script, create a file named `export-packagecloud.sh` with the following content:

```bash
#!/bin/bash
set -euo pipefail

PACKAGECLOUD_TOKEN="${PACKAGECLOUD_TOKEN:-}"
PACKAGECLOUD_USER="${PACKAGECLOUD_USER:-}"
PACKAGECLOUD_REPO="${PACKAGECLOUD_REPO:-}"
OUTPUT_DIR="${OUTPUT_DIR:-./packagecloud-export}"
PER_PAGE=100

if [[ -z "$PACKAGECLOUD_TOKEN" ]]; then
  echo "Error: PACKAGECLOUD_TOKEN environment variable is required"
  exit 1
fi

if [[ -z "$PACKAGECLOUD_USER" ]]; then
  echo "Error: PACKAGECLOUD_USER environment variable is required"
  exit 1
fi

if [[ -z "$PACKAGECLOUD_REPO" ]]; then
  echo "Error: PACKAGECLOUD_REPO environment variable is required"
  exit 1
fi

mkdir -p "$OUTPUT_DIR"

echo "Exporting packages from packagecloud.io/${PACKAGECLOUD_USER}/${PACKAGECLOUD_REPO}"
echo "Output directory: $OUTPUT_DIR"

# Fetch every page of the package list and emit the combined JSON array on
# stdout. Progress messages go to stderr so they aren't captured by the
# command substitution below.
fetch_all_packages() {
  local page=1
  local all_packages="[]"

  while true; do
    echo "Fetching page $page..." >&2

    response=$(curl -s -u "${PACKAGECLOUD_TOKEN}:" \
      "https://packagecloud.io/api/v1/repos/${PACKAGECLOUD_USER}/${PACKAGECLOUD_REPO}/packages.json?per_page=${PER_PAGE}&page=${page}")

    if ! echo "$response" | jq -e 'type == "array"' > /dev/null 2>&1; then
      echo "Error: Invalid API response on page $page" >&2
      echo "$response" >&2
      exit 1
    fi

    count=$(echo "$response" | jq 'length')
    if [[ "$count" -eq 0 ]]; then
      break
    fi

    echo "Found $count packages on page $page" >&2
    all_packages=$(echo "$all_packages" "$response" | jq -s 'add')

    if [[ "$count" -lt "$PER_PAGE" ]]; then
      break
    fi
    page=$((page + 1))
  done

  echo "$all_packages"
}

packages=$(fetch_all_packages)
total=$(echo "$packages" | jq 'length')
echo "Total packages to download: $total"

echo "$packages" | jq '.' > "${OUTPUT_DIR}/manifest.json"
echo "Package manifest saved to ${OUTPUT_DIR}/manifest.json"

# Download each package, organized by ecosystem type and source repository.
echo "$packages" | jq -c '.[]' | while read -r package; do
  filename=$(echo "$package" | jq -r '.filename')
  package_url=$(echo "$package" | jq -r '.package_url')
  package_type=$(echo "$package" | jq -r '.type')

  type_dir="${OUTPUT_DIR}/${package_type}/${PACKAGECLOUD_REPO}"
  mkdir -p "$type_dir"
  output_path="${type_dir}/${filename}"

  if [[ -f "$output_path" ]]; then
    echo "Skipping (already exists): $filename"
    continue
  fi

  echo "Downloading: $filename"
  package_details=$(curl -s -u "${PACKAGECLOUD_TOKEN}:" \
    "https://packagecloud.io${package_url}")
  download_url=$(echo "$package_details" | jq -r '.download_url // empty')

  if [[ -z "$download_url" ]]; then
    echo "  Warning: No download URL found for $filename, skipping"
    continue
  fi

  if curl -s -L -u "${PACKAGECLOUD_TOKEN}:" -o "$output_path" "$download_url"; then
    echo "  Saved to: $output_path"
  else
    echo "  Error: Failed to download $filename"
    rm -f "$output_path"
  fi
done

echo "Export complete. Output directory: $OUTPUT_DIR"
```

Make the script executable and run it:

```bash
chmod +x export-packagecloud.sh

export PACKAGECLOUD_TOKEN="your-api-token"
export PACKAGECLOUD_USER="your-username"
export PACKAGECLOUD_REPO="your-repository"

./export-packagecloud.sh
```

The script creates the following directory structure, organizing packages by ecosystem type and source repository:

```
packagecloud-export/
├── manifest.json
├── deb/
│   └── my-repo/
│       └── example_1.0.0_amd64.deb
├── rpm/
│   └── my-repo/
│       └── example-1.0.0-1.x86_64.rpm
└── gem/
    └── my-repo/
        └── example-1.0.0.gem
```

Each top-level folder (`deb/`, `rpm/`, `gem/`) maps to one package ecosystem. The repository subdirectory preserves the source Packagecloud repository name, which you can use when re-creating these repositories as registries within Buildkite Package Registries.
For example, to import all Debian packages into a Buildkite Debian registry, run:

```bash
find ./packagecloud-export/deb -name "*.deb" -exec bk package push my-debian-registry {} \;
```

##### Export packages manually

For smaller repositories, or if you would like more control over the export process, you can use `curl` commands directly. Follow the instructions and commands in the sections below.

###### List all packages in a repository

```bash
curl -s -u "YOUR_API_TOKEN:" \
  "https://packagecloud.io/api/v1/repos/USERNAME/REPO/packages.json?per_page=100" \
  | jq '.'
```

Replace `YOUR_API_TOKEN`, `USERNAME`, and `REPO` with your values. Note the trailing colon after the token: it is required for HTTP Basic authentication with an empty password.

###### Get package details and download URL

The package list response includes a `package_url` field. Use this to fetch the package details, which contain the `download_url`:

```bash
curl -s -u "YOUR_API_TOKEN:" \
  "https://packagecloud.io/api/v1/repos/USERNAME/REPO/package/TYPE/DISTRO/VERSION/FILENAME.json" \
  | jq '.download_url'
```

###### Download a package

```bash
curl -L -u "YOUR_API_TOKEN:" \
  -o "package-filename.deb" \
  "DOWNLOAD_URL"
```

##### Handling pagination

The Packagecloud API returns a maximum of 100 packages per request. For repositories with more packages, use the `page` query parameter:

```bash
curl -s -u "YOUR_API_TOKEN:" \
  "https://packagecloud.io/api/v1/repos/USERNAME/REPO/packages.json?per_page=100&page=2"
```

The API provides pagination information in response headers:

- `Total`: total number of packages
- `Per-Page`: number of packages per page
- `Link`: links to the next, previous, and last pages

##### Troubleshooting

This section covers potential issues you might run into when bulk-exporting your packages from Packagecloud following this guide, and how to solve them.
###### Authentication errors

If you receive a 401 Unauthorized response, verify that:

- Your API token is correct.
- The token is passed as the username with an empty password (note the trailing colon in `-u "TOKEN:"`).

###### Rate limiting

Packagecloud may rate limit API requests. If you encounter rate limiting:

- Add a delay between downloads by inserting `sleep 1` in the download loop.
- Run the export during off-peak hours.

###### Missing download URLs

Some package types use different API endpoints. If a package doesn't have a `download_url` in the response, check the [Packagecloud API documentation](https://packagecloud.io/docs/api) for the correct endpoint for that package type.

###### Distribution version-specific packages

For deb, rpm, and alpine packages, migration works only if your packages are distribution version-agnostic (for example, a package that works on all Ubuntu versions, such as Focal and Jammy). If your packages target specific distribution versions, contact [Buildkite support](mailto:support@buildkite.com) before proceeding.

##### Next step

Once you have downloaded your packages from your Packagecloud repositories, learn how to [import them into your Buildkite registry](/docs/package-registries/migration/import-to-package-registries).

> 🚧 Repository signing keys
> Buildkite Package Registries signs repository metadata with its own keys, not your Packagecloud keys. After migration, update your clients (apt, yum, apk) to use the new signing keys from your Buildkite registry.

---

### Import exported packages

URL: https://buildkite.com/docs/package-registries/migration/import-to-package-registries

#### Import exported packages to Package Registries

After exporting and downloading your packages, images, and other files from your existing registry or repository provider, you can then import them to your Buildkite registry!
##### Import via the Buildkite CLI

The easiest method for importing packages, images, and other files from your existing registry or repository provider is to use the [Buildkite CLI](/docs/platform/cli) tool.

Ensure that:

- You have [installed the Buildkite CLI tool](/docs/platform/cli/installation), and have configured your organization name and token (using the `bk configure` command).
- You have set up a registry whose [supported package ecosystem](/docs/package-registries/ecosystems) matches the packages, images, or other files you downloaded from your existing registry or repository provider.

To push a package to your registry using the Buildkite CLI, run the `bk package push` command. Learn more about how to [use this command](/docs/platform/cli#usage) by running `bk package push --help`.

###### Example of importing a single file

The following command is an example of using the Buildkite CLI to import a single Debian package to a Buildkite (Debian) registry named `my-registry`:

```bash
bk package push my-registry my-package.deb
```

###### Example of bulk-importing files from a folder

The following shell script can be used to bulk-import files from a folder. Once made executable, this script imports all files of a specified type found in a specified local folder into a specified registry:

```bash
#!/bin/bash
# Usage: ./bulk-import.sh <registry> <folder> <extension>
for FILE in "$2"/*."$3"; do
  bk package push "$1" "$FILE"
done
```

This example command demonstrates running this script from its current location to bulk-import Debian packages from the local folder `/path/to/my/downloaded/deb/files` to the Buildkite (Debian) registry named `my-registry`:

```bash
./bulk-import.sh my-registry /path/to/my/downloaded/deb/files deb
```

##### Importing via the REST API and other methods

To import a package via the REST API, use the [publish a package](/docs/apis/rest-api/package-registries/packages#publish-a-package) endpoint.
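As a sketch of what a REST API upload might look like, assuming placeholder `ORG_SLUG` and `REGISTRY_SLUG` values, a `my-package.deb` file in the current directory, and a `BUILDKITE_API_TOKEN` environment variable holding a token with package write access (check the endpoint page linked above for the exact parameters of your ecosystem):

```shell
# Placeholders: replace with your organization and registry slugs.
ORG_SLUG="my-org"
REGISTRY_SLUG="my-registry"
PUBLISH_URL="https://api.buildkite.com/v2/packages/organizations/${ORG_SLUG}/registries/${REGISTRY_SLUG}/packages"

# Upload the package as a multipart form field named "file".
# Skipped unless BUILDKITE_API_TOKEN is set.
if [ -n "${BUILDKITE_API_TOKEN:-}" ]; then
  curl -X POST "$PUBLISH_URL" \
    -H "Authorization: Bearer $BUILDKITE_API_TOKEN" \
    -F "file=@my-package.deb"
fi
```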
You can also find other methods for publishing packages to Buildkite registries on the specific package ecosystem pages linked from [Package ecosystems overview](/docs/package-registries/ecosystems).

---

### Overview

URL: https://buildkite.com/docs/package-registries/security

#### Security

Customer security is paramount to Buildkite. Buildkite Package Registries provides mechanisms to restrict access to your registries from Buildkite agents and their pipelines' jobs, as well as from other third-party systems that can issue [OpenID Connect (OIDC)](https://openid.net/developers/how-connect-works/) tokens.

This section contains the following topics:

- [OIDC with Buildkite Package Registries](/docs/package-registries/security/oidc) and how to restrict access to registries through OIDC policies.
- [User, team, and registry permissions](/docs/package-registries/security/permissions) and how to manage team and user access to registries.
- [SLSA provenance](/docs/package-registries/security/slsa-provenance) and how to publish packages and other artifact types to registries with SLSA provenance.

---

### OIDC

URL: https://buildkite.com/docs/package-registries/security/oidc

#### OIDC in Buildkite Package Registries

[OpenID Connect (OIDC)](https://openid.net/developers/how-connect-works/) is an authentication protocol based on the [OAuth 2.0 framework](https://auth0.com/docs/authenticate/protocols/oauth/). With OIDC, one system or service issues a typically short-lived _OIDC token_, which is a signed [JSON Web Token (JWT)](https://jwt.io/) containing metadata (or _claims_) about a user or object. This token can be consumed by another service (which may be offered by a third party or by the same organization) to authenticate the user or object. An _OIDC policy_ configured on this other service defines which OIDC tokens, based on their claims (also known as _asserted claims_), are permitted to perform actions.
If the OIDC token's asserted claims comply with those of the OIDC policy configured in the other service, the token is authenticated, and the service issuing the token is permitted to perform its actions on the other service.

You can configure Buildkite registries with OIDC policies that allow access using OIDC tokens issued by Buildkite agents and other OIDC identity providers. This is similar to how [third-party products and services can be configured with OIDC policies](/docs/pipelines/security/oidc) to consume Buildkite agent OIDC tokens for specific pipeline jobs, for deployment, or for access management and security purposes.

A Buildkite agent's OIDC tokens assert claims about the slug of the pipeline it is building and the slug of the organization that contains this pipeline, the ID of the job that created the token, as well as other claims, such as the name of the branch used in the build, the SHA of the commit that triggered the build, and the agent ID. If the token's claims do not comply with the registry's OIDC policy, the OIDC token is rejected, and any actions attempted with that token will fail. If the claims do comply, the Buildkite agent and its permitted actions will have read and write access to packages in the registry. Such tokens are also short-lived, to further mitigate the risk of compromising the security of your Buildkite registries should a token accidentally be leaked.

The [Buildkite agent's `oidc` command](/docs/agent/cli/reference/oidc) allows you to request an OIDC token from Buildkite containing claims about the pipeline's current job. These tokens can then be used by a Buildkite registry to determine (through its OIDC policy) whether the organization, pipeline, and any other metadata associated with the pipeline and its job are permitted to publish/upload packages to this registry.
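For example, a job might request a registry-scoped OIDC token like this (a sketch; `my-org` and `my-registry` are placeholder slugs, and the command only succeeds inside a running Buildkite job):

```shell
# Request an OIDC token whose audience is the registry's canonical URL.
# Only works within a Buildkite job, where the agent can mint tokens
# for the current job.
AUDIENCE="https://packages.buildkite.com/my-org/my-registry"

if command -v buildkite-agent > /dev/null; then
  TOKEN=$(buildkite-agent oidc request-token --audience "$AUDIENCE")
  # The token is then presented to the registry, which checks its claims
  # against the registry's OIDC policy before granting access.
  echo "Requested OIDC token of length ${#TOKEN}"
fi
```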
##### OIDC token requirements

All Buildkite registries defined with an OIDC policy require the following claims from an OIDC token (unless indicated as optional), regardless of the OIDC identity provider that issued the token.

| Claim | Value |
| ----- | ----- |
| [`iat` (issued at)](/docs/agent/cli/reference/oidc#iat) | Must be a UNIX timestamp in the past. |
| [`nbf` (not before)](/docs/agent/cli/reference/oidc#nbf) (Optional) | If present, must be a UNIX timestamp in the past. |
| [`exp` (expiration time)](/docs/agent/cli/reference/oidc#exp) | Must be a UNIX timestamp in the future. The OIDC token's lifespan—that is, the `exp` minus the `iat` timestamp values—cannot be greater than 5 minutes. |
| [`aud` (audience)](/docs/agent/cli/reference/oidc#aud) | Must be equal to the registry's canonical URL, which has the format `https://packages.buildkite.com/{org.slug}/{registry.slug}`. |

When generating an OIDC token from:

- A [Buildkite agent](/docs/agent/cli/reference/oidc), the [`--audience` option](/docs/agent/cli/reference/oidc#audience) must explicitly be specified with the required value, whereas the `iat`, `nbf`, and `exp` claims will automatically be included in the token.
- Another OIDC identity provider, ensure that its OIDC tokens contain these required claims. This should be the case by default, but if not, consult the relevant documentation for your OIDC identity provider on how to include these claims in the OIDC tokens it issues.

##### Define an OIDC policy for a registry

You can specify an OIDC policy for your Buildkite registry, which defines the criteria for which OIDC tokens, from the [Buildkite agent](/docs/agent/cli/reference/oidc) or another OIDC identity provider, will be accepted by your registry and authenticate a package publication/upload action from that system.

To define an OIDC policy for a registry:
1.
Select **Package Registries** in the global navigation to access the [**Registries**](https://buildkite.com/organizations/~/packages) page.
1. Select the registry whose OIDC policy needs defining.
1. Select **Settings** > **OIDC Policy** to access the registry's **OIDC Policy** page.
1. In the **Policy** field, specify the policy using the following [Basic OIDC policy format](#define-an-oidc-policy-for-a-registry-basic-oidc-policy-format), or one based on a more [complex example](#define-an-oidc-policy-for-a-registry-complex-oidc-policy-example). Learn more about how an OIDC policy for a registry is constructed in [Policy structure and behavior](#define-an-oidc-policy-for-a-registry-policy-structure-and-behavior).

###### Basic OIDC policy format

The basic format for a Buildkite registry's OIDC policy is:

```yaml
- iss: https://agent.buildkite.com
  scopes:
    - read_packages
  claims:
    organization_slug: organization-slug
    pipeline_slug: pipeline-slug
    build_branch: main
```

where:

- `iss` (the issuer) is `https://agent.buildkite.com`, representing tokens issued by Buildkite.
- the `scopes` field identifies the actions that the token can perform. The only supported scopes are `read_packages`, `write_packages`, and `delete_packages`.
- the `claims` field contains:
    * `organization-slug`, which can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite.
    * `pipeline-slug`, which can be obtained from the end of your Buildkite URL, after accessing **Pipelines** in the global navigation of your organization in Buildkite.
    * `main`, or whichever branch of the repository whose pipeline builds are permitted to publish/upload packages.

However, more [complex OIDC policies](#define-an-oidc-policy-for-a-registry-complex-oidc-policy-example) can be created.
###### Complex OIDC policy example

The following OIDC policy for a Buildkite registry contains two [_statements_](#statements)—one for tokens issued by Buildkite agents and another for tokens issued by GitHub Actions.

```yaml
- iss: https://agent.buildkite.com
  scopes:
    - read_packages
    - write_packages
  claims:
    organization_slug:
      equals: your-org
    pipeline_slug:
      in:
        - one-pipeline
        - another-pipeline
    build_branch:
      matches:
        - main
        - feature/*
      not_equals: feature/not-this-one
- iss: https://token.actions.githubusercontent.com
  scopes:
    - delete_packages
  claims:
    repository:
      matches: your-org/*
    actor:
      in:
        - deploy-bot
        - revert-bot
```

The first statement allows OIDC tokens representing a pipeline's job being built by a Buildkite agent, but only when all of the following are true for the tokens' claims:

- The [organization slug](/docs/agent/cli/reference/oidc#organization-slug) is `your-org`.
- The [pipeline slug](/docs/agent/cli/reference/oidc#pipeline-slug) is either `one-pipeline` or `another-pipeline`.
- The [build branch](/docs/agent/cli/reference/oidc#build-branch) is either `main` or matches a `feature/*` branch, but is not `feature/not-this-one`.

Tokens allowed by this statement can read and write packages in the registry.

The second statement allows OIDC tokens representing a GitHub Actions workflow, but only when all of the following are true for the tokens' claims:

- The repository matches `your-org/*`.
- The actor is either `deploy-bot` or `revert-bot`.

Tokens allowed by this statement can only delete packages in the registry.

###### Policy structure and behavior

OIDC policy [_statements_](#statements) in Buildkite Package Registries are defined as a YAML- or JSON-formatted list; each statement includes a _token issuer_ from an OIDC identity provider, along with a map of [_claim rules_](#claim-rules).
If an OIDC token's claims match both the token issuer and _all_ claim rules defined by any statement within a registry's OIDC policy, then the token is accepted and the OIDC identity provider that issued the token is granted access to the registry. If no statement in the OIDC policy matches, the token is rejected, and no registry access is granted. When multiple statements match a token's claims, the token is accepted by the first matching statement in the policy, and no further statements are evaluated. This affects the use of scopes, as only the scopes defined in the first matching statement are granted to the token. When using YAML to define an OIDC policy, only simple YAML syntax is accepted—that is, YAML containing only scalar values, maps, and lists. Complex YAML syntax and features, such as anchors, aliases, and tagged values, are not supported. ###### Statements A _statement_ consists of a list of [_claim rules_](#claim-rules) for a particular _token issuer_ within an OIDC policy, as well as the _API scopes_ that the token is allowed to access. Each statement in the policy must contain: - An `iss` field, which is used to identify the token issuer. Statements will only match OIDC tokens whose `iss` claim matches the value of this field. - A `scopes` field, which is a list of API scopes that a token is granted. Currently, the only scopes supported by Registry OIDC policies are `read_packages`, `write_packages`, and `delete_packages`. If a token's claims match a statement, the token is granted access to the registry with the scopes defined in that statement. - A `claims` field, which is a map of [claim rules](#claim-rules). Currently, only OIDC tokens from the following token issuers are supported. 
| Token issuer name | The token issuer (`iss`) value | Relevant documentation link | | ----------------- | ------------------------------ | --------------------------- | | Buildkite | `https://agent.buildkite.com` | [Buildkite agent `oidc` command](/docs/agent/cli/reference/oidc) | | GitHub Actions | `https://token.actions.githubusercontent.com` | [GitHub Actions OIDC Tokens](https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/about-security-hardening-with-openid-connect) | | CircleCI | `https://oidc.circleci.com/org/$ORG` where `$ORG` is your organization name | [CircleCI OIDC Tokens](https://circleci.com/docs/openid-connect-tokens) | If you'd like to use OIDC tokens from a different token issuer or OIDC identity provider with Buildkite Package Registries, please contact [support](https://buildkite.com/about/contact/). ###### Claim rules A [_statement_](#statements) contains a `claims` field, which in turn contains a map of _claim rules_, where the rule's key is the name of the claim being verified, and the rule's value is the actual rule used to verify this claim. Each rule is a map of [_matchers_](#claim-rule-matchers), which are used to match a claim value in an OIDC token. If an OIDC token is missing a claim required by one of a statement's claim rules, that statement does not match the token; if no other statement in the policy fully matches the token's claims, the token is rejected. When a claim rule contains multiple matchers—such as the `build_branch` claim rule in the [complex example](#define-an-oidc-policy-for-a-registry-complex-oidc-policy-example) above—_all_ of the rule's matchers must match a claim in the token for the token to be granted registry access. In the `build_branch` example above, this means that the token must have a `build_branch` claim whose value is either `main` or begins with `feature/`, but whose value is not `feature/not-this-one`. 
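The matching behavior described above—first matching statement wins, all matchers within a claim rule must match, and a missing claim rejects the statement—can be sketched in Python. This is an illustrative model only, not Buildkite's implementation; the function names and the `policy`/`token` values below are hypothetical:

```python
from fnmatch import fnmatch


def matcher_ok(matcher: str, arg, value) -> bool:
    """Evaluate a single matcher against one claim value."""
    if matcher == "equals":
        return value == arg
    if matcher == "not_equals":
        return value != arg
    if matcher == "in":
        return value in arg
    if matcher == "not_in":
        return value not in arg
    if matcher == "matches":
        # Globs are applied only to string claim values; ignored otherwise.
        if not isinstance(value, str):
            return True
        globs = arg if isinstance(arg, list) else [arg]
        return any(fnmatch(value, g) for g in globs)
    raise ValueError(f"unknown matcher: {matcher}")


def evaluate(policy: list, token: dict) -> list:
    """Return the scopes of the first statement whose issuer and claim
    rules all match the token; an empty list means the token is rejected."""
    for stmt in policy:
        if token.get("iss") != stmt["iss"]:
            continue
        ok = True
        for claim, rule in stmt.get("claims", {}).items():
            if claim not in token:
                ok = False  # a required claim is missing: statement fails
                break
            if not isinstance(rule, dict):
                rule = {"equals": rule}  # scalar shorthand means `equals`
            # ALL matchers within a claim rule must match.
            if not all(matcher_ok(m, a, token[claim]) for m, a in rule.items()):
                ok = False
                break
        if ok:
            return stmt["scopes"]  # first matching statement wins
    return []


policy = [{
    "iss": "https://agent.buildkite.com",
    "scopes": ["read_packages", "write_packages"],
    "claims": {
        "organization_slug": "your-org",  # scalar shorthand for equals
        "build_branch": {
            "matches": ["main", "feature/*"],
            "not_equals": "feature/not-this-one",
        },
    },
}]

token = {"iss": "https://agent.buildkite.com",
         "organization_slug": "your-org",
         "build_branch": "feature/new-thing"}
print(evaluate(policy, token))  # grants read/write scopes
token["build_branch"] = "feature/not-this-one"
print(evaluate(policy, token))  # rejected: []
```

Note that `fnmatch` is used here only as a stand-in for Buildkite's glob matching of `*` and `?`.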
Be aware that this means some combinations of matchers used in a claim rule may never match an OIDC token's claims. For example, the following OIDC policy statement will always reject a token, since the token's `build_branch` claim cannot be both equal to `main` and not equal to `main` at the same time: ```yaml - iss: https://agent.buildkite.com scopes: - read_packages claims: build_branch: equals: main not_equals: main ``` ###### Claim rule matchers The following _matchers_ can be used within a [_claim rule_](#claim-rules). | Matcher | Argument type | Description | | ------- | ------------- | ----------- | | `equals` | Scalar | The claim value must be exactly equal to the argument. | | `not_equals` | Scalar | The claim value must not be exactly equal to the argument. | | `in` | List of scalars | The claim value must be in the list of arguments. | | `not_in` | List of scalars | The claim value must not be in the list of arguments. | | `matches` | List of glob strings OR a single glob string | The claim value must match at least one of the globs provided. Note that this matcher is only applied when the claim value is a string, and is ignored otherwise. | Argument type details: - A scalar is a single value, which must be a String, Number (float or integer), Boolean, or Null. - A glob string is a string that may contain wildcards, such as `*` or `?`, which match zero or more characters, or a single character respectively. Glob strings are _not_ regular expressions, and do not support the full range of features that regular expressions do. As a special case, if a claim rule in its entirety is a scalar, it is treated as if it were a rule with the `equals` matcher. 
This means that the following two claim rules are equivalent: ```yaml organization_slug: your-org # is equivalent to organization_slug: equals: your-org ``` ##### Configure a Buildkite pipeline to authenticate to a registry Configuring a Buildkite pipeline [`command` step](/docs/pipelines/configure/step-types/command-step) to request an OIDC token from Buildkite to interact with your Buildkite registry [configured with an OIDC policy](#define-an-oidc-policy-for-a-registry) is a two-part process. ###### Part 1: Request an OIDC token from Buildkite To do this, use the following [`buildkite-agent oidc` command](/docs/agent/cli/reference/oidc): ```bash buildkite-agent oidc request-token --audience "https://packages.buildkite.com/{org.slug}/{registry.slug}" --lifetime 300 ``` where: - `--audience` is the target system that consumes this OIDC token. For Buildkite Package Registries, this value must be based on the URL `https://packages.buildkite.com/{org.slug}/{registry.slug}`. - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page. - `--lifetime` is the time (in seconds) that the OIDC token is valid for. By default, this value must be less than `300`. 
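The token returned by `buildkite-agent oidc request-token` is a JWT, whose middle segment is a base64url-encoded JSON payload carrying the claims that a registry's OIDC policy evaluates. As an illustrative sketch using Python's standard library—the token below is fabricated and unsigned, not a real Buildkite token, and signature verification is deliberately omitted:

```python
import base64
import json


def decode_claims(jwt: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))


# Fabricated token for illustration: header.payload.signature
claims = {"iss": "https://agent.buildkite.com",
          "organization_slug": "my-organization",
          "pipeline_slug": "my-pipeline",
          "build_branch": "main"}
fake_payload = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).decode().rstrip("=")
fake_jwt = f"eyJhbGciOiJSUzI1NiJ9.{fake_payload}.signature"

print(decode_claims(fake_jwt)["pipeline_slug"])  # my-pipeline
```

Inspecting a token this way can help when debugging why a policy rejects it.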
###### Part 2: Authenticate the registry with the OIDC token To do this (using Docker as an example), authenticate the registry with the OIDC token obtained in [part 1](#configure-a-buildkite-pipeline-to-authenticate-to-a-registry-part-1-request-an-oidc-token-from-buildkite) by piping the output through to the `docker login` command: ```bash docker login packages.buildkite.com/{org.slug}/{registry.slug} --username buildkite --password-stdin ``` where: - `{org.slug}` and `{registry.slug}` are the same as the values used in the [`buildkite-agent oidc request-token` command](#configure-a-buildkite-pipeline-to-authenticate-to-a-registry-part-1-request-an-oidc-token-from-buildkite). - `--username` always has the value `buildkite`. As a result, the full [`command` step](/docs/pipelines/configure/step-types/command-step) would look like: ```bash buildkite-agent oidc request-token --audience "https://packages.buildkite.com/{org.slug}/{registry.slug}" --lifetime 300 | docker login packages.buildkite.com/{org.slug}/{registry.slug} --username buildkite --password-stdin ``` For a Buildkite organization with a slug `my-organization` and a registry slug `my-registry`, this full command would look like: ```bash buildkite-agent oidc request-token --audience "https://packages.buildkite.com/my-organization/my-registry" --lifetime 300 | docker login packages.buildkite.com/my-organization/my-registry --username buildkite --password-stdin ``` ###### Example pipeline The following example Buildkite pipeline YAML snippet demonstrates how to push Docker images to a Buildkite registry using OIDC token authentication: ```yml steps: - key: "docker" label: "\:docker\: Build, Login & Push" commands: - echo "Building Docker image" - docker build --tag packages.buildkite.com/my-organization/my-registry/my-image:latest . 
- echo "Logging into Buildkite Package Registry using OIDC" - buildkite-agent oidc request-token --audience "https://packages.buildkite.com/my-organization/my-registry" --lifetime 300 | docker login packages.buildkite.com/my-organization/my-registry --username buildkite --password-stdin - echo "Pushing Docker image to Buildkite Package Registry" - docker push packages.buildkite.com/my-organization/my-registry/my-image:latest ``` --- ### Permissions URL: https://buildkite.com/docs/package-registries/security/permissions #### User, team, and registry permissions The [_teams_ feature](#manage-teams-and-permissions) allows you to apply access permissions and functionality controls for one or more groups of users (that is, _teams_) on each registry throughout your organization. Enterprise plan customers can configure registry permissions for all users across their Buildkite organization through the **Security** page. Learn more about this feature in [Manage organization security for registries](#manage-organization-security-for-registries). ##### Manage teams and permissions To manage teams across the Buildkite Package Registries application, a _Buildkite organization administrator_ first needs to enable this feature across their organization. Learn more about how to do this in the [Manage teams and permissions in the Platform documentation](/docs/platform/team-management/permissions#manage-teams-and-permissions). Once the _teams_ feature is enabled, you can see the teams that you're a member of from the **Users** page, which: - As a Buildkite organization administrator, you can access by selecting **Settings** in the global navigation > [**Users**](https://buildkite.com/organizations/~/users/). - As any other user, you can access by selecting **Teams** in the global navigation > [**Users**](https://buildkite.com/organizations/~/users/). 
###### Organization-level permissions Learn more about what a _Buildkite organization administrator_ can do in the [Organization-level permissions in the Platform documentation](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions). As an organization administrator, you can access the [**Organization Settings** page](https://buildkite.com/organizations/~/settings) by selecting **Settings** in the global navigation, where you can do the following: - Add new teams or edit existing ones in the [**Team** section](https://buildkite.com/organizations/~/teams). * After selecting a team, you can view and administer the member-, [pipeline-](/docs/pipelines/security/permissions#manage-teams-and-permissions-pipeline-level-permissions), [test suite-](/docs/test-engine/permissions#manage-teams-and-permissions-test-suite-level-permissions), [registry-](#manage-teams-and-permissions-registry-level-permissions) and [team-](/docs/platform/team-management/permissions#manage-teams-and-permissions-team-level-permissions)level settings for that team. - [Enable Buildkite Package Registries](#enabling-buildkite-packages) for your Buildkite organization. - Configure [private storage](/docs/package-registries/registries/private-storage-link) for your registries in Buildkite Package Registries. ###### Enabling Buildkite Package Registries Customers on legacy Buildkite plans may need to enable Package Registries to gain access to this product. To do this: 1. As a [Buildkite organization administrator](#manage-teams-and-permissions-organization-level-permissions), access the [**Organization Settings** page](https://buildkite.com/organizations/~/settings) by selecting **Settings** in the global navigation. 1. In the **Packages** section, select **Enable** to open the **Enable Packages** page. 1. Select the **Enable Buildkite Packages** button, then **Enable Buildkite Packages** in the **Ready to enable Buildkite Packages** confirmation dialog. 
> 📘 > Once Buildkite Package Registries is enabled, the **Enable** link on the **Organization Settings** page changes to **Enabled** and Buildkite Package Registries can only be disabled by contacting support at support@buildkite.com. ###### Team-level permissions Learn more about what _team members_ are and what _team maintainers_ can do in the [Team-level permissions in the Platform documentation](/docs/platform/team-management/permissions#manage-teams-and-permissions-team-level-permissions). ###### Registry-level permissions When the [teams feature is enabled](#manage-teams-and-permissions), any user can create a new registry, as long as this user is a member of at least one team within the Buildkite organization, and this team has the **Create registries** [team member permission](/docs/platform/team-management/permissions#manage-teams-and-permissions-team-level-permissions). When you create a new registry in Buildkite: - You are automatically granted the **Read & Write** permission to this registry. - Any members of teams to which you provide access to this registry are also granted the **Read & Write** permission. The **Full Access** permission on a registry allows you to: - View and download packages, images, or modules from the registry. - Publish packages, images, or modules to the registry. - Edit the registry's settings. - Delete the registry. - Provide access to other users, by adding the registry to other teams that you are a [team maintainer](#manage-teams-and-permissions-team-level-permissions) on. Any user with **Full Access** permissions to a registry can change its permission to either: - **Read & Write**, which allows you to publish packages, images, or modules to the registry, as well as view and download these items from the registry, but _not_: * Edit the registry's settings. * Delete the registry. * Provide access to other users. 
- **Read Only**, which allows you to view and download packages, images, or modules from the registry only, but _not_: * Publish such items to the registry. * Edit the registry's settings. * Delete the registry. * Provide access to other users. A user who is a member of at least one team with **Full Access** permissions to a registry can change the permissions on this registry. However, once this user loses **Full Access** through their last team with access to this registry, they also lose the ability to change the registry's permissions. Another user with **Full Access** to this registry or a [Buildkite organization administrator](#manage-teams-and-permissions-organization-level-permissions) is required to change the registry's permissions back to **Full Access** again. ##### Manage organization security for registries Buildkite customers on the [Enterprise plan](https://buildkite.com/pricing/) can configure registry action permissions for all users across their Buildkite organization. These features can be used either with or without the [teams feature enabled](#manage-teams-and-permissions). These user-level permissions and security features are managed by _Buildkite organization administrators_. To access this feature: 1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. Select [**Security** > **Packages** tab](https://buildkite.com/organizations/~/security/packages) to access your organization's security for **Packages** page. From this page, you can configure the following permissions for all users across your Buildkite organization: - **Create registries**—if the [teams feature](#manage-teams-and-permissions) is enabled, then this permission is controlled at the [team level](#manage-teams-and-permissions-team-level-permissions) and therefore, this option will be unavailable on this page. 
- **Delete registries** - **Delete packages** ##### Manage an agent's access to registries To configure the rules by which a Buildkite agent can access a registry, you'll need to configure the [OpenID Connect (OIDC) policy](/docs/package-registries/security/oidc) within the registry to allow the Buildkite agent to generate an OIDC token (using the [`buildkite-agent oidc request-token`](/docs/agent/cli/reference/oidc#request-oidc-token) command), which the agent can use to authenticate to this registry. --- ### SLSA provenance URL: https://buildkite.com/docs/package-registries/security/slsa-provenance #### Generate and store SLSA provenance Supply-chain Levels for Software Artifacts ([SLSA](https://slsa.dev/spec/), pronounced like "salsa") is an industry-consensus specification for describing and gradually improving artifact supply chain security. When using Buildkite [Pipelines](/docs/pipelines) with [Package Registries](/docs/package-registries), you can publish software packages and artifacts to registries with [SLSA provenance](https://slsa.dev/provenance) in only four steps. > 📘 Enterprise plan feature > The SLSA provenance feature is only available to Buildkite customers on [Enterprise](https://buildkite.com/pricing) plans. If you don't have access to this feature, please contact support@buildkite.com to get it activated. This guide uses the following Buildkite examples to demonstrate this process: - Buildkite organization: `nova-corp` - Pipeline: `ruby-logger-gem`, which builds a RubyGem package - Registry: `ruby-gems` to store the RubyGem Although this guide uses a RubyGem as its example, this process works for [all supported package ecosystems](/docs/package-registries/ecosystems) (with the exception of OCI-based packages like [OCI (Docker)](/docs/package-registries/ecosystems/oci) and [Helm OCI](/docs/package-registries/ecosystems/helm-oci)). 
##### Step 1: Configure steps to generate SLSA provenance The [Generate Provenance Attestation Buildkite Plugin](https://github.com/buildkite-plugins/generate-provenance-attestation-buildkite-plugin) generates a SLSA provenance attestation for artifacts that have been uploaded to artifact storage in a pipeline step. First, configure a step that builds a RubyGem package and uploads it to artifact storage. ```yaml steps: - label: "Build Gem" command: "gem build logger.gemspec" artifact_paths: "logger-*.gem" ``` A SLSA provenance attestation can be generated by adding this plugin to your pipeline step that builds the package or artifact: ```yaml steps: - label: "Build Gem" command: "gem build logger.gemspec" artifact_paths: "logger-*.gem" plugins: - generate-provenance-attestation#v1.1.0: artifacts: "logger-*.gem" attestation_name: "gem-attestation.json" ``` In the example above, a SLSA provenance attestation will be generated for artifacts matching `logger-*.gem` and uploaded to artifact storage as `gem-attestation.json`. Once this step is complete, `gem-attestation.json` will be available to subsequent steps in the pipeline. See [an example of a SLSA provenance statement](https://github.com/buildkite-plugins/generate-provenance-attestation-buildkite-plugin/blob/d9f2ff4d6b745f17cc55b6b91778a0e1a7d45824/examples/statement.json) that this plugin generates. This SLSA provenance statement is then serialized and uploaded as [a dead simple signing envelope (DSSE)](https://github.com/buildkite-plugins/generate-provenance-attestation-buildkite-plugin/blob/d9f2ff4d6b745f17cc55b6b91778a0e1a7d45824/examples/envelope.json). This `envelope.json` DSSE file shows an example of the `gem-attestation.json` file format. Learn more about DSSE in [DSSE Envelope](https://github.com/secure-systems-lab/dsse/blob/master/envelope.md). 
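A DSSE envelope is a small JSON wrapper: a `payloadType`, a base64-encoded `payload` (here, the serialized provenance statement), and a list of `signatures`. The following Python sketch shows only the general shape—the statement contents and the empty signature are fabricated placeholders, not output from the plugin:

```python
import base64
import json

# Fabricated in-toto provenance statement, for illustration only.
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "predicateType": "https://slsa.dev/provenance/v1",
    "subject": [{"name": "logger-1.0.0.gem",
                 "digest": {"sha256": "0" * 64}}],  # placeholder digest
}

# DSSE envelope: the statement is serialized and base64-encoded.
envelope = {
    "payloadType": "application/vnd.in-toto+json",
    "payload": base64.b64encode(json.dumps(statement).encode()).decode(),
    "signatures": [{"keyid": "", "sig": ""}],  # signing omitted in this sketch
}

# A consumer decodes the payload to recover the original statement.
decoded = json.loads(base64.b64decode(envelope["payload"]))
print(decoded["predicateType"])  # https://slsa.dev/provenance/v1
```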
##### Step 2: Configure steps to publish a package with SLSA provenance The [Publish to Packages](https://github.com/buildkite-plugins/publish-to-packages-buildkite-plugin/) plugin allows you to quickly and easily publish a package to Package Registries. When the `attestations` attribute is set, the package will be published from artifact storage with the specified attestations. ```yaml steps: - label: "Publish Gem" plugins: - publish-to-packages#v2.2.0: artifacts: "logger-*.gem" registry: "nova-corp/ruby-gems" attestations: - "gem-attestation.json" ``` In the example above, artifacts matching `logger-*.gem` will be published to the `nova-corp/ruby-gems` registry. Additionally, they will be published with the `gem-attestation.json` attestation. ##### Step 3: Define an OIDC policy for the registry The Publish to Packages plugin authenticates with Package Registries using an [Agent OIDC token](/docs/agent/cli/reference/oidc). Therefore, an [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry) must be configured on the [Ruby registry](/docs/package-registries/ecosystems/ruby). ```yaml - iss: "https://agent.buildkite.com" claims: organization_slug: "nova-corp" pipeline_slug: "ruby-logger-gem" ``` In the example above, the policy allows the Buildkite pipeline with slug `ruby-logger-gem`, configured in the Nova Corp Buildkite organization (with slug `nova-corp`), to publish packages to the Ruby registry named **ruby-gems**. ##### Step 4: Complete the pipeline All the steps above come together in a simple pipeline that builds and publishes a RubyGem package with SLSA provenance. A [step dependency](/docs/pipelines/configure/depends-on#defining-explicit-dependencies) ensures that the **Publish Gem** step does not start until the **Build Gem** step has finished successfully. 
```yaml steps: - label: "Build Gem" key: "build-gem" command: "gem build logger.gemspec" artifact_paths: "logger-*.gem" plugins: - generate-provenance-attestation#v1.1.0: artifacts: "logger-*.gem" attestation_name: "gem-attestation.json" - label: "Publish Gem" depends_on: "build-gem" plugins: - publish-to-packages#v2.2.0: artifacts: "logger-*.gem" registry: "nova-corp/ruby-gems" attestations: - "gem-attestation.json" ``` Once this build has run, the SLSA provenance will be visible under the **Attestations** tab of the package's details page. ##### Summary - SLSA provenance can be generated and stored in Buildkite with the help of the [Generate Provenance Attestation](https://github.com/buildkite-plugins/generate-provenance-attestation-buildkite-plugin) and [Publish to Packages](https://github.com/buildkite-plugins/publish-to-packages-buildkite-plugin/) plugins. - Artifacts that are built and published in this way satisfy [SLSA Build Level 1](https://slsa.dev/spec/v1.0/levels#build-l1) requirements. 
--- ### Overview URL: https://buildkite.com/docs/package-registries/ecosystems #### Package ecosystems overview Buildkite Package Registries supports the following language and package ecosystems: - [Alpine (apk)](/docs/package-registries/ecosystems/alpine) - [OCI (Docker)](/docs/package-registries/ecosystems/oci) images - [Debian/Ubuntu (deb)](/docs/package-registries/ecosystems/debian) - [Files (generic)](/docs/package-registries/ecosystems/files) - Helm ([OCI](/docs/package-registries/ecosystems/helm-oci) or [Standard](/docs/package-registries/ecosystems/helm)) charts - [Hugging Face](/docs/package-registries/ecosystems/hugging-face) models - Java ([Maven](/docs/package-registries/ecosystems/maven) or Gradle using [Kotlin](/docs/package-registries/ecosystems/gradle-kotlin) or [Groovy](/docs/package-registries/ecosystems/gradle-groovy)) - [JavaScript (npm)](/docs/package-registries/ecosystems/javascript) - [NuGet](/docs/package-registries/ecosystems/nuget) - [Python (PyPI)](/docs/package-registries/ecosystems/python) - [Red Hat (RPM)](/docs/package-registries/ecosystems/red-hat) - [Ruby (RubyGems)](/docs/package-registries/ecosystems/ruby) - [Terraform](/docs/package-registries/ecosystems/terraform) modules --- ### Alpine URL: https://buildkite.com/docs/package-registries/ecosystems/alpine #### Alpine Buildkite Package Registries provides registry support for Alpine-based (apk) packages for Alpine Linux operating systems. Once your Alpine source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload packages (generated from your application's build) to this registry. ##### Publish a package You can use two approaches to publish an apk package to your Alpine source registry—[`curl`](#publish-a-package-using-curl) or the [Buildkite CLI](#publish-a-package-using-the-buildkite-cli). 
###### Using curl The **Publish Instructions** tab of your Alpine source registry includes a `curl` command you can use to upload a package to this registry. To view and copy this `curl` command: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Alpine source registry on this page. 1. Select the **Publish Instructions** tab and on the resulting page, use the copy icon at the top-right of the relevant code box to copy this `curl` command and run it (with the appropriate values) to publish the package to this source registry. This command provides: - The specific URL to publish a package to your specific Alpine source registry in Buildkite. - A temporary API access token to publish packages to this source registry. - The Alpine package file to be published. You can also create this command yourself using the following `curl` command (which you'll need to modify as required before submitting): ```bash curl -X POST https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/packages \ -H "Authorization: Bearer $REGISTRY_WRITE_TOKEN" \ -F "file=@path/to/alpine/package.apk" ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your Alpine source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Alpine source registry from the **Registries** page. - `$REGISTRY_WRITE_TOKEN` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload packages to your Alpine source registry. 
Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization. Alternatively, you can use an OIDC token that meets your Alpine source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). - `path/to/alpine/package.apk` is the full path to the apk package, including the file's name. If the file is located in the same directory that this command is running from, then no path is required. For example, to upload the file `my-alpine-package_0.1.1_r0.apk` from the current directory to the **My Alpine packages** source registry in the **My organization** Buildkite organization, run the `curl` command: ```bash curl -X POST https://api.buildkite.com/v2/packages/organizations/my-organization/registries/my-alpine-packages/packages \ -H "Authorization: Bearer $REPLACE_WITH_YOUR_REGISTRY_WRITE_TOKEN" \ -F "file=@my-alpine-package_0.1.1_r0.apk" ``` ###### Using the Buildkite CLI The following [Buildkite CLI](/docs/platform/cli) command can also be used to publish an apk package to your Alpine source registry from your local environment, once it has been [installed](/docs/platform/cli/installation) and [configured with an appropriate token](#token-usage-with-the-buildkite-cli): ```bash bk package push registry-slug path/to/alpine/package.apk ``` where: - `registry-slug` is the slug of your Alpine source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Alpine source registry from the **Registries** page. - `path/to/alpine/package.apk` is the full path to the apk package, including the file's name. 
If the file is located in the same directory that this command is running from, then no path is required. ###### Token usage with the Buildkite CLI When [configuring the Buildkite CLI with an API access token](/docs/platform/cli/configuration), ensure it has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish files to any source registry your user account has access to within your Buildkite organization. You can also override this configured token by passing in a different token value using the `BUILDKITE_API_TOKEN` environment variable when running the `bk` command: ```bash BUILDKITE_API_TOKEN=$another_token_value bk package push organization-slug/registry-slug ./path/to/my/file.ext ``` If you have [installed the Buildkite CLI](/docs/platform/cli/installation) to your [self-hosted agents](/docs/agent/self-hosted/install), you can also do the following: - Use the `bk` command from within your Buildkite pipelines. - Using the `BUILDKITE_API_TOKEN` environment variable, pass in a Buildkite OIDC token value generated from your agents that meets your source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). ##### Access a package's details An Alpine (apk) package's details can be accessed from its source registry through the **Releases** (tab) section of your Alpine source registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Alpine source registry on this page. 1. On your Alpine source registry page, select the package to display its details page. The package's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-packages-details-installing-a-package). 
- **Contents** (tab, where available): a list of directories and files contained within the package. - **Details** (tab): a list of checksum values for this package—MD5, SHA1, SHA256, and SHA512. - **About this version**: a brief (metadata) description about the package. - **Details**: details about: * the name of the package (typically the file name excluding any version details and extension). * the package version. * the source registry the package is located in. * the package's visibility (based on its registry's visibility)—whether the package is **Private** and requires authentication to access, or is publicly accessible. * the distribution name / version. * additional optional metadata contained within the package, such as a homepage, licenses, etc. - **Pushed**: the date when the last package was uploaded to the source registry. - **Total files**: the total number of files (and directories) within the package. - **Dependencies**: the number of dependency packages required by this package. - **Package size**: the storage size (in bytes) of this package. - **Downloads**: the number of times this package has been downloaded. ###### Downloading a package An Alpine (apk) package can be downloaded from the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Select **Download**. ###### Installing a package An Alpine package can be installed using code snippet details provided on the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Ensure the **Installation** > **Instructions** section is displayed. 1. For each required command in the relevant code snippets, copy the relevant code snippet, paste it into your terminal, and run it. 
The following descriptions explain what each code snippet does and, where applicable, its format: ###### Registry configuration **Step 1**: Configure your Alpine registry as the source for your Alpine (apk) packages: ```bash echo "https://buildkite:{registry.read.token}@packages.buildkite.com/{org.slug}/{registry.slug}/alpine_any/alpine_any/main" >> /etc/apk/repositories ``` where: - `{registry.read.token}` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download packages from your Alpine registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download packages from any registry your user account has access to within your Buildkite organization. This URL component, along with its surrounding `buildkite:` and `@` components, is not required for registries that are publicly accessible. - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page. **Step 2**: Install the registry signing key: ```bash wget -O /etc/apk/keys/{org.uuid}_{registry.uuid}.rsa.pub "https://buildkite:{registry.read.token}@packages.buildkite.com/{org.slug}/{registry.slug}/rsakey" ``` where: - `{org.uuid}` is the UUID of your Buildkite organization. This value can be obtained from the **Instructions** section of this Alpine package's details page. Alternatively, you can also obtain this value: * From your organization's **Pipeline Settings** page. To do this: 1.
Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. Select **Pipelines** > **Settings** to access the [**Pipeline Settings**](https://buildkite.com/organizations/~/pipeline-settings) page. 1. At the end of the page, copy the value from the **Organization UUID** field. * By running the `getCurrentUsersOrgs` [GraphQL](/docs/apis/graphql-api) query to obtain the relevant organization UUID value in the response for the current user's accessible organizations: ```graphql query getCurrentUsersOrgs { viewer { organizations { edges { node { name id uuid } } } } } ``` - `{registry.uuid}` is the UUID of your Alpine registry. Again, this value can be obtained from the **Instructions** section of this Alpine package's details page. Alternatively, you can also obtain this value: * From your registry's **Settings** page. To do this: 1. Select **Package Registries** in the global navigation to access the [**Registries**](https://buildkite.com/organizations/~/packages) page. 1. Select your Alpine registry on this page. 1. Select **Settings** to open the registry's **Settings** page. 1. Copy the **UUID** shown in the **API Integration** section of this page, which is this `{registry.uuid}` value. * By running the `getOrgRegistries` GraphQL query to obtain the registry UUID values of your `{org.slug}` in the response: ```graphql query getOrgRegistries { organization(slug: "{org.slug}") { registries(first: 20) { edges { node { name id uuid } } } } } ``` - `buildkite:{registry.read.token}@`: while these values are the same as those in the previous step for configuring your Alpine source registry, this component is not required for registries that are publicly accessible. **Step 3**: Retrieve the latest apk indices: ```bash apk update ``` ###### Package installation Use `apk` to install the package: ```bash apk add package-name==version-number ``` where: - `package-name` is the name of your package.
- `version-number` is the version number of this package. --- ### Debian URL: https://buildkite.com/docs/package-registries/ecosystems/debian #### Debian Buildkite Package Registries provides registry support for Debian-based (deb) packages for Debian and Ubuntu operating system variants. Once your Debian source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload packages (generated from your application's build) to this registry. ##### Publish a package You can use two approaches to publish a deb package to your Debian source registry—[`curl`](#publish-a-package-using-curl) or the [Buildkite CLI](#publish-a-package-using-the-buildkite-cli). ###### Using curl The **Publish Instructions** tab of your Debian source registry includes a `curl` command you can use to upload a package to this registry. To view and copy this `curl` command: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Debian source registry on this page. 1. Select the **Publish Instructions** tab and on the resulting page, use the copy icon at the top-right of the relevant code box to copy this `curl` command and run it (with the appropriate values) to publish the package to this source registry. This command provides: - The specific URL to publish a package to your specific Debian source registry in Buildkite. - A temporary API access token to publish packages to this source registry. - The Debian package file to be published.
You can also create this command yourself using the following `curl` command (which you'll need to modify as required before submitting): ```bash curl -X POST https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/packages \ -H "Authorization: Bearer $REGISTRY_WRITE_TOKEN" \ -F "file=@path/to/debian/package.deb" ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your Debian source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Debian source registry from the **Registries** page. - `$REGISTRY_WRITE_TOKEN` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload packages to your Debian source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization. Alternatively, you can use an OIDC token that meets your Debian source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). - `path/to/debian/package.deb` is the full path to the deb package, including the file's name. If the file is located in the same directory that this command is running from, then no path is required. 
For example, to upload the file `my-deb-package_1.0-2_amd64.deb` from the current directory to the **My Debian packages** source registry in the **My organization** Buildkite organization, run the `curl` command: ```bash curl -X POST https://api.buildkite.com/v2/packages/organizations/my-organization/registries/my-debian-packages/packages \ -H "Authorization: Bearer $REPLACE_WITH_YOUR_REGISTRY_WRITE_TOKEN" \ -F "file=@my-deb-package_1.0-2_amd64.deb" ``` ###### Using the Buildkite CLI The following [Buildkite CLI](/docs/platform/cli) command can also be used to publish a deb package to your Debian source registry from your local environment, once it has been [installed](/docs/platform/cli/installation) and [configured with an appropriate token](#token-usage-with-the-buildkite-cli): ```bash bk package push registry-slug path/to/debian/package.deb ``` where: - `registry-slug` is the slug of your Debian source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Debian source registry from the **Registries** page. - `path/to/debian/package.deb` is the full path to the deb package, including the file's name. If the file is located in the same directory that this command is running from, then no path is required. ###### Token usage with the Buildkite CLI When [configuring the Buildkite CLI with an API access token](/docs/platform/cli/configuration), ensure it has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish files to any source registry your user account has access to within your Buildkite organization. 
You can also override this configured token by passing in a different token value using the `BUILDKITE_API_TOKEN` environment variable when running the `bk` command: ```bash BUILDKITE_API_TOKEN=$another_token_value bk package push organization-slug/registry-slug ./path/to/my/file.ext ``` If you have [installed the Buildkite CLI](/docs/platform/cli/installation) to your [self-hosted agents](/docs/agent/self-hosted/install), you can also do the following: - Use the `bk` command from within your Buildkite pipelines. - Using the `BUILDKITE_API_TOKEN` environment variable, pass in a Buildkite OIDC token value generated from your agents that meets your source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). ##### Access a package's details A Debian (deb) package's details can be accessed from this registry through the **Releases** (tab) section of your Debian source registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Debian source registry on this page. 1. On your Debian source registry page, select the package to display its details page. The package's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-packages-details-installing-a-package). - **Contents** (tab, where available): a list of directories and files contained within the package. - **Details** (tab): a list of checksum values for this package—MD5, SHA1, SHA256, and SHA512. - **About this version**: a brief (metadata) description about the package. - **Details**: details about: * the name of the package (typically the file name excluding any version details and extension). * the package version. * the source registry the package is located in. 
* the package's visibility (based on its registry's visibility)—whether the package is **Private** and requires authentication to access, or is publicly accessible. * the distribution name / version. * additional optional metadata contained within the package, such as a homepage, licenses, etc. - **Pushed**: the date when the last package was uploaded to the source registry. - **Total files**: the total number of files (and directories) within the package. - **Dependencies**: the number of dependency packages required by this package. - **Package size**: the storage size (in bytes) of this package. - **Downloads**: the number of times this package has been downloaded. ###### Downloading a package A Debian (deb) package can be downloaded from the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Select **Download**. ###### Installing a package A Debian package can be installed using code snippet details provided on the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Ensure the **Installation** > **Instructions** section is displayed. 1. For each required command set in the relevant code snippets, copy the relevant code snippet, paste it into your terminal, and run it. 
The following descriptions explain what each code snippet does and, where applicable, its format: ###### Registry configuration Update the `apt` database and ensure `curl` and `gpg` are installed: ```bash apt update && apt install curl gpg -y ``` Install the registry signing key: ```bash curl -fsSL "https://buildkite:{registry.read.token}@packages.buildkite.com/{org.slug}/{registry.slug}/gpgkey" | gpg --dearmor -o /etc/apt/keyrings/{org.slug}_{registry.slug}-archive-keyring.gpg ``` where: - `{registry.read.token}` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download packages from your Debian registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download packages from any registry your user account has access to within your Buildkite organization. This URL component, along with its surrounding `buildkite:` and `@` components, is not required for registries that are publicly accessible. - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page.
If your Debian source registry is _private_ (the default configuration for source registries), stash the private registry credentials into `apt`'s `auth.conf.d` directory: ```bash echo "machine https://packages.buildkite.com/{org.slug}/{registry.slug}/ login buildkite password {registry.read.token}" > /etc/apt/auth.conf.d/{org.slug}_{registry.slug}.conf; chmod 600 /etc/apt/auth.conf.d/{org.slug}_{registry.slug}.conf ``` Configure the source using the installed registry signing key: ```bash echo -e "deb [signed-by=/etc/apt/keyrings/{org.slug}_{registry.slug}-archive-keyring.gpg] https://packages.buildkite.com/{org.slug}/{registry.slug}/any/ any main\ndeb-src [signed-by=/etc/apt/keyrings/{org.slug}_{registry.slug}-archive-keyring.gpg] https://packages.buildkite.com/{org.slug}/{registry.slug}/any/ any main" > /etc/apt/sources.list.d/buildkite-{org.slug}-{registry.slug}.list ``` ###### Package installation Update the `apt` database and use `apt` to install the package: ```bash apt update && apt install package-name ``` where `package-name` is the name of your package. --- ### Files URL: https://buildkite.com/docs/package-registries/ecosystems/files #### Files Buildkite Package Registries provides registry support for generic _files_ to cover some use cases where native package management either isn't required or isn't available. Once your **Files** source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload files (of any type and extension) to this registry. > 📘 Pro and Enterprise plan features > The generic _files_ registry feature is only available to customers on the [Pro or Enterprise](https://buildkite.com/pricing) plan. ##### Publish a file You can use two approaches to publish a file to your file source registry—[`curl`](#publish-a-file-using-curl) or the [Buildkite CLI](#publish-a-file-using-the-buildkite-cli).
> 📘 > Be aware that file names must include a valid [semantic version](https://semver.org/). Learn more about this in [File name format requirements](#file-name-format-requirements). ###### Using curl The **Publish Instructions** tab of your files source registry includes a `curl` command you can use to upload a file to this registry. To view and copy this `curl` command: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your file source registry on this page. 1. Select the **Publish Instructions** tab and on the resulting page, use the copy icon at the top-right of the relevant code box to copy this `curl` command and run it (with the appropriate `$FILE` value) to publish the file to this source registry. This command provides: - The specific URL to publish a file to your specific file source registry in Buildkite. - A temporary API access token to publish files to this source registry. - The file to be published. You can also create this command yourself using the following `curl` command (which you'll need to modify as required before submitting): ```bash curl -X POST https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/packages \ -H "Authorization: Bearer $REGISTRY_WRITE_TOKEN" \ -F "file=@path/to/file" ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your file source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your file source registry from the **Registries** page. - `$REGISTRY_WRITE_TOKEN` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload files to your file source registry.
Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish files and other package types to any source registry your user account has access to within your Buildkite organization. Alternatively, you can use an OIDC token that meets your file source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). - `path/to/file` is the full path to the file, including the file's name and extension if present. If the file is located in the same directory that this command is running from, then no path is required. For example, to upload the file `my-custom-app-1.0.0.ipa` from the current directory to the **My files** source registry in the **My organization** Buildkite organization, run the `curl` command: ```bash curl -X POST https://api.buildkite.com/v2/packages/organizations/my-organization/registries/my-files/packages \ -H "Authorization: Bearer $REPLACE_WITH_YOUR_REGISTRY_WRITE_TOKEN" \ -F "file=@my-custom-app-1.0.0.ipa" ``` ###### Using the Buildkite CLI The following [Buildkite CLI](/docs/platform/cli) command can also be used to publish a file to your file source registry from your local environment, once it has been [installed](/docs/platform/cli/installation) and [configured with an appropriate token](#token-usage-with-the-buildkite-cli): ```bash bk package push registry-slug path/to/file ``` where: - `registry-slug` is the slug of your file source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your file source registry from the **Registries** page. - `path/to/file` is the full path to the file, including the file's name and extension if present. 
If the file is located in the same directory that this command is running from, then no path is required. ###### Token usage with the Buildkite CLI When [configuring the Buildkite CLI with an API access token](/docs/platform/cli/configuration), ensure it has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish files to any source registry your user account has access to within your Buildkite organization. You can also override this configured token by passing in a different token value using the `BUILDKITE_API_TOKEN` environment variable when running the `bk` command: ```bash BUILDKITE_API_TOKEN=$another_token_value bk package push organization-slug/registry-slug ./path/to/my/file.ext ``` If you have [installed the Buildkite CLI](/docs/platform/cli/installation) to your [self-hosted agents](/docs/agent/self-hosted/install), you can also do the following: - Use the `bk` command from within your Buildkite pipelines. - Using the `BUILDKITE_API_TOKEN` environment variable, pass in a Buildkite OIDC token value generated from your agents that meets your source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). ##### File name format requirements Files uploaded to a file source registry must follow a specific naming convention that includes a [semantic version](https://semver.org/): ``` {BASENAME}-{SEMVER}.{EXT} ``` where: - `{BASENAME}` is the base name of your file, which can contain letters, numbers, and hyphens. - `{SEMVER}` is a valid semantic version number (for example, `1.0.0`, `2.3.1-beta.1`, or `1.0.0+build.123`). - `{EXT}` is the file extension. 
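The convention above can be checked locally before pushing. The following is a minimal sketch in shell; the `valid_filename` helper and its regular expression are illustrative, not part of the Buildkite CLI:

```bash
# Hypothetical pre-upload check: does the file name match
# {BASENAME}-{SEMVER}.{EXT}? The pattern allows letters, numbers, and
# hyphens in the base name, a semantic version with optional
# pre-release and build metadata, and a trailing extension.
valid_filename() {
  echo "$1" | grep -Eq '^[A-Za-z0-9-]+-[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z.-]+)?(\+[0-9A-Za-z.-]+)?\.[A-Za-z0-9]+$'
}

valid_filename "my-app-1.0.0.zip" && echo "will upload"
valid_filename "my-app.zip" || echo "rejected: no semantic version"
```

Running such a check in a pipeline step before `bk package push` avoids a round trip to the registry for files that would be rejected anyway.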
The following is a list of valid file name examples: - `my-app-1.0.0.zip` - `firmware-2.3.1-beta.1.bin` - `my-custom-app-1.0.0.ipa` If your file name doesn't match this format, the upload fails with an error: ``` Invalid filename format. Expected: {BASENAME}-{SEMVER}.{EXT} ``` ##### Access a file's details The file's details can be accessed from its source registry through the **Releases** (tab) section of your file source registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your file source registry on this page. 1. On your file source registry page, select the file to display its details page. The file's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-files-details-downloading-a-file). - **Details** (tab): a list of checksum values for this file—MD5, SHA1, SHA256, and SHA512. - **About this version**: a brief (metadata) description about the file. - **Details**: details about: * the name of the file (typically the file name excluding any version details and extension). * the registry the file is located in. * the file's visibility (based on its registry's visibility)—whether the file is **Private** and requires authentication to access, or is publicly accessible. - **Pushed**: the date when the last file was uploaded to the source registry. - **File size**: the storage size (in bytes) of this file. - **Downloads**: the number of times this file has been downloaded. ###### Downloading a file The file can be downloaded from the file's details page. To do this: 1. [Access the file's details](#access-a-files-details). 1. Select **Download**. Alternatively, a file can be downloaded via the command line using code snippet details provided on the file details page. To do this: 1. [Access the file's details](#access-a-files-details). 1. 
Ensure the **Installation** > **Instructions** section is displayed. 1. For each required command in the relevant code snippets, copy the relevant code snippet, paste it into your terminal, and run it. The following descriptions explain what each code snippet does and, where applicable, its format: ###### Using curl to download the file ```bash curl -O -L -H "Authorization: Bearer $TOKEN" \ https://packages.buildkite.com/{org.slug}/{registry.slug}/files/(unknown) ``` where: - `$TOKEN` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download packages from your files source registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download packages from any registry your user account has access to within your Buildkite organization. - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your Files source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Files source registry from the **Registries** page. - `(unknown)` is the name of the file that you want to download. --- ### OCI-based URL: https://buildkite.com/docs/package-registries/ecosystems/helm-oci #### Helm OCI Buildkite Package Registries provides Helm Open Container Initiative (OCI)-based registry support for distributing Helm charts. [Helm version 3.8.0](https://helm.sh/docs/topics/registries/) or newer is required, as these versions provide support for OCI.
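This minimum-version requirement can be enforced in a setup script. A minimal sketch, assuming `helm` is on your `PATH`; the `meets_min` helper is illustrative:

```bash
# Returns success when version $1 is >= version $2, using sort -V
# for natural version ordering.
meets_min() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Guard OCI-based publishing on Helm 3.8.0 or newer.
have="$(helm version --template '{{.Version}}' 2>/dev/null | sed 's/^v//')"
if meets_min "${have:-0}" "3.8.0"; then
  echo "Helm ${have} supports OCI registries"
else
  echo "Helm 3.8.0 or newer is required for OCI support" >&2
fi
```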
While this page is for OCI-based Helm source registry publishing instructions, you can alternatively publish to a [standard Helm source registry](/docs/package-registries/ecosystems/helm). Once your Helm OCI source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload charts (generated from your application's build) to this registry. ##### Publish a chart The **Publish Instructions** tab of your Helm OCI source registry includes `helm` commands you can use to upload a chart to this registry. To view and copy these `helm` commands: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Helm OCI source registry on this page. 1. Select the **Publish Instructions** tab and on the resulting page, for each required `helm` command in code snippets provided, copy the relevant code snippet (using the icon at the top-right of its code box), paste it into your terminal, and run it with the appropriate values to publish the chart to this source registry. These commands are used to: - Log in to your Buildkite Helm OCI source registry with a temporary API access token. - Publish a Helm chart to this source registry. You can also run these commands yourself (modifying them as required before running): 1. Copy the following `helm login` command, paste it into your terminal, and modify as required before running to log in to your Helm OCI source registry: ```bash helm registry login packages.buildkite.com/{org.slug}/{registry.slug} -u buildkite -p registry-write-token ``` where: * `registry-write-token` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload charts to your Helm OCI source registry. 
Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish charts and other package types to any source registry your user account has access to within your Buildkite organization. Alternatively, you can use an OIDC token that meets your Helm OCI source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your Helm (OCI) source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Helm (OCI) source registry from the **Registries** page. 1. Copy the following `helm push` command, paste it into your terminal, and modify as required before running to publish your Helm chart: ```bash helm push {chart-filename.tgz} packages.buildkite.com/{org.slug}/{registry.slug} ``` where `{chart-filename.tgz}` is the name of the chart file to be published. ##### Access a chart's details A Helm chart's details can be accessed from its source registry through the **Releases** (tab) section of your Helm registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Helm OCI source registry on this page. 1. On your Helm OCI source registry page, select the chart to display its details page. The chart's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-charts-details-downloading-a-chart). 
- **Details**: details about: * the name of the chart (typically the file name excluding any version details and extension). * the chart version. * the source registry (type) the chart is located in. * the chart's visibility (based on its registry's visibility)—whether the chart is **Private** and requires authentication to access, or is publicly accessible. - **Pushed**: the date when the last chart was uploaded to the source registry. - **Package size**: the storage size (in bytes) of this chart. - **Downloads**: the number of times this chart has been downloaded. ###### Downloading a chart's manifest A Helm chart's OCI manifest can be downloaded from the details page. To do this: 1. [Access the chart's details](#access-a-charts-details). 1. Select **Download**. ###### Downloading a chart A Helm chart can be obtained using code snippet details provided on the chart's details page. To do this: 1. [Access the chart's details](#access-a-charts-details). 1. Ensure the **Installation** > **Instructions** section is displayed. 1. For each required command in the relevant code snippets, copy the relevant code snippet, paste it into your terminal, and run it. The following descriptions explain what each code snippet does and, where applicable, its format: ###### Registry configuration If your Helm OCI source registry is _private_ (the default configuration for source registries), log in to the Helm registry containing the chart you want to obtain, using the following `helm login` command: ```bash helm registry login packages.buildkite.com/{org.slug}/{registry.slug} -u buildkite -p registry-read-token ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite.
- `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page. - `registry-read-token` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download charts from your Helm OCI registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download charts and other package types from any registry your user account has access to within your Buildkite organization. > 📘 > This step is not required for public Helm (OCI) registries. ###### Chart download Use the following `helm pull` command to download the chart: ```bash helm pull oci://packages.buildkite.com/{org.slug}/{registry.slug}/chart-name --version {version} ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page. - `chart-name` is the name of your chart. - `version` (optional) is the version of the chart to download. Without this option, the latest chart version is downloaded. --- ### Standard URL: https://buildkite.com/docs/package-registries/ecosystems/helm #### Helm Buildkite Package Registries provides Helm registry support for distributing Helm charts.
While this page is for standard Helm source registry publishing instructions, you can alternatively publish to a [Helm OCI-based source registry](/docs/package-registries/ecosystems/helm-oci). Once your Helm source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload charts (generated with `helm package`) to this registry.

##### Publish a chart

You can use two approaches to publish a chart to your Helm source registry—[`curl`](#publish-a-chart-using-curl) or the [Buildkite CLI](#publish-a-chart-using-the-buildkite-cli).

###### Using curl

The **Publish Instructions** tab of your Helm source registry includes a `curl` command you can use to upload a chart to this registry. To view and copy this `curl` command:

1. Select **Package Registries** in the global navigation to access the **Registries** page.
1. Select your Helm source registry on this page.
1. Select the **Publish Instructions** tab and on the resulting page, use the copy icon at the top-right of the relevant code box to copy this `curl` command and run it (with the appropriate values) to publish the chart to this source registry.

This command provides:

- The specific URL to publish a chart to your specific Helm source registry in Buildkite.
- A temporary API access token to publish charts to this source registry.
- The Helm chart (`.tgz`) to be published.

You can also create this command yourself using the following `curl` command (which you'll need to modify as required before submitting):

```bash
curl -X POST https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/packages \
  -H "Authorization: Bearer $REGISTRY_WRITE_TOKEN" \
  -F "file=@path/to/helm/chart.tgz"
```

where:

- `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite.
- `{registry.slug}` is the slug of your Helm source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Helm source registry from the **Registries** page.
- `$REGISTRY_WRITE_TOKEN` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload charts to your Helm source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish charts and other package types to any source registry your user account has access to within your Buildkite organization. Alternatively, you can use an OIDC token that meets your Helm source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc).
- `path/to/helm/chart.tgz` is the full path to the Helm `.tgz` chart, including the file's name. If the file is located in the same directory that this command is running from, then no path is required.
For example, to upload the file `my-helm-chart-0.1.2.tgz` from the current directory to the **My Helm Charts** registry in the **My organization** Buildkite organization, run the `curl` command: ```bash curl -X POST https://api.buildkite.com/v2/packages/organizations/my-organization/registries/my-helm-charts/packages \ -H "Authorization: Bearer $REPLACE_WITH_YOUR_REGISTRY_WRITE_TOKEN" \ -F "file=@my-helm-chart-0.1.2.tgz" ``` ###### Using the Buildkite CLI The following [Buildkite CLI](/docs/platform/cli) command can also be used to publish a chart to your Helm source registry from your local environment, once it has been [installed](/docs/platform/cli/installation) and [configured with an appropriate token](#token-usage-with-the-buildkite-cli): ```bash bk package push registry-slug path/to/helm/chart.tgz ``` where: - `registry-slug` is the slug of your Helm source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Helm source registry from the **Registries** page. - `path/to/helm/chart.tgz` is the full path to the Helm `.tgz` chart, including the file's name. If the file is located in the same directory that this command is running from, then no path is required. ###### Token usage with the Buildkite CLI When [configuring the Buildkite CLI with an API access token](/docs/platform/cli/configuration), ensure it has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish files to any source registry your user account has access to within your Buildkite organization. 
You can also override this configured token by passing in a different token value using the `BUILDKITE_API_TOKEN` environment variable when running the `bk` command: ```bash BUILDKITE_API_TOKEN=$another_token_value bk package push organization-slug/registry-slug ./path/to/my/file.ext ``` If you have [installed the Buildkite CLI](/docs/platform/cli/installation) to your [self-hosted agents](/docs/agent/self-hosted/install), you can also do the following: - Use the `bk` command from within your Buildkite pipelines. - Using the `BUILDKITE_API_TOKEN` environment variable, pass in a Buildkite OIDC token value generated from your agents that meets your source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). ##### Access a chart's details A Helm chart's details can be accessed from its source registry through the **Releases** (tab) section of your Helm registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Helm source registry on this page. 1. On your Helm source registry page, select the chart to display its details page. The chart's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-charts-details-downloading-a-chart). - **Details**: details about: * the name of the chart (typically the file name excluding any version details and extension). * the chart version. * the source registry (type) the chart is located in. * the chart's visibility (based on its registry's visibility)—whether the chart is **Private** and requires authentication to access, or is publicly accessible. - **Pushed**: the date when the last chart was uploaded to the source registry. - **Package size**: the storage size (in bytes) of this chart. 
- **Downloads**: the number of times this chart has been downloaded. ###### Downloading a chart A Helm (tgz) chart can be downloaded from the chart's details page. To do this: 1. [Access the chart's details](#access-a-charts-details). 1. Select **Download**. ###### Registry configuration If your Helm source registry is _private_ (the default configuration for source registries), configure your Helm registry locally for repeated use: ```bash helm repo add {registry.slug} https://packages.buildkite.com/{org.slug}/{registry.slug}/helm \ --username buildkite \ --password registry-read-token ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page. - `registry-read-token` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download charts from your Helm registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download charts and other package types from any registry your user account has access to within your Buildkite organization. > 📘 > This step is not required for public Helm registries. 
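To make the flow concrete, the sketch below composes the `helm repo add` and `helm install` invocations from placeholder values (`my-organization`, `my-helm-charts`, `my-chart`, version `0.1.2`, and the `REGISTRY_READ_TOKEN` variable are all assumptions, not values from your organization). It prints the commands rather than running them, since a real registry and token are required:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder values for illustration only.
org_slug="my-organization"
registry_slug="my-helm-charts"
repo_url="https://packages.buildkite.com/${org_slug}/${registry_slug}/helm"

# Register the private repo locally, then install a chart from it.
echo "helm repo add ${registry_slug} ${repo_url} --username buildkite --password \$REGISTRY_READ_TOKEN"
echo "helm install my-release ${registry_slug}/my-chart --version 0.1.2"
```

With real slugs, the printed `helm repo add` command only needs to be run once per machine; subsequent installs need only the `helm install` line.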
###### Chart installation

Use the following `helm install` command to install the chart:

```bash
helm install "chart-release" "{registry.slug}/{chart-name}" --version {version}
```

where:

- `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page.
- `chart-release` is the unique release name for the Helm chart—this value must contain no `.` and be in lowercase. Learn more about chart naming conventions in the [Chart Names section of the General Conventions](https://helm.sh/docs/chart_best_practices/conventions/#chart-names) page in the Helm documentation.
- `chart-name` is the name of your chart.
- `version` (optional): the version of the chart to download. Without this option, the latest chart version is downloaded.

---

### Hugging Face

URL: https://buildkite.com/docs/package-registries/ecosystems/hugging-face

#### Hugging Face

> 📘
> The _Hugging Face registries_ feature is currently in _customer preview_. To enquire about accessing this feature for your Buildkite organization, please contact support@buildkite.com.

Buildkite Package Registries provides registry support for [Hugging Face models](https://huggingface.co/models), which are essentially Git repositories aimed at developing [machine learning models](https://en.wikipedia.org/wiki/Machine_learning#Models). Learn more about Hugging Face's machine learning (ML) models from their [Hugging Face Hub documentation](https://huggingface.co/docs/hub/en/index#models). Hugging Face's [open source models](https://huggingface.co/models) can be developed, fine-tuned, and published to your (private) Hugging Face registry in Buildkite Package Registries.
Each Git commit to a model constitutes a new version of the model (known as a _model version_), which is published as an individual 'package' to your Hugging Face registry, with the Git commit SHA forming part of the package name. Once your Hugging Face source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can cache your model locally from the [Hugging Face Hub](https://huggingface.co/docs/hub/index), then publish/upload model versions to this source registry. Learn more about installing the Hugging Face command line interface (CLI) tool from their [Hub Python Library CLI documentation](https://huggingface.co/docs/huggingface_hub/main/en/guides/cli). ##### Publish a model version The **Publish Instructions** tab of your Hugging Face source registry includes `huggingface-cli` commands you can use to upload a model version to this registry. To view and copy these `huggingface-cli` commands: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Hugging Face source registry on this page. 1. Select the **Publish Instructions** tab and on the resulting page, for each required `huggingface-cli` command in code snippets provided, copy the relevant code snippet (using the icon at the top-right of its code box), paste it into your terminal, and run it with the appropriate values to publish the model's new version (in a Git commit) to this source registry. These commands are used to: - Set the [`HF_TOKEN` environment variable](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#hftoken)'s value to the required access token used to either access the Hugging Face model from the Hub to cache locally, or publish the model's Git commit as a new model version to your specific Hugging Face source registry in Buildkite Package Registries. 
**Note:** The **Quick start** instruction's `HF_TOKEN` value is only temporary and has a short expiration. - Set the [`HF_ENDPOINT` environment variable](https://huggingface.co/docs/huggingface_hub/v0.16.3/en/package_reference/environment_variables#hfendpoint)'s value to the (base) URL of the Hugging Face Hub, or the source registry in Buildkite Package Registries. - Cache the Hugging Face model locally, or publish your model's new version (from a locally cached Git commit) to this source registry. ###### Detailed instructions You can also run these commands yourself (modifying them as required before submitting them), by following these detailed instructions. ###### Step 1: Ensure the Hugging Face model is cached locally If you haven't already done so, run the following `huggingface-cli` command to ensure the Hugging Face model has been cached locally: ```bash HF_TOKEN=huggingface-token \ HF_ENDPOINT=https://huggingface.co \ huggingface-cli download {huggingface.namespace}/{huggingface.repo.name} ``` where: - `huggingface-token` is your [Hugging Face user access token](https://huggingface.co/docs/hub/security-tokens) required to access the Hugging Face model from the [Hugging Face Hub](https://huggingface.co/docs/hub/index). - `{huggingface.namespace}` is the namespace of the Hugging Face model (in the [Hugging Face Hub](https://huggingface.co/docs/hub/index)), which is typically a username. - `{huggingface.repo.name}` is the Hugging Face model (Git repository) in the Hugging Face Hub, within this namespace. 
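For example, with the placeholder namespace `my-namespace` and model repository `my-model` (both assumptions), the caching step resolves to the command printed by this sketch; it echoes the command rather than executing it, since a real Hugging Face token is needed:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder values for illustration only.
hf_namespace="my-namespace"
hf_repo="my-model"

# Print the fully resolved download command; HF_TOKEN remains an env var.
echo "HF_TOKEN=\$HF_TOKEN HF_ENDPOINT=https://huggingface.co huggingface-cli download ${hf_namespace}/${hf_repo}"
```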
###### Step 2: Publish your model version Use the following `huggingface-cli` command to publish the Hugging Face model version to your Hugging Face source registry: ```bash HF_TOKEN=registry-write-token \ HF_ENDPOINT=https://packages.buildkite.com/{org.slug}/{registry.slug}/huggingface \ huggingface-cli upload {huggingface.namespace}/{huggingface.repo.name} local-folder ``` where: - `registry-write-token` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload a new model version to your Hugging Face source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish new model versions and other package types to any source registry your user account has access to within your Buildkite organization. Alternatively, you can use an OIDC token that meets your Hugging Face source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your Hugging Face source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Hugging Face source registry from the **Registries** page. - `{huggingface.namespace}` is the namespace of the Hugging Face model (in the [Hugging Face Hub](https://huggingface.co/docs/hub/index)), which is typically a username. - `{huggingface.repo.name}` is the Hugging Face model (Git repository) in the Hugging Face Hub, within this namespace. 
- `local-folder` is the location of the locally cached Hugging Face model version. This can be found in the following path: `~/.cache/huggingface/hub/models--{huggingface.namespace}--{huggingface.repo.name}/snapshots/{commit.sha}/`, where `{commit.sha}` represents the Git commit SHA of the model version you want to publish to this registry.

##### Access a model version's details

A Hugging Face model version's details can be accessed from its source registry through the **Releases** (tab) section of your Hugging Face source registry page. To do this:

1. Select **Package Registries** in the global navigation to access the **Registries** page.
1. Select your Hugging Face source registry on this page.
1. On your Hugging Face source registry page, select the model version to display its details page.

The model version's details page provides the following information in the following sections:

- **Installation** (tab): the [installation instructions](#access-a-model-versions-details-installing-a-model-version).
- **Contents** (tab, where available): a list of directories and files contained within the model version.
- **Details** (tab): a list of checksum values for this model version—Message and SHA. The **Message** value can be customized using the `--commit-message` option of the `huggingface-cli` command.
- **Details**: details about:
    * the name of the model version, consisting of the model's Hugging Face Hub namespace and name, along with the commit SHA.
    * the source registry the model version is located in.
    * the model version's visibility (based on its registry's visibility)—whether the model version is **Private** and requires authentication to access, or is publicly accessible.
- **Pushed**: the date when the model version was uploaded to the source registry.
- **Package size**: the storage size (in bytes) of this model version.
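As a sketch of the `local-folder` cache layout used when publishing a model version, the snapshot directory for a given commit can be composed like this (`my-namespace`, `my-model`, and the short SHA are placeholder values):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder values; substitute your model's namespace, repo name, and commit SHA.
hf_namespace="my-namespace"
hf_repo="my-model"
commit_sha="0123abc"

# The Hugging Face CLI caches each downloaded revision under this layout.
local_folder="${HOME}/.cache/huggingface/hub/models--${hf_namespace}--${hf_repo}/snapshots/${commit_sha}/"
echo "${local_folder}"
```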
###### Installing a model version

A Hugging Face model version can be downloaded using code snippet details provided on the model version's details page. To do this:

1. [Access the model version's details](#access-a-model-versions-details).
1. Ensure the **Installation** > **Instructions** section is displayed.
1. Copy the relevant code snippet, paste it into your terminal, and run it.

---

### Maven

URL: https://buildkite.com/docs/package-registries/ecosystems/maven

#### Maven

Buildkite Package Registries provides registry support for Maven-based Java packages. Once your Java source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload packages (generated from your application's build) to this registry by configuring your `~/.m2/settings.xml` and application's relevant `pom.xml` files.

##### Publish a package

The **Publish Instructions** tab of your Java source registry includes Maven XML snippets you can use to configure your environment for publishing packages to this registry. To view and copy the required `~/.m2/settings.xml` and `pom.xml` configurations:

1. Select **Package Registries** in the global navigation to access the **Registries** page.
1. Select your Java source registry on this page.
1. Select the **Publish Instructions** tab and on the resulting page, in the **Using Maven** section, select **Maven** to expand this section.
1. Use the copy icon at the top-right of each respective code box to copy the relevant XML snippets and paste each into its appropriate file. These file configurations contain the following:
    * `~/.m2/settings.xml`: the ID for your specific Java source registry in Buildkite and a temporary API access token required to publish the package to this registry.
    * `pom.xml`: the ID and URL for this source registry.
1. You can then run the `mvn deploy` command to publish the package to this source registry.
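As an illustrative sketch (the `my-organization` and `my-java-packages` slugs are placeholders, not values from your organization), the registry URL that ends up in your `pom.xml` and the final publish command can be composed like this; the script prints them rather than invoking Maven:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder values for illustration only.
org_slug="my-organization"
registry_slug="my-java-packages"

# The Maven repository URL referenced by the pom.xml configuration.
echo "https://packages.buildkite.com/${org_slug}/${registry_slug}/maven2/"

# Once settings.xml and pom.xml are configured, publishing is one command.
echo "mvn deploy"
```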
###### Detailed instructions

You can also configure these files yourself (modifying the snippets as required), by following these detailed instructions.

1. Copy the following XML snippet, paste it into your `~/.m2/settings.xml` file, and modify accordingly:

    ```xml
    <settings>
      <servers>
        <server>
          <id>org-slug-registry-slug</id>
          <configuration>
            <httpHeaders>
              <property>
                <name>Authorization</name>
                <value>Bearer registry-write-token</value>
              </property>
            </httpHeaders>
          </configuration>
        </server>
      </servers>
    </settings>
    ```

    where:
    - `org-slug-registry-slug` is the ID of your Java registry, based on the org and this registry's slugs separated by a hyphen. The org slug can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. The registry slug is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your Java registry from the **Registries** page. The Java registry ID can actually be any valid unique value, as long as the same value is used in both your `settings.xml` and `pom.xml` files.
    - `registry-write-token` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload packages to your Java source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization.

    **Note:** This step only needs to be performed once for the life of your Java source registry and API access token.

1. Copy the following XML snippet, paste it into your `pom.xml` configuration file, and modify accordingly:

    ```xml
    <distributionManagement>
      <repository>
        <id>org-slug-registry-slug</id>
        <url>https://packages.buildkite.com/{org.slug}/{registry.slug}/maven2/</url>
      </repository>
      <snapshotRepository>
        <id>org-slug-registry-slug</id>
        <url>https://packages.buildkite.com/{org.slug}/{registry.slug}/maven2/</url>
      </snapshotRepository>
    </distributionManagement>
    ```

    where:
    * `org-slug-registry-slug` is the ID of your Java source registry (above).
- `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your Java source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Java source registry from the **Registries** page. 1. Publish your package: ```bash mvn deploy ``` ##### Access a package's details A Java package's details can be accessed from its source registry through the **Releases** (tab) section of your Java source registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Java source registry on this page. 1. On your Java source registry page, select the package to display its details page. The package's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-packages-details-installing-a-package). - **Contents** (tab, where available): a list of directories and files contained within the package. - **Details** (tab): a list of checksum values for this package—MD5, SHA1, SHA256, and SHA512. - **About this version**: a brief (metadata) description about the package. - **Details**: details about: * the name of the package (typically the file name excluding any version details and extension). * the package version. * the source registry the package is located in. * the package's visibility (based on its registry's visibility)—whether the package is **Private** and requires authentication to access, or is publicly accessible. * the distribution name / version. * additional optional metadata contained within the package, such as a homepage, licenses, etc. 
- **Pushed**: the date when the last package was uploaded to the source registry.
- **Total files**: the total number of files (and directories) within the package.
- **Dependencies**: the number of dependency packages required by this package.
- **Package size**: the storage size (in bytes) of this package.
- **Downloads**: the number of times this package has been downloaded.

###### Downloading a package

A Java package can be downloaded from the package's details page. To do this:

1. [Access the package's details](#access-a-packages-details).
1. Select **Download**.

###### Installing a package from a source registry

A Java package can be installed using code snippet details provided on the package's details page. To do this:

1. [Access the package's details](#access-a-packages-details).
1. Ensure the **Installation** tab is displayed and select the **Maven** section to expand it.
1. Copy each code snippet, and paste them into their respective `~/.m2/settings.xml` and `pom.xml` files (under the `project` XML tag), modifying the required values accordingly.

    **Note:** The `~/.m2/settings.xml` configuration:
    * Is _not_ required if your registry is publicly accessible.
    * Only needs to be performed once for the life of your Java registry.

You can then run `mvn install` on this modified `pom.xml` to install this package.

The `~/.m2/settings.xml` code snippet is based on this format:

```xml
<settings>
  <servers>
    <server>
      <id>org-slug-registry-slug</id>
      <configuration>
        <httpHeaders>
          <property>
            <name>Authorization</name>
            <value>Bearer registry-read-token</value>
          </property>
        </httpHeaders>
      </configuration>
    </server>
  </servers>
</settings>
```

where:

- `org-slug-registry-slug` is the ID of your Java registry, based on the org and this registry's slugs separated by a hyphen. The org slug can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite.
The registry slug is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your Java registry from the **Registries** page. The Java registry ID can actually be any valid unique value, as long as the same value is used in both your `settings.xml` and `pom.xml` files.
- `registry-read-token` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download packages from your Java source registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download packages from any registry your user account has access to within your Buildkite organization.

The `pom.xml` code snippet is based on this format:

```xml
<repositories>
  <repository>
    <id>org-slug-registry-slug</id>
    <url>https://packages.buildkite.com/{org.slug}/{registry.slug}/maven2/</url>
    <releases>
      <enabled>true</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>com.name.domain.my</groupId>
    <artifactId>my-java-package-name</artifactId>
    <version>my-java-package-version</version>
  </dependency>
</dependencies>
```

where:

- `org-slug-registry-slug` is the ID of your Java registry, based on the org and this registry's slugs separated by a hyphen. The org slug can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. The registry slug is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your Java registry from the **Registries** page. The Java registry ID can actually be any valid unique value, as long as the same value is used in both your `settings.xml` and `pom.xml` files.
- `{org.slug}` is the org slug, which can be obtained as described above.
- `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page. - `com.name.domain.my` is the domain name of your Java package (in typical right-to-left order). - `my-java-package-name` is the name of your Java package. - `my-java-package-version` is the version number of your Java package. --- ### Gradle (Kotlin) URL: https://buildkite.com/docs/package-registries/ecosystems/gradle-kotlin #### Gradle (Kotlin) Buildkite Package Registries provides registry support for Gradle-based Java packages (using the [Maven Publish Plugin](https://docs.gradle.org/current/userguide/publishing_maven.html)), using the Gradle Kotlin DSL. If you're using Gradle's Groovy DSL, refer to the [Gradle (Groovy)](/docs/package-registries/ecosystems/gradle-groovy) page. Once your Java source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload packages (generated from your application's build) to this registry by configuring your `build.gradle.kts` file. ##### Publish a package The **Publish Instructions** tab of your Java source registry includes a Gradle snippet you can use to configure your environment for publishing packages to this registry. To view and copy the required `build.gradle.kts` configuration: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Java source registry on this page. 1. Select the **Publish Instructions** tab and on the resulting page, in the **Using Gradle with `maven-publish` plugin** section, select **Gradle (Kotlin)** to expand this section. 1. Use the copy icon at the top-right of the code box to copy the Gradle code snippet and paste it into the appropriate area/s of your `build.gradle.kts` file. 
These `build.gradle.kts` file configurations contain the:

- Maven coordinates for your package (which you will need to manually configure yourself).
- URL for your specific Java source registry in Buildkite.
- API access token required to publish the package to this source registry.

1. You can then run the `gradle publish` command to publish the package to this source registry.

###### Detailed instructions

You can also configure this file yourself (modifying the snippet as required), by following these detailed instructions.

1. Copy the following Gradle (Kotlin) snippet, paste it into your `build.gradle.kts` file, and modify accordingly:

    ```kotlin
    plugins {
        `maven-publish`
        `java-library`
    }

    publishing {
        publications {
            create<MavenPublication>("maven") {
                // MODIFY: Define the Maven coordinates of your package
                groupId = "com.name.domain.my"
                artifactId = "my-java-package-name"
                version = "my-java-package-version"
                from(components["java"])
            }
        }
        repositories {
            maven {
                url = uri("https://packages.buildkite.com/{org.slug}/{registry.slug}/maven2/")
                authentication {
                    create<HttpHeaderAuthentication>("header")
                }
                credentials(HttpHeaderCredentials::class) {
                    name = "Authorization"
                    value = "Bearer registry-write-token"
                }
            }
        }
    }
    ```

    where:
    - `com.name.domain.my` is the domain name of your Java package (in typical right-to-left order).
    - `my-java-package-name` is the name of your Java package.
    - `my-java-package-version` is the version number of your Java package.
    - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite.
    - `{registry.slug}` is the slug of your Java source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Java source registry from the **Registries** page.
- `registry-write-token` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload packages to your Java source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization. 1. Publish your package: ```bash gradle publish ``` ##### Access a package's details A Java package's details can be accessed from its source registry through the **Releases** (tab) section of your Java source registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Java source registry on this page. 1. On your Java source registry page, select the package to display its details page. The package's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-packages-details-installing-a-package). - **Contents** (tab, where available): a list of directories and files contained within the package. - **Details** (tab): a list of checksum values for this package—MD5, SHA1, SHA256, and SHA512. - **About this version**: a brief (metadata) description about the package. - **Details**: details about: * the name of the package (typically the file name excluding any version details and extension). * the package version. * the source registry the package is located in. * the package's visibility (based on its registry's visibility)—whether the package is **Private** and requires authentication to access, or is publicly accessible. * the distribution name / version. * additional optional metadata contained within the package, such as a homepage, licenses, etc. - **Pushed**: the date when the last package was uploaded to the source registry. - **Total files**: the total number of files (and directories) within the package. 
- **Dependencies**: the number of dependency packages required by this package. - **Package size**: the storage size (in bytes) of this package. - **Downloads**: the number of times this package has been downloaded. ###### Downloading a package A Java package can be downloaded from the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Select **Download**. ###### Installing a package from a source registry A Java package can be installed using code snippet details provided on the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Ensure the **Installation** (tab) > **Gradle (Kotlin)** section is displayed. 1. Copy the code snippet, paste it into the `build.gradle.kts` Gradle file, and modify the required values accordingly. You can then run `gradle build`, which resolves and downloads this package as a dependency of your project. This code snippet is based on this format: ```kotlin repositories { maven { url = uri("https://packages.buildkite.com/{org.slug}/{registry.slug}/maven2/") authentication { create("header") } credentials(HttpHeaderCredentials::class) { name = "Authorization" value = "Bearer registry-read-token" } } } dependencies { implementation("com.name.domain.my:my-java-package-name:my-java-package-version") } ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page. 
- `registry-read-token` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download packages from your Java source registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download packages from any registry your user account has access to within your Buildkite organization. **Note:** The `authentication` and `credentials` sections are not required for registries that are publicly accessible. - `com.name.domain.my` is the domain name of your Java package (in typical right-to-left order). - `my-java-package-name` is the name of your Java package. - `my-java-package-version` is the version number of your Java package. --- ### Gradle (Groovy) URL: https://buildkite.com/docs/package-registries/ecosystems/gradle-groovy #### Gradle (Groovy) Buildkite Package Registries provides registry support for Gradle-based Java packages (using the [Maven Publish Plugin](https://docs.gradle.org/current/userguide/publishing_maven.html)), using the Gradle Groovy DSL. If you're using Kotlin, refer to the [Gradle (Kotlin)](/docs/package-registries/ecosystems/gradle-kotlin) page. Once your Java source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload packages (generated from your application's build) to this registry by configuring your `build.gradle` file. ##### Publish a package The **Publish Instructions** tab of your Java source registry includes a Gradle snippet you can use to configure your environment for publishing packages to this registry. To view and copy the required `build.gradle` configuration: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Java source registry on this page. 1. 
Select the **Publish Instructions** tab and on the resulting page, in the **Using Gradle with `maven-publish` plugin** section, select **Gradle (Groovy)** to expand this section. 1. Use the copy icon at the top-right of the code box to copy the Gradle code snippet and paste it into the appropriate area/s of your `build.gradle` file. These `build.gradle` file configurations contain the: - Maven coordinates for your package (which you will need to manually configure yourself). - URL for your specific Java source registry in Buildkite. - API access token required to publish the package to this source registry. 1. You can then run the `gradle publish` command to publish the package to this source registry. ###### Detailed instructions You can also configure this file yourself (modifying the snippet as required), by following these detailed instructions. 1. Copy the following Gradle (Groovy) snippet, paste it into your `build.gradle` file, and modify accordingly: ```gradle plugins { id 'java' // To publish java libraries id 'maven-publish' // To publish to Maven repositories } // Download standard plugins, e.g., maven-publish from GradlePluginPortal repositories { gradlePluginPortal() } // Define Maven repository to publish to publishing { publications { maven(MavenPublication) { // MODIFY: Define your Maven coordinates of your package groupId = "com.name.domain.my" artifactId = "my-java-package-name" version = "my-java-package-version" // Tell gradle to publish project's jar archive from components.java } } repositories { maven { // Define the Buildkite repository to publish to url "https://packages.buildkite.com/{org.slug}/{registry.slug}/maven2/" authentication { header(HttpHeaderAuthentication) } credentials(HttpHeaderCredentials) { name = "Authorization" value = "Bearer registry-write-token" } } } } ``` where: - `com.name.domain.my` is the domain name of your Java package (in typical right-to-left order). - `my-java-package-name` is the name of your Java package. 
- `my-java-package-version` is the version number of your Java package. - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your Java source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Java source registry from the **Registries** page. - `registry-write-token` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload packages to your Java source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization. 1. Publish your package: ```bash gradle publish ``` ##### Access a package's details A Java package's details can be accessed from its source registry through the **Releases** (tab) section of your Java source registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Java source registry on this page. 1. On your Java source registry page, select the package to display its details page. The package's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-packages-details-installing-a-package). - **Contents** (tab, where available): a list of directories and files contained within the package. - **Details** (tab): a list of checksum values for this package—MD5, SHA1, SHA256, and SHA512. - **About this version**: a brief (metadata) description about the package. 
- **Details**: details about: * the name of the package (typically the file name excluding any version details and extension). * the package version. * the source registry the package is located in. * the package's visibility (based on its registry's visibility)—whether the package is **Private** and requires authentication to access, or is publicly accessible. * the distribution name / version. * additional optional metadata contained within the package, such as a homepage, licenses, etc. - **Pushed**: the date when the last package was uploaded to the source registry. - **Total files**: the total number of files (and directories) within the package. - **Dependencies**: the number of dependency packages required by this package. - **Package size**: the storage size (in bytes) of this package. - **Downloads**: the number of times this package has been downloaded. ###### Downloading a package A Java package can be downloaded from the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Select **Download**. ###### Installing a package from a source registry A Java package can be installed using code snippet details provided on the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Ensure the **Installation** tab is displayed and select the **Gradle (Groovy)** section to expand it. 1. Copy the code snippet, paste it into the `build.gradle` Gradle file, and modify the required values accordingly. You can then run `gradle build`, which resolves and downloads this package as a dependency of your project. 
This code snippet is based on this format: ```gradle repositories { maven { url "https://packages.buildkite.com/{org.slug}/{registry.slug}/maven2/" authentication { header(HttpHeaderAuthentication) } credentials(HttpHeaderCredentials) { name = "Authorization" value = "Bearer registry-read-token" } } } dependencies { implementation "com.name.domain.my:my-java-package-name:my-java-package-version" } ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page. - `registry-read-token` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download packages from your Java source registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download packages from any registry your user account has access to within your Buildkite organization. **Note:** The `authentication` and `credentials` sections are not required for registries that are publicly accessible. - `com.name.domain.my` is the domain name of your Java package (in typical right-to-left order). - `my-java-package-name` is the name of your Java package. - `my-java-package-version` is the version number of your Java package. --- ### JavaScript URL: https://buildkite.com/docs/package-registries/ecosystems/javascript #### JavaScript Buildkite Package Registries provides registry support for JavaScript-based (Node.js npm) packages. 
Once your JavaScript source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload packages (generated from your application's build) to this registry by configuring your `~/.npmrc` and your application's relevant `package.json` files. ##### Publish a package The **Publish Instructions** tab of your JavaScript source registry includes command/code snippets you can use to configure your environment for publishing packages to this registry. To view and copy the required command or code snippets for your `~/.npmrc` and `package.json` configurations: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your JavaScript source registry on this page. 1. Select the **Publish Instructions** tab and on the resulting page, use the copy icon at the top-right of each respective code box to copy its snippet and paste it into your command line tool or the appropriate file. These file configurations contain the following: * `~/.npmrc`: the URL for your specific JavaScript source registry in Buildkite and a temporary API access token required to publish the package to this registry. * `package.json`: the URL for this source registry. 1. You can then run the `npm pack` and `npm publish` commands to publish the package to this source registry. ###### Detailed instructions You can also configure these files yourself (modifying the snippets as required), by following these detailed instructions. 1. Copy the following `npm` command, paste it into your terminal, and modify as required before running to update your `~/.npmrc` file: ```bash npm set //packages.buildkite.com/{org.slug}/{registry.slug}/npm/:_authToken registry-write-token ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. 
- `{registry.slug}` is the slug of your JavaScript source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your JavaScript source registry from the **Registries** page. - `registry-write-token` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload packages to your JavaScript source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization. **Note:** * If your `.npmrc` file doesn't exist, this command automatically creates it for you. * This step only needs to be performed once for the life of your JavaScript source registry. 1. Copy the following JSON code snippet (or the line of code beginning `"publishConfig": ...`), paste it into your Node.js project's `package.json` file, and modify as required: ```json { ..., "publishConfig": {"registry": "https://packages.buildkite.com/{org.slug}/{registry.slug}/npm/"} } ``` **Note:** Don't forget to add the separating comma between `"publishConfig": ...` and the previous field. 1. Build and publish your package: ```bash npm pack npm publish ``` ##### Access a package's details A JavaScript package's details can be accessed from this registry through the **Releases** (tab) section of your JavaScript source registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your JavaScript source registry on this page. 1. On your JavaScript source registry page, select the package to display its details page. 
The package's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-packages-details-installing-a-package). - **Contents** (tab, where available): a list of directories and files contained within the package. - **Details** (tab): a list of checksum values for this package—MD5, SHA1, SHA256, and SHA512. - **About this version**: a brief (metadata) description about the package. - **Details**: details about: * the name of the package (typically the file name excluding any version details and extension). * the package version. * the source registry the package is located in. * the package's visibility (based on its registry's visibility)—whether the package is **Private** and requires authentication to access, or is publicly accessible. * the distribution name / version. * additional optional metadata contained within the package, such as a homepage, licenses, etc. - **Pushed**: the date when the last package was uploaded to the source registry. - **Total files**: the total number of files (and directories) within the package. - **Dependencies**: the number of dependency packages required by this package. - **Package size**: the storage size (in bytes) of this package. - **Downloads**: the number of times this package has been downloaded. ###### Downloading a package A JavaScript package can be downloaded from the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Select **Download**. ###### Installing a package from a source registry A JavaScript package can be installed using code snippet details provided on the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Ensure the **Installation** > **Instructions** section is displayed. 1. 
If your JavaScript source registry is _private_ (the default configuration for source registries) and you haven't already performed this `.npmrc` configuration step, copy the `npm set` command from the [**Registry Configuration**](#registry-configuration) section, paste it into your terminal, and modify as required before running to update your `~/.npmrc` file. 1. Copy the `npm install ...` command from the [**Package Installation**](#package-installation) section, paste it into your terminal, and modify as required before running it. ###### Registry Configuration If your JavaScript source registry is _private_, set its authentication details in the `.npmrc` file by running the `npm set` command: ```bash npm set //packages.buildkite.com/{org.slug}/{registry.slug}/npm/:_authToken registry-read-token ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page. - `registry-read-token` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download packages from your JavaScript source registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download packages from any registry your user account has access to within your Buildkite organization. > 📘 > If your `.npmrc` file doesn't exist, this command automatically creates it for you. > This step only needs to be performed once for the life of your JavaScript registry, and it is not required for public JavaScript registries. 
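The `npm set` step above can be sketched as a short script that assembles the registry path from your slugs before writing the token, making it easy to check the URL first. The `example-org` and `example-registry` slugs and the `registry-read-token` value are placeholders, not real values:

```shell
# Placeholder slugs and token; substitute your own values.
ORG_SLUG="example-org"
REGISTRY_SLUG="example-registry"
REGISTRY_PATH="//packages.buildkite.com/${ORG_SLUG}/${REGISTRY_SLUG}/npm/"

# Print the command first so the registry path can be checked before running it.
echo "npm set ${REGISTRY_PATH}:_authToken registry-read-token"
```

Once the printed path matches your registry page, run the command (without the `echo`) to write the token into `~/.npmrc`.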
###### Package Installation Install your JavaScript package by running the `npm install` command: ```bash npm install nodejs-package-name@version.number \ --registry https://packages.buildkite.com/{org.slug}/{registry.slug}/npm/ ``` where: - `nodejs-package-name` is the name of your Node.js package (that is, the `name` field value from its `package.json` file). - `version.number` is the version of your Node.js package (that is, the `version` field value from its `package.json` file). - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page. --- ### NuGet URL: https://buildkite.com/docs/package-registries/ecosystems/nuget #### NuGet Buildkite Package Registries provides registry support for NuGet-based (.NET) packages. Once your NuGet source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload packages (generated from your application's build) to this registry using a single command, or by configuring your `nuget.config` file. ##### Publish a package The **Publish Instructions** tab of your NuGet source registry includes command/code snippets you can use to publish a package to this registry with a single command, or to configure your environment for publishing packages to this registry on an ongoing basis. To view and copy the required command or `nuget.config` configurations: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your NuGet source registry on this page. 1. 
Select the **Publish Instructions** tab and on the resulting page, use the copy icon at the top-right of each respective code box to copy its snippet and paste it into your command line tool or the appropriate file. 1. The following subsections describe the processes in the code boxes above, serving the following use cases: * **Quick start** section—for rapid NuGet package publishing, using a temporary token. See [Single command](#publish-a-package-single-command) for detailed instructions on how to configure this command yourself. * **Setup** section—implements configurations for a more permanent NuGet package publishing solution. See [Ongoing publishing](#publish-a-package-ongoing-publishing) for detailed instructions on how to configure these commands yourself. ###### Single command The first code box provides a quick mechanism for uploading a NuGet package to your NuGet registry. ```bash dotnet nuget push *.nupkg --api-key "temporary-write-token-that-expires-after-5-minutes" \ --source "https://packages.buildkite.com/{org.slug}/{registry.slug}/nuget/package" ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your NuGet source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your NuGet registry name, and can be obtained after accessing **Package Registries** in the global navigation > your NuGet source registry from the **Registries** page. Since the `temporary-write-token-that-expires-after-5-minutes` expires quickly, it is recommended that you just copy this command directly from the **Publish Instructions** page. ###### Ongoing publishing The remaining code boxes on the **Publish Instructions** page provide configurations for a more permanent solution for ongoing NuGet uploads to your NuGet registry. 1. 
Create a `nuget.config` file in your project (if one doesn't already exist): ```bash dotnet new nugetconfig ``` 1. Copy the following command, paste it, and modify as required before running to add the NuGet registry to your `nuget.config` file: ```bash dotnet nuget add source https://packages.buildkite.com/{org.slug}/{registry.slug}/nuget/index.json \ --name {org.slug}_{registry.slug} \ --username _ \ --password $TOKEN \ --store-password-in-clear-text \ --configfile ./nuget.config ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your NuGet source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your NuGet registry name, and can be obtained after accessing **Package Registries** in the global navigation > your NuGet source registry from the **Registries** page. - `$TOKEN` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload packages to your NuGet source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization. **Note:** This step only needs to be conducted once for the life of your NuGet source registry. 1. Publish your NuGet package: ```bash dotnet nuget push *.nupkg --source {org.slug}_{registry.slug} --api-key $TOKEN ``` ##### Access a package's details A NuGet package's details can be accessed from its source registry through the **Releases** (tab) section of your NuGet source registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your NuGet source registry on this page. 1. 
On your NuGet source registry page, select the package within the **Releases** (tab) section. The package's details page is displayed. The package's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-packages-details-installing-a-package). - **Contents** (tab, where available): a list of directories and files contained within the package. - **Details** (tab): a list of checksum values for this package—MD5, SHA1, SHA256, and SHA512. - **About this version**: a brief (metadata) description about the package. - **Details**: details about: * the name of the package (typically the file name excluding any version details and extension). * the package version. * the source registry the package is located in. * the package's visibility (based on its registry's visibility)—whether the package is **Private** and requires authentication to access, or is publicly accessible. * the distribution name / version. * additional optional metadata contained within the package, such as a homepage, licenses, etc. - **Pushed**: the date when the last package was uploaded to the source registry. - **Total files**: the total number of files (and directories) within the package. - **Dependencies**: the number of dependency packages required by this package. - **Package size**: the storage size (in bytes) of this package. - **Downloads**: the number of times this package has been downloaded. ###### Downloading a package A NuGet package can be downloaded from the package's details page. To download a package: 1. [Access the package's details](#access-a-packages-details). 1. Select **Download**. ###### Installing a package A NuGet package can be installed using code snippet details provided on the package's details page. To install a package: 1. [Access the package's details](#access-a-packages-details). 1. Ensure the **Installation** tab is displayed. 1. 
Follow the relevant section to install the NuGet package, based on your requirements: * [Single command](#package-installation-with-a-single-command) (**Quick install** section)—for rapid NuGet package installation, using a temporary token. * [Ongoing installation](#ongoing-package-installation) (**Setup** section)—implements configurations for a more permanent NuGet package installation solution. ###### Package installation with a single command The **Quick install** code snippet is based on this format: ```bash dotnet add package package-name -v version.number \ --source "https://buildkite:temporary-read-token-that-expires-after-5-minutes@packages.buildkite.com/{org.slug}/{registry.slug}/nuget/index.json" ``` where: - `package-name` is the name of your NuGet package. - `version.number` is the version of your NuGet package. - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your NuGet registry. Since the `temporary-read-token-that-expires-after-5-minutes` expires quickly, it is recommended that you just copy this command directly from the **Installation** page. ###### Ongoing package installation The **Setup** section's instructions are as follows: 1. Create a `nuget.config` file in your project (if one doesn't already exist): ```bash dotnet new nugetconfig ``` 1. 
Copy the following command, paste it, and modify as required before running to add the NuGet registry to your `nuget.config` file: ```bash dotnet nuget add source https://packages.buildkite.com/{org.slug}/{registry.slug}/nuget/index.json \ --name {registry.slug} \ --username _ \ --password $TOKEN \ --store-password-in-clear-text \ --configfile ./nuget.config \ --valid-authentication-types basic ``` where: - `$TOKEN` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download packages from your NuGet registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download packages from any registry your user account has access to within your Buildkite organization. This command option and value are not required for registries that are publicly accessible. - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your NuGet source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your NuGet registry name, and can be obtained after accessing **Package Registries** in the global navigation > your NuGet source registry from the **Registries** page. **Note:** This step only needs to be conducted once for the life of your NuGet source registry. 1. Install your NuGet package, which is restored from the source you added to your `nuget.config` file: ```bash dotnet add package package-name -v version.number ``` --- ### OCI URL: https://buildkite.com/docs/package-registries/ecosystems/oci #### OCI Buildkite Package Registries provides registry support for Docker and other Open Container Initiative (OCI) images. Buildkite registries follow the [OCI Distribution Specification](https://github.com/opencontainers/distribution-spec) version 1.1. 
Once your OCI source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload images (generated from your application's build) to this registry using relevant `docker` commands. ##### Publish an image The **Publish Instructions** tab of your OCI source registry includes command snippets you can use to publish container images to this registry. To view and copy the required commands: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your OCI source registry on this page. 1. Select the **Publish Instructions** tab and on the resulting page, for each required `docker` command in the code snippets provided, copy the relevant code snippet (using the icon at the top-right of its code box), paste it into your terminal, and run it with the appropriate values to publish the image to this source registry. The code snippets are used to: - Log in to your Buildkite OCI source registry with an API access token. - Tag your container image to be published. - Push the image to this source registry. ###### Detailed instructions You can also run these commands yourself (modifying the snippets as required), by following these detailed instructions. 1. Copy the following `docker login` command, paste it into your terminal, and modify as required before running to log in to your OCI source registry: ```bash docker login packages.buildkite.com/{org.slug}/{registry.slug} -u buildkite -p registry-write-token ``` where: * `registry-write-token` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload images to your OCI source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish container images and other package types to any source registry your user account has access to within your Buildkite organization. 
Alternatively, you can use an OIDC token that meets your OCI source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your OCI source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your OCI source registry from the **Registries** page. 1. Copy the following `docker tag` command, paste it into your terminal, and modify as required before running to tag your container image: ```bash docker tag current-image-name:tag packages.buildkite.com/{org.slug}/{registry.slug}/image-name:tag ``` where: * `current-image-name:tag` is the existing name and current tag of the container image to be published to your OCI source registry. The `:tag` component is optional. This part of the command also supports the other tag syntax references mentioned in the [`docker tag` documentation](https://docs.docker.com/reference/cli/docker/image/tag/). * `image-name:tag` is the image name and tag to give this image when it is published to your OCI source registry, where the `:tag` component is optional. 1. Copy the following `docker push` command, paste it into your terminal, and modify as required before running to push your container image: ```bash docker push packages.buildkite.com/{org.slug}/{registry.slug}/image-name:tag ``` where `image-name:tag` is the image name and tag combination you configured in the previous step. 
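The tag and push steps above must refer to the same fully qualified image reference. A minimal sketch, assuming hypothetical `example-org` and `example-registry` slugs and a placeholder `my-image:1.0.0` image, that derives the reference once so the two commands stay in sync:

```shell
# Placeholder values; substitute your own slugs, image name, and tag.
ORG_SLUG="example-org"
REGISTRY_SLUG="example-registry"
IMAGE="my-image"
TAG="1.0.0"

# Derive the fully qualified target reference once and reuse it for both steps.
TARGET="packages.buildkite.com/${ORG_SLUG}/${REGISTRY_SLUG}/${IMAGE}:${TAG}"

# Printed as a dry run; drop the echoes to actually tag and push.
echo docker tag "${IMAGE}:${TAG}" "$TARGET"
echo docker push "$TARGET"
```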
##### Access an image's details A container image's details can be accessed from its source registry through the **Releases** (tab) section of your OCI source registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your OCI source registry on this page. 1. On your OCI source registry page, select the image to display its details page. The image's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-an-images-details-installing-an-image). - **Contents** (tab, where available): a list of directories and files contained within the image. - **Details** (tab): a list of checksum values for this image—MD5, SHA1, SHA256, and SHA512. - **About this version**: a brief (metadata) description about the image. - **Details**: details about: * the name of the image (typically the file name excluding any version details and extension). * the image version. * the source registry the image is located in. * the image's visibility (based on its registry's visibility)—whether the image is **Private** and requires authentication to access, or is publicly accessible. * the distribution name / version. * additional optional metadata contained within the image, such as a homepage, licenses, etc. - **Pushed**: the date when the last image was uploaded to the source registry. - **Total files**: the total number of files (and directories) within the image. - **Dependencies**: the number of dependency images required by this image. - **Package size**: the storage size (in bytes) of this image. - **Downloads**: the number of times this image has been downloaded. ###### Installing an image A container image can be obtained using code snippet details provided on the image's details page. To do this: 1. [Access the image's details](#access-an-images-details). 1. Ensure the **Installation** > **Instructions** section is displayed. 1. 
For each required command in the relevant code snippets, copy the relevant code snippet, paste it into your terminal, and run it. The following describes what each code snippet does and, where applicable, its format: ###### Registry configuration If your OCI source registry is _private_ (the default configuration for source registries), log in to the OCI registry containing the image with the following `docker login` command: ```bash docker login packages.buildkite.com/{org.slug}/{registry.slug} -u buildkite -p registry-read-token ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your OCI source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your OCI source registry from the **Registries** page. - `registry-read-token` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download images from your OCI registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download container images and other package types from any registry your user account has access to within your Buildkite organization. > 📘 > This step is not required for public OCI registries. ###### Package installation Use the following `docker pull` command to obtain the image: ```bash docker pull packages.buildkite.com/{org.slug}/{registry.slug}/image-name:tag ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. 
- `{registry.slug}` is the slug of your OCI source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your OCI source registry from the **Registries** page. - `image-name` is the name of your image. - `tag` is the tag associated with this image. --- ### Python URL: https://buildkite.com/docs/package-registries/ecosystems/python #### Python Buildkite Package Registries provides registry support for Python-based (PyPI) packages. Once your Python source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload packages (generated from your application's build) to this registry. ##### Publish a package You can use two approaches to publish a Python package to your Python source registry—[`curl`](#publish-a-package-using-curl) or the [Buildkite CLI](#publish-a-package-using-the-buildkite-cli). ###### Using curl The **Publish Instructions** tab of your Python source registry includes a `curl` command you can use to upload a package to this registry. To view and copy this `curl` command: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Python source registry on this page. 1. Select the **Publish Instructions** tab and on the resulting page, use the copy icon at the top-right of the relevant code box to copy this `curl` command and run it (with the appropriate values) to publish the package to this source registry. This command provides: - The specific URL to publish a package to your specific Python source registry in Buildkite. - A temporary API access token to publish packages to this source registry. - The Python package file to be published. 
You can also create this command yourself using the following `curl` command (which you'll need to modify as required before submitting): ```bash curl -X POST https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/packages \ -H "Authorization: Bearer $REGISTRY_WRITE_TOKEN" \ -F "file=@path/to/python/package.tar.gz" ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your Python source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Python source registry from the **Registries** page. - `$REGISTRY_WRITE_TOKEN` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload packages to your Python source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization. Alternatively, you can use an OIDC token that meets your Python source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). - `path/to/python/package.tar.gz` is the full path to the Python `.tar.gz` package, including the file's name. If the file is located in the same directory that this command is running from, then no path is required. 
For example, to upload the file `my-python-package-0.9.7b1.tar.gz` from the current directory to the **My Python packages** source registry in the **My organization** Buildkite organization, run the `curl` command: ```bash curl -X POST https://api.buildkite.com/v2/packages/organizations/my-organization/registries/my-python-packages/packages \ -H "Authorization: Bearer $REPLACE_WITH_YOUR_REGISTRY_WRITE_TOKEN" \ -F "file=@my-python-package-0.9.7b1.tar.gz" ``` ###### Using the Buildkite CLI The following [Buildkite CLI](/docs/platform/cli) command can also be used to publish a Python package to your Python source registry from your local environment, once it has been [installed](/docs/platform/cli/installation) and [configured with an appropriate token](#token-usage-with-the-buildkite-cli): ```bash bk package push registry-slug path/to/python/package.tar.gz ``` where: - `registry-slug` is the slug of your Python source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Python source registry from the **Registries** page. - `path/to/python/package.tar.gz` is the full path to the Python `.tar.gz` package, including the file's name. If the file is located in the same directory that this command is running from, then no path is required. ###### Token usage with the Buildkite CLI When [configuring the Buildkite CLI with an API access token](/docs/platform/cli/configuration), ensure it has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization. 
You can also override this configured token by passing in a different token value using the `BUILDKITE_API_TOKEN` environment variable when running the `bk` command: ```bash BUILDKITE_API_TOKEN=$another_token_value bk package push organization-slug/registry-slug ./path/to/my/file.ext ``` If you have [installed the Buildkite CLI](/docs/platform/cli/installation) to your [self-hosted agents](/docs/agent/self-hosted/install), you can also do the following: - Use the `bk` command from within your Buildkite pipelines. - Using the `BUILDKITE_API_TOKEN` environment variable, pass in a Buildkite OIDC token value generated from your agents that meets your source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). ##### Access a package's details A Python package's details can be accessed from this registry through the **Releases** (tab) section of your Python source registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Python source registry on this page. 1. On your Python source registry page, select the package to display its details. The package's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-packages-details-installing-a-package). - **Contents** (tab, where available): a list of directories and files contained within the package. - **Details** (tab): a list of checksum values for this package—MD5, SHA1, SHA256, and SHA512. - **About this version**: a brief (metadata) description about the package. - **Details**: details about: * the name of the package (typically the file name excluding any version details and extension). * the package version. * the source registry the package is located in. 
* the package's visibility (based on its registry's visibility)—whether the package is **Private** and requires authentication to access, or is publicly accessible. * the distribution name / version. * additional optional metadata contained within the package, such as a homepage, licenses, etc. - **Pushed**: the date when the last package was uploaded to the source registry. - **Total files**: the total number of files (and directories) within the package. - **Dependencies**: the number of dependency packages required by this package. - **Package size**: the storage size (in bytes) of this package. - **Downloads**: the number of times this package has been downloaded. ###### Downloading a package A Python package can be downloaded from the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Select **Download**. ###### Installing a package from a source registry A Python package can be installed using code snippet details provided on the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Ensure the **Installation** > **Instructions** section is displayed. 1. Copy the relevant code snippet from the [**Registry Configuration**](#registry-configuration-source-registry) section and paste it into either your pip configuration file (`pip.conf`) or the end of your virtualenv `requirements.txt` file. 1. Run the installation command from the [**Package Installation**](#package-installation-source-registry) section. 
###### Registry configuration The `pip.conf` code snippet is based on this format: ```conf #### Add this to the [global] section in your ~/.pip/pip.conf: [global] extra-index-url="https://buildkite:{registry.read.token}@packages.buildkite.com/{org.slug}/{registry.slug}/pypi/simple" ``` or the alternative `requirements.txt` (for virtualenv) code snippet is based on this format: ```ini #### Otherwise if installing on a virtualenv, add this to the bottom of your requirements.txt: --extra-index-url="https://buildkite:{registry.read.token}@packages.buildkite.com/{org.slug}/{registry.slug}/pypi/simple" ``` where: - `{registry.read.token}` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download packages from your Python source registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download packages from any registry your user account has access to within your Buildkite organization. This URL component, along with its surrounding `buildkite:` and `@` components are not required for registries that are publicly accessible. - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your registry name, and can be obtained after accessing **Package Registries** in the global navigation > your registry from the **Registries** page. ###### Package installation Use `pip` to install the package: ```bash pip install package-name==version-number ``` where: - `package-name` is the name of your package. - `version-number` is the version number of this package. 
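To make the substitution concrete, the sketch below builds the index URL from hypothetical values (`my-organization` and `my-python-packages` slugs and a placeholder token) and prints the `pip.conf` stanza it would produce:

```bash
# Hypothetical values -- substitute your own token, org slug, and registry slug.
REGISTRY_READ_TOKEN="registry-read-token"
ORG_SLUG="my-organization"
REGISTRY_SLUG="my-python-packages"
INDEX_URL="https://buildkite:${REGISTRY_READ_TOKEN}@packages.buildkite.com/${ORG_SLUG}/${REGISTRY_SLUG}/pypi/simple"

# The [global] stanza that would be added to ~/.pip/pip.conf, printed for review.
printf '[global]\nextra-index-url="%s"\n' "$INDEX_URL"
```

The same `$INDEX_URL` value can instead be appended to a virtualenv's `requirements.txt` as `--extra-index-url="$INDEX_URL"`.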
--- ### Red Hat URL: https://buildkite.com/docs/package-registries/ecosystems/red-hat #### Red Hat Buildkite Package Registries provides registry support for Red Hat-based (RPM) packages for Red Hat Linux operating systems. Once your Red Hat source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload packages (generated from your application's build) to this registry. ##### Publish a package You can use two approaches to publish an RPM package to your Red Hat source registry—[`curl`](#publish-a-package-using-curl) or the [Buildkite CLI](#publish-a-package-using-the-buildkite-cli). ###### Using curl The **Publish Instructions** tab of your Red Hat source registry includes a `curl` command you can use to upload a package to this registry. To view and copy this `curl` command: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Red Hat source registry on this page. 1. Select the **Publish Instructions** tab and on the resulting page, use the copy icon at the top-right of the relevant code box to copy this `curl` command and run it (with the appropriate values) to publish the package to this source registry. This command provides: - The specific URL to publish a package to your specific Red Hat source registry in Buildkite. - A temporary API access token to publish packages to this source registry. - The Red Hat (RPM) package file to be published. 
You can also create this command yourself using the following `curl` command (which you'll need to modify as required before submitting): ```bash curl -X POST https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/packages \ -H "Authorization: Bearer $REGISTRY_WRITE_TOKEN" \ -F "file=@path/to/red-hat/package.rpm" ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your Red Hat registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your Red Hat registry name, and can be obtained after accessing **Package Registries** in the global navigation > your Red Hat registry from the **Registries** page. - `$REGISTRY_WRITE_TOKEN` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload packages to your Red Hat source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization. Alternatively, you can use an OIDC token that meets your Red Hat source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). - `path/to/red-hat/package.rpm` is the full path to the RPM package, including the file's name. If the file is located in the same directory that this command is running from, then no path is required. 
For example, to upload the file `my-red-hat-package_1.0-2.x86_64.rpm` from the current directory to the **My Red Hat packages** source registry in the **My organization** Buildkite organization, run the `curl` command: ```bash curl -X POST https://api.buildkite.com/v2/packages/organizations/my-organization/registries/my-red-hat-packages/packages \ -H "Authorization: Bearer $REPLACE_WITH_YOUR_REGISTRY_WRITE_TOKEN" \ -F "file=@my-red-hat-package_1.0-2.x86_64.rpm" ``` ###### Using the Buildkite CLI The following [Buildkite CLI](/docs/platform/cli) command can also be used to publish an RPM package to your Red Hat source registry from your local environment, once it has been [installed](/docs/platform/cli/installation) and [configured with an appropriate token](#token-usage-with-the-buildkite-cli): ```bash bk package push registry-slug path/to/red-hat/package.rpm ``` where: - `registry-slug` is the slug of your Red Hat source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Red Hat source registry from the **Registries** page. - `path/to/red-hat/package.rpm` is the full path to the RPM package, including the file's name. If the file is located in the same directory that this command is running from, then no path is required. ###### Token usage with the Buildkite CLI When [configuring the Buildkite CLI with an API access token](/docs/platform/cli/configuration), ensure it has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization. 
You can also override this configured token by passing in a different token value using the `BUILDKITE_API_TOKEN` environment variable when running the `bk` command: ```bash BUILDKITE_API_TOKEN=$another_token_value bk package push organization-slug/registry-slug ./path/to/my/file.ext ``` If you have [installed the Buildkite CLI](/docs/platform/cli/installation) to your [self-hosted agents](/docs/agent/self-hosted/install), you can also do the following: - Use the `bk` command from within your Buildkite pipelines. - Using the `BUILDKITE_API_TOKEN` environment variable, pass in a Buildkite OIDC token value generated from your agents that meets your source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc). ##### Access a package's details A Red Hat (RPM) package's details can be accessed from this registry through the **Releases** (tab) section of your Red Hat source registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Red Hat source registry on this page. 1. On your Red Hat source registry page, select the package to display its details page. The package's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-packages-details-installing-a-package). - **Contents** (tab, where available): a list of directories and files contained within the package. - **Details** (tab): a list of checksum values for this package—MD5, SHA1, SHA256, and SHA512. - **About this version**: a brief (metadata) description about the package. - **Details**: details about: * the name of the package (typically the file name excluding any version details and extension). * the package version. * the source registry the package is located in. 
* the package's visibility (based on its registry's visibility)—whether the package is **Private** and requires authentication to access, or is publicly accessible. * the distribution name / version. * additional optional metadata contained within the package, such as a homepage, licenses, etc. - **Pushed**: the date when the last package was uploaded to the source registry. - **Total files**: the total number of files (and directories) within the package. - **Dependencies**: the number of dependency packages required by this package. - **Package size**: the storage size (in bytes) of this package. - **Downloads**: the number of times this package has been downloaded. ###### Downloading a package A Red Hat (RPM) package can be downloaded from the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Select **Download**. ###### Installing a package A Red Hat package can be installed using code snippet details provided on the package's details page. To do this: 1. [Access the package's details](#access-a-packages-details). 1. Ensure the **Installation** > **Instructions** section is displayed. 1. For each required command in the relevant code snippets, copy the relevant code snippet, paste it into your terminal, and run it. 
The following describes what each code snippet does and, where applicable, its format: ###### Registry configuration Configure your Red Hat registry as the source for your Red Hat (RPM) packages: ```bash sudo sh -c 'echo -e "[{registry.slug}]\nname={registry.name}\nbaseurl=https://buildkite:{registry.read.token}@packages.buildkite.com/{org.slug}/{registry.slug}/rpm_any/rpm_any/\$basearch\nenabled=1\nrepo_gpgcheck=1\ngpgcheck=0\ngpgkey=https://buildkite:{registry.read.token}@packages.buildkite.com/{org.slug}/{registry.slug}/gpgkey\npriority=1" > /etc/yum.repos.d/{registry.slug}.repo' ``` Note that the redirect to `/etc/yum.repos.d/{registry.slug}.repo` sits inside the quoted command, so it runs with the elevated permissions `sudo` provides. where: - `{registry.slug}` is the slug of your Red Hat registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your Red Hat registry name, and can be obtained after accessing **Package Registries** in the global navigation > your Red Hat registry from the **Registries** page. - `{registry.name}` is the name of your Red Hat registry. - `{registry.read.token}` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download packages from your Red Hat registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download packages from any registry your user account has access to within your Buildkite organization. This URL component, along with its surrounding `buildkite:` and `@` components are not required for registries that are publicly accessible. - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. ###### Package installation Use `dnf` to install the package: ```bash dnf install -y package-name ``` where `package-name` is the name of your package, which usually includes the version number and distribution type. 
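To see what the registry-configuration command produces, here is the resulting `.repo` file rendered with hypothetical values (a `my-red-hat-packages` registry named **My Red Hat packages** in the `my-organization` organization, with a placeholder read token). This sketch prints the file contents for review instead of writing them to `/etc/yum.repos.d/`:

```bash
# Hypothetical values; the real command writes this content to
# /etc/yum.repos.d/my-red-hat-packages.repo. The quoted heredoc keeps
# $basearch literal so dnf/yum can expand it per architecture.
REPO_FILE=$(cat <<'EOF'
[my-red-hat-packages]
name=My Red Hat packages
baseurl=https://buildkite:registry-read-token@packages.buildkite.com/my-organization/my-red-hat-packages/rpm_any/rpm_any/$basearch
enabled=1
repo_gpgcheck=1
gpgcheck=0
gpgkey=https://buildkite:registry-read-token@packages.buildkite.com/my-organization/my-red-hat-packages/gpgkey
priority=1
EOF
)
echo "$REPO_FILE"
```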
--- ### Ruby URL: https://buildkite.com/docs/package-registries/ecosystems/ruby #### Ruby Buildkite Package Registries provides registry support for Ruby-based (RubyGems) packages. Once your Ruby source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload packages (generated from your application's build) to this registry using a single command, or by configuring your `~/.gem/credentials` and `gemspec` files. ##### Publish a package The **Publish Instructions** tab of your Ruby source registry includes command/code snippets you can use to publish a package to this registry with a single command, or to configure your environment for publishing packages to this registry on an ongoing basis. To view and copy the required command or `~/.gem/credentials` and `gemspec` configurations: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Ruby source registry on this page. 1. Select the **Publish Instructions** tab and on the resulting page, use the copy icon at the top-right of each respective code box to copy its snippet and paste it into your command line tool or the appropriate file. 1. The following subsections describe the processes in these code boxes and their use cases: * **Quick start** section—for rapid RubyGems package publishing, using a temporary token. See [Single command](#publish-a-package-single-command) for detailed instructions on how to configure this command yourself. * **Setup** section—implements configurations for a more permanent RubyGems package publishing solution. See [Ongoing publishing](#publish-a-package-ongoing-publishing) for detailed instructions on how to configure these commands yourself. ###### Single command The first code box provides a quick mechanism for uploading a RubyGems package to your Ruby registry. 
```bash GEM_HOST_API_KEY="temporary-write-token-that-expires-after-5-minutes" \ gem push --host="https://packages.buildkite.com/{org.slug}/{registry.slug}" *.gem ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your Ruby source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your Ruby registry name, and can be obtained after accessing **Package Registries** in the global navigation > your Ruby source registry from the **Registries** page. Since the `temporary-write-token-that-expires-after-5-minutes` expires quickly, it is recommended that you just copy this command directly from the **Publish Instructions** page. ###### Ongoing publishing The remaining code boxes on the **Publish Instructions** page provide configurations for a more permanent solution for ongoing RubyGems uploads to your Ruby registry. 1. Copy the following set of commands, paste them and modify as required before running to create your `~/.gem/credentials` file: ```bash mkdir ~/.gem touch ~/.gem/credentials chmod 600 ~/.gem/credentials echo "https://packages.buildkite.com/{org.slug}/{registry.slug}: registry-write-token" >> ~/.gem/credentials ``` where: - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the slug of your Ruby source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your Ruby registry name, and can be obtained after accessing **Package Registries** in the global navigation > your Ruby source registry from the **Registries** page. 
- `registry-write-token` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload packages to your Ruby source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish packages to any source registry your user account has access to within your Buildkite organization. **Note:** This step only needs to be performed once for the life of your Ruby source registry. 1. Copy the following code snippet and paste it to modify the `allowed_push_host` line of your Ruby (gem) package's `.gemspec` file: ```ruby spec.metadata["allowed_push_host"] = "https://packages.buildkite.com/{org.slug}/{registry.slug}" ``` **Note:** This configuration prevents your Ruby package from accidentally being published to the main [RubyGems registry](https://rubygems.org/). 1. Publish your Ruby (RubyGems) package: ```bash gem build *.gemspec gem push *.gem ``` Alternatively, if you are using a Ruby (gem) package created with Bundler, publish the package this way: ```bash rake release ``` ##### Access a package's details A Ruby package's details can be accessed from this registry through the **Releases** (tab) section of your Ruby source registry page. To do this: 1. Select **Package Registries** in the global navigation to access the **Registries** page. 1. Select your Ruby source registry on this page. 1. On your Ruby source registry page, select the package within the **Releases** (tab) section. The package's details page is displayed. The package's details page provides the following information in the following sections: - **Installation** (tab): the [installation instructions](#access-a-packages-details-installing-a-package). - **Contents** (tab, where available): a list of directories and files contained within the package. - **Details** (tab): a list of checksum values for this package—MD5, SHA1, SHA256, and SHA512. 
- **About this version**: a brief (metadata) description about the package. - **Details**: details about: * the name of the package (typically the file name excluding any version details and extension). * the package version. * the source registry the package is located in. * the package's visibility (based on its registry's visibility)—whether the package is **Private** and requires authentication to access, or is publicly accessible. * the distribution name / version. * additional optional metadata contained within the package, such as a homepage, licenses, etc. - **Pushed**: the date when the last package was uploaded to the source registry. - **Total files**: the total number of files (and directories) within the package. - **Dependencies**: the number of dependency packages required by this package. - **Package size**: the storage size (in bytes) of this package. - **Downloads**: the number of times this package has been downloaded. A Ruby registry's package also has a **Dependencies** tab, which lists the other gem packages that the currently viewed gem package depends on. ###### Downloading a package A Ruby package can be downloaded from the package's details page. To download a package: 1. [Access the package's details](#access-a-packages-details). 1. Select **Download**. ###### Installing a package A Ruby package can be installed using code snippet details provided on the package's details page. To install a package: 1. [Access the package's details](#access-a-packages-details). 1. Ensure the **Installation** > **Instructions** section is displayed. 1. Copy the command in the code snippet, paste it into your terminal, and run it. This code snippet is based on this format: ```bash gem install gem-package-name -v version.number \ --clear-sources --source https://buildkite:{registry.read.token}@packages.buildkite.com/{org.slug}/{registry.slug} ``` where: - `gem-package-name` is the name of your RubyGems gem package. 
- `version.number` is the version of your RubyGems gem package - `{registry.read.token}` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download packages from your Ruby registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download packages from any registry your user account has access to within your Buildkite organization. This URL component, along with its surrounding `buildkite:` and `@` components are not required for registries that are publicly accessible. - `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite. - `{registry.slug}` is the name of your Ruby registry. --- ### Terraform URL: https://buildkite.com/docs/package-registries/ecosystems/terraform #### Terraform Buildkite Package Registries provides registry support for Terraform modules. Once your Terraform source registry has been [created](/docs/package-registries/registries/manage#create-a-source-registry), you can publish/upload modules (generated from your application's build) to this registry. ##### Publish a module You can use two approaches to publish a module to your Terraform source registry—[`curl`](#publish-a-module-using-curl) or the [Buildkite CLI](#publish-a-module-using-the-buildkite-cli). The [SemVer-style](https://semver.org/) `major.minor.patch` must be included in the filename of the `.tgz` package and be unique, or Package Registries will return an error. The format of the filename must also be in accordance with [Terraform developer documentation](https://developer.hashicorp.com/terraform/registry/modules/publish#requirements). ###### Using curl The **Publish Instructions** tab of your Terraform source registry includes a `curl` command you can use to upload a module to this registry. 
To view and copy this `curl` command:

1. Select **Package Registries** in the global navigation to access the **Registries** page.
1. Select your Terraform source registry on this page.
1. Select the **Publish Instructions** tab and on the resulting page, use the copy icon at the top-right of the relevant code box to copy this `curl` command and run it (with the appropriate values) to publish the module to this source registry.

This command provides:

- The specific URL to publish a module to your specific Terraform source registry in Buildkite.
- A temporary API access token to publish modules to this source registry.
- The Terraform module file to be published.

You can also create this command yourself using the following `curl` command (which you'll need to modify as required before submitting):

```bash
curl -X POST https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/packages \
  -H "Authorization: Bearer $REGISTRY_WRITE_TOKEN" \
  -F "file=@path/to/terraform/terraform-{provider}-{module}-{major.minor.patch}.tgz"
```

where:

- `{org.slug}` can be obtained from the end of your Buildkite URL, after accessing **Package Registries** or **Pipelines** in the global navigation of your organization in Buildkite.
- `{registry.slug}` is the slug of your Terraform registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of your Terraform registry name, and can be obtained after accessing **Package Registries** in the global navigation > your Terraform registry from the **Registries** page.
- `$REGISTRY_WRITE_TOKEN` is your [API access token](https://buildkite.com/user/api-access-tokens) used to publish/upload modules to your Terraform source registry. Ensure this access token has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish modules and other package types to any source registry your user account has access to within your Buildkite organization.

    Alternatively, you can use an OIDC token that meets your Terraform source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc).

- `path/to/terraform/terraform-{provider}-{module}-{major.minor.patch}.tgz` is the full path to the module, including the file's name. If the file is located in the same directory that this command is running from, then no path is required.

For example, to upload the file `terraform-buildkite-pipeline-1.0.0.tgz` from the current directory to the **My Terraform modules** source registry in the **My organization** Buildkite organization, run the `curl` command:

```bash
curl -X POST https://api.buildkite.com/v2/packages/organizations/my-organization/registries/my-terraform-modules/packages \
  -H "Authorization: Bearer $REPLACE_WITH_YOUR_REGISTRY_WRITE_TOKEN" \
  -F "file=@terraform-buildkite-pipeline-1.0.0.tgz"
```

###### Using the Buildkite CLI

The following [Buildkite CLI](/docs/platform/cli) command can also be used to publish a module to your Terraform source registry from your local environment, once it has been [installed](/docs/platform/cli/installation) and [configured with an appropriate token](#token-usage-with-the-buildkite-cli):

```bash
bk package push registry-slug path/to/terraform/terraform-{provider}-{module}-{major.minor.patch}.tgz
```

where:

- `registry-slug` is the slug of your Terraform source registry, which is the [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case) version of this registry's name, and can be obtained after accessing **Package Registries** in the global navigation > your Terraform source registry from the **Registries** page.
- `path/to/terraform/terraform-{provider}-{module}-{major.minor.patch}.tgz` is the full path to the module, including the file's name. If the file is located in the same directory that this command is running from, then no path is required.

###### Token usage with the Buildkite CLI

When [configuring the Buildkite CLI with an API access token](/docs/platform/cli/configuration), ensure it has the **Read Packages** and **Write Packages** REST API scopes, which allows this token to publish modules to any source registry your user account has access to within your Buildkite organization.

You can also override this configured token by passing in a different token value using the `BUILDKITE_API_TOKEN` environment variable when running the `bk` command:

```bash
BUILDKITE_API_TOKEN=$another_token_value bk package push organization-slug/registry-slug ./path/to/my/file.ext
```

If you have [installed the Buildkite CLI](/docs/platform/cli/installation) to your [self-hosted agents](/docs/agent/self-hosted/install), you can also do the following:

- Use the `bk` command from within your Buildkite pipelines.
- Using the `BUILDKITE_API_TOKEN` environment variable, pass in a Buildkite OIDC token value generated from your agents that meets your source registry's [OIDC policy](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry). Learn more about these tokens in [OIDC in Buildkite Package Registries](/docs/package-registries/security/oidc).

##### Access a module's details

A Terraform module's details can be accessed from this registry through the **Releases** (tab) section of your Terraform source registry page.

To do this:

1. Select **Package Registries** in the global navigation to access the **Registries** page.
1. Select your Terraform source registry on this page.
1. On your Terraform source registry page, select the module within the **Releases** (tab) section. The module's details page is displayed.

The module's details page provides the following information in the following sections:

- **Installation** (tab): the [installation instructions](#access-a-modules-details-installing-a-module).
- **Contents** (tab, where available): a list of directories and files contained within the module.
- **Details** (tab): a list of checksum values for this module—MD5, SHA1, SHA256, and SHA512.
- **About this version**: a brief (metadata) description about the module.
- **Details**: details about:
    * the name of the module (typically the file name excluding any version details and extension).
    * the module version.
    * the source registry the module is located in.
    * the module's visibility (based on its registry's visibility)—whether the module is **Private** and requires authentication to access, or is publicly accessible.
    * the distribution name / version.
    * additional optional metadata contained within the module, such as a homepage, licenses, etc.
- **Pushed**: the date when the last module was uploaded to the source registry.
- **Total files**: the total number of files (and directories) within the module.
- **Dependencies**: the number of dependency modules required by this module.
- **Package size**: the storage size (in bytes) of this module.
- **Downloads**: the number of times this module has been downloaded.

###### Downloading a module

A Terraform module can be downloaded from the module's details page.

To download a module:

1. [Access the module's details](#access-a-modules-details).
1. Select **Download**.

###### Installing a module

A Terraform module can be installed using code snippet details provided on the module's details page.

To install a module:

1. [Access the module's details](#access-a-modules-details).
1. Ensure the **Installation** > **Instructions** section is displayed.
1. If your Terraform source registry is private (the default configuration for source registries), copy the top section of the code snippet, and paste it into your `~/.terraformrc` configuration file.

    This code snippet is based on the format:

    ```config
    credentials "packages.buildkite.com" {
      token = "registry-read-token"
    }
    ```

    where `registry-read-token` is your [API access token](https://buildkite.com/user/api-access-tokens) or [registry token](/docs/package-registries/registries/manage#configure-registry-tokens) used to download modules from your Terraform registry. Ensure this access token has the **Read Packages** REST API scope, which allows this token to download modules and other package types from any registry your user account has access to within your Buildkite organization.

    **Note:** This step only needs to be performed once for the life of your Terraform registry.

1. Copy the lower section of the code snippet, and paste it into your Terraform file.

    This code snippet is based on the format:

    ```terraform
    module "org_slug___registry_slug_module_name" {
      source  = "packages.buildkite.com/org-slug---registry-slug/module-name/provider"
      version = "version.number"
    }
    ```

    where:

    * `org_slug` can be derived from the end of your Buildkite URL (in [snake_case](https://en.wikipedia.org/wiki/Letter_case#Snake_case)), after accessing **Pipelines** in the global navigation of your organization in Buildkite.
    * `registry_slug` is the slug of your Terraform registry (derived from the registry name in snake_case).
    * `module_name` is the name of your Terraform module.
    * `org-slug` can be obtained from the end of your Buildkite URL (in [kebab-case](https://en.wikipedia.org/wiki/Letter_case#Kebab_case)), after accessing **Pipelines** in the global navigation of your organization in Buildkite.
    * `registry-slug` is the slug of your Terraform registry (derived from the registry name in kebab-case).
    * `version.number` is the version of your Terraform module.

1.
Run the Terraform command:

```bash
terraform init
```

---

## Platform

### Platform

URL: https://buildkite.com/docs/platform

#### The Buildkite platform

Buildkite is an adaptable, composable, and scalable platform with everything platform teams need to build software delivery systems for their businesses—and rapidly deliver value to users.

The Buildkite platform documentation contains docs for _common_ features of Buildkite available across Buildkite [Pipelines](/docs/pipelines), [Test Engine](/docs/test-engine), and [Package Registries](/docs/package-registries). This area of the docs covers the following topics:

| Topic | Description |
| --- | --- |

---

### Overview

URL: https://buildkite.com/docs/platform

---

### Overview

URL: https://buildkite.com/docs/platform/team-management

#### Team management

Managing users and teams across your CI/CD platform is fundamental to collaboration, streamlined processes, and ensuring adequate access controls.
Buildkite provides features to manage team access:

- [User and team permissions](/docs/platform/team-management/permissions)
- [Enforce 2FA](/docs/platform/team-management/enforce-2fa)
- [System banners](/docs/platform/team-management/system-banners) ([Enterprise](https://buildkite.com/pricing/) plan only)
- [Inactive user list](/docs/platform/team-management/inactive-user-list)
- [Managing API access tokens](/docs/apis/managing-api-tokens) (under the APIs section)

---

### User and team permissions

URL: https://buildkite.com/docs/platform/team-management/permissions

#### User and team permissions

Customers on any [Buildkite plan](https://buildkite.com/pricing) can manage permissions using the _teams_ feature. Learn more about this feature in [Manage teams and permissions](#manage-teams-and-permissions).

##### Manage teams and permissions

The _teams_ feature allows you to apply access permissions and functionality controls for one or more groups of users (that is, _teams_) on each pipeline, test suite, registry, or any combination of these, throughout your organization.

To manage teams across Buildkite's applications, a _Buildkite organization administrator_ first needs to enable this feature across their organization. To access or enable the teams feature for your organization, or both:

1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page.
1. Select **Teams** to access your organization's [**Teams**](https://buildkite.com/organizations/~/teams) page.
1. If the teams feature is not enabled, select **Enable Teams** to activate this feature.

When you first enable the teams feature, a team called **Everyone**, which includes all users, is automatically created for your organization. This maintains existing access to pipelines for all the users in your Buildkite organization.

Without the **Teams** feature activated, all users are able to access all items within your Buildkite organization.

###### Organization-level permissions

A user who is a _Buildkite organization administrator_ can access the [**Organization Settings** page](https://buildkite.com/organizations/~/settings) by selecting **Settings** in the global navigation, and do the following throughout their Buildkite organization:

- Access the **Teams** feature and page, by selecting **Settings** in the global navigation > **Teams**.
- From the **Teams** page:
    * Create a new team, using the **New Team** button.
    * Administer (with full control) the [team-](#manage-teams-and-permissions-team-level-permissions), [pipeline-](/docs/pipelines/security/permissions#manage-teams-and-permissions-pipeline-level-permissions), [test suite-](/docs/test-engine/permissions#manage-teams-and-permissions-test-suite-level-permissions), and [registry-](/docs/package-registries/security/permissions#manage-teams-and-permissions-registry-level-permissions)level settings throughout their Buildkite organization.

        **Note:** Registry-level settings are only available once [Buildkite Package Registries has been enabled](/docs/package-registries/security/permissions#enabling-buildkite-packages).

    * Delete an existing team, by selecting the team > **Settings** tab > **Delete Team** button.
    * [Enable](#manage-teams-and-permissions) and disable the teams feature for their organization. This feature can only be disabled once all teams have been deleted from the organization (including the automatically-created **Everyone** team) using the **Disable Teams** button on the **Teams** page. Once the teams feature has been disabled, it can be [re-enabled](#manage-teams-and-permissions) at any time.
- Configure other organization-level settings for Buildkite Pipelines and Package Registries, as well as various [integrations](/docs/pipelines/integrations) with Buildkite.
- Access and view Buildkite Pipelines and Package Registries usage reports and [audit logs](/docs/platform/audit-log).

###### Team-level permissions

A user who is a _team maintainer_ on an existing team can:

- Access the **Teams** feature and page, by selecting **Teams** in the global navigation.

    **Note:** If a team maintainer is also a Buildkite organization administrator, **Teams** is not available in the global navigation. Instead, accessing this feature is performed as an [organization administrator](#manage-teams-and-permissions-organization-level-permissions).

- From the **Teams** page:
    * Add another existing user to this team, using the **Add Member** button from the **Members** tab.
    * Remove a user from this team, by selecting the user's **Remove** button.
    * Change the permission for all users in this team on any:
        - [pipeline](/docs/pipelines/security/permissions#manage-teams-and-permissions-pipeline-level-permissions) in the team to **Full Access**, **Build & Read**, or **Read Only**.
        - [test suite](/docs/test-engine/permissions#manage-teams-and-permissions-test-suite-level-permissions) in the team to **Full Access** or **Read Only**.
        - [registry](/docs/package-registries/security/permissions#manage-teams-and-permissions-registry-level-permissions) in the team to **Full Access**, **Read & Write**, or **Read Only**.

        To do this, select the appropriate tab (**Pipelines**, **Test Suites**, or **Package Registries**) and then select the required permission for the item.

        **Note:** Managing team permissions for registries is only available once [Buildkite Package Registries has been enabled](/docs/package-registries/security/permissions#enabling-buildkite-packages).

    * Edit the team's details and other settings using the **Settings** tab, which includes the ability to:
        - Change the team's **Visibility**.
        - **Automatically add new users to this team**.
        - Set the **Default Member Role** (that is, team **Member** or **Maintainer**) for new users joining the team.
        - Set the **Team Member Permissions**, which allows team members to do any combination of the following in this team:
            * **Create pipelines**
            * **Create test suites**
            * **Create registries**
            * **Delete registries**
            * **Delete packages**

            **Note:** If these permissions are removed from a team, all team maintainers in this team will still be able to create and add new pipelines, test suites, and registries within the team.

        - Delete the team, using the **Delete** button.

As indicated in the Buildkite interface, a user who is in a team is known as a **Team Member**, and such users have fewer permissions within the team (that is, no team management capabilities) than a **Team Maintainer**.

All team members in a team have the same level of access to the [pipelines](/docs/pipelines/security/permissions#manage-teams-and-permissions-pipeline-level-permissions), [test suites](/docs/test-engine/permissions#manage-teams-and-permissions-test-suite-level-permissions), and [registries](/docs/package-registries/security/permissions#manage-teams-and-permissions-registry-level-permissions) in the team. If you need more fine-grained control over the pipelines, test suites, or registries in a team, you can create more teams with different permissions.

> 📘 Pipeline-level permissions override team member permissions
> When a user belongs to multiple teams that have access to the same pipeline, the highest pipeline-level permission across all of those teams applies. For example, if a user belongs to Team A, which has **Read Only** access to a pipeline, and also belongs to Team B, which has **Full Access** to the same pipeline, the user has **Full Access** to that pipeline. This means the user can edit pipeline settings, create builds, and manage access, regardless of the lower permission set through Team A.
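The precedence rule in the note above can be sketched as a short script. This is an illustration only; the permission labels come from the Buildkite UI, while the numeric ranking is an assumption introduced purely for comparison:

```shell
#!/usr/bin/env bash
# Illustration only: compute a user's effective permission on one pipeline
# as the highest level granted across all of their teams. The numeric
# ranks are an assumption for comparison, not part of Buildkite itself.
rank() {
  case "$1" in
    "Full Access")  echo 3 ;;
    "Build & Read") echo 2 ;;
    "Read Only")    echo 1 ;;
    *)              echo 0 ;;
  esac
}

effective="Read Only"
for team_permission in "Read Only" "Full Access"; do  # e.g. Team A, Team B
  if [ "$(rank "$team_permission")" -gt "$(rank "$effective")" ]; then
    effective="$team_permission"
  fi
done
echo "$effective"  # the highest grant across the user's teams wins
```

With the Team A/Team B grants from the example above, the script resolves the effective permission to **Full Access**.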
> 🚧 Changing **Full Access** permissions on pipelines, test suites, and registries
> As a team maintainer, once you change the permission on any of these items away from **Full Access**, you could lose the ability to change the permissions on that item again. This can happen if you are no longer a member of a team that provides **Full Access** to this item.
> A [Buildkite organization administrator](#manage-teams-and-permissions-organization-level-permissions) is required to change any item's permissions back to **Full Access** again.

###### Programmatically managing teams

You can programmatically manage your teams using our GraphQL API. If you're creating pipelines programmatically using the REST API, you can add them directly to teams using the team's UUID. More information about creating pipelines can be found in our [REST API documentation](/docs/apis/rest-api/pipelines#create-a-yaml-pipeline).

You can also restrict agents to specific teams with the `BUILDKITE_BUILD_CREATOR_TEAMS` environment variable. Using agent hooks, you can allow or disallow builds based on the creator's team memberships.

> 🚧 Unverified commits
> Note that GitHub accepts [unsigned commits](https://docs.github.com/en/authentication/managing-commit-signature-verification/about-commit-signature-verification) and passes them, including information about the commit author, along to webhooks, so you should not rely on these for authentication unless you are confident that all of your commits are trusted.

For example, the following [`environment` hook](/docs/agent/hooks#job-lifecycle-hooks) prevents anyone outside the ops team from running a build on the agent:

```bash
set -euo pipefail

if [[ ":$BUILDKITE_BUILD_CREATOR_TEAMS:" != *":ops:"* ]]; then
  echo "You must be in the ops team to run a job on this agent"
  exit 1
fi
```

###### Frequently asked questions

###### Is there a limit to the number of teams an organization can have?

Yes, Buildkite has a limit of _250 teams per organization_. If you are an [Enterprise](https://buildkite.com/pricing/) plan customer and require additional teams, please contact support@buildkite.com.

###### Will users (and API tokens) still have access to their pipelines?

When you enable the teams feature, a default team called "Everyone" is created, containing all your users and pipelines. This ensures that users, and their API tokens, will still have access to their pipelines.

###### How does Teams work with SSO?

When a user joins the organization using SSO, they'll be automatically added to any teams that have the "Automatically add new users to this team" setting enabled.

###### Can I delete the "Everyone" team?

Yes, you can delete or edit the "Everyone" team. To ensure uninterrupted access to pipelines, we recommend creating new teams before deleting the "Everyone" team.

###### Can I set separate permissions specifically on rebuilds?

No, rebuilds are in the same category as builds. Therefore, all team members with permission to run builds on a certain pipeline are also able to perform rebuilds.

###### Once enabled, can I disable teams?

Yes, you can disable teams by deleting all your teams, and then selecting "Disable Teams".

###### Can I automate the removal of users from Buildkite?

Yes, you can automatically remove users using the GraphQL API. You'll need a [GraphQL API token](https://buildkite.com/user/api-access-tokens) to do it.

You'll need to look up your organization's slug in the [Organization Settings](https://buildkite.com/organizations/-/settings) and check the name or email of the user you want to remove in the [team](https://buildkite.com/organizations/-/teams) that this user belongs to.
Next, use the first query to get the user ID (make sure to replace `your-organization-slug` with your Buildkite organization's slug and `Jane Doe` with the name or email of the user you want to remove), and then run the `RemoveOrganizationMember` mutation with the user ID to remove the user:

```graphql
query FindOrganizationMember {
  organization(slug: "your-organization-slug") {
    members(first: 1, search: "Jane Doe") {
      edges {
        node {
          # You will need to use this id in the next step as OrganizationMember.id
          id
          user {
            # Double-check that this is the right user you are about to remove
            name
            email
          }
        }
      }
    }
  }
}
```

Copy the user ID you've received into the following mutation and run it to remove the user from your Buildkite organization:

```graphql
mutation RemoveOrganizationMember {
  organizationMemberDelete(input: { id: "user-ID-you-copied-goes-here" }) {
    deletedOrganizationMemberID
  }
}
```

##### Removing users during a security incident

If you believe that a user account has been compromised, the recommended incident response is to remove such a user from your Buildkite organization immediately. This will entirely remove their ability to impact your organization and protect you from any further actions that the user could take. You can remove a user in your organization's **Settings** in the Buildkite interface.

A Buildkite organization administrator can also delete organization members using GraphQL. To do this:

1. Find the `id` for the user to be deleted (in this example, `Jane Doe`):

    ```graphql
    query {
      organization(slug: "your-organization-slug") {
        members(search: "Jane Doe", first: 10) {
          edges {
            node {
              role
              user {
                name
              }
              id
            }
          }
        }
      }
    }
    ```

1. Use the `id` from the previous query in a mutation:

    ```graphql
    mutation deleteOrgMember {
      organizationMemberDelete(input: { id: "abc123" }) {
        organization {
          name
        }
        deletedOrganizationMemberID
        user {
          name
        }
      }
    }
    ```

###### Security guarantees of removing a user

When you remove a user from your organization, the active session tokens belonging to this user cannot make calls to the product that will return (or make changes to) any of the data available using the web UI, API, etc. So removing a compromised or rogue user is as effective as killing all of that user's sessions.

Within Buildkite's access model, organizations don't own users, so they can't control users' sessions, because users represent individuals who may be members of multiple organizations. Removing a compromised user from your organization immediately protects all the organization's resources from that user. The user will technically still be able to view their personal settings page.

In case of a non-responsive or rogue user, or if multiple accounts are compromised, you can send a list of impacted user IDs to [support@buildkite.com](mailto:support@buildkite.com) and ask Buildkite support to log the specific user, or all the users, out of all sessions.

> 📘 Enterprise plan
> As a part of the Buildkite SLA, customers on the [Enterprise](https://buildkite.com/pricing/) plan have an emergency email available for operational and security incidents. Contact your Customer Success Manager for more information.

If you suspect or have already detected a security breach, and the affected user is cooperative, they can also log out and [reset](https://buildkite.com/forgot-password) their password, which will automatically reset all of their active sessions. Then you can work with the affected user to ensure their account is safe and re-add them to your Buildkite organization. Note that resetting a password might not always be an option.
If you have SSO enabled for your organization, the user in question may not even have a dedicated password for their Buildkite account.

###### Removing users from an organization with SSO enabled

If you're using SSO, you also need to protect against the attackers regaining organization membership by logging in again with SSO. Logging in again creates a new organization member; it _will not_ renew the revoked authorizations and _does not_ authorize any other sessions that might still be active for the user.

However, if the attacker has access to a Buildkite session, they may also have access to a regular session with the permissions granted by your SSO session defaults. So it is important to disable or remove such a compromised user account from your SSO. If the attacker has control of the SSO, the scope of the security incident is beyond what could be remediated using Buildkite's tools only.

###### Disabling and re-enabling SSO

The other control you have is the organization membership's SSO mode. If the membership requires SSO, the user will only have access to your organization in the particular sessions authenticated through your SSO provider.

> 🚧
> Before you proceed, make sure that you have at least one user in your organization with SSO as an optional login requirement, to make it possible for someone to log back in!

Admins of your Buildkite organization can disable and then re-enable SSO, which will force all users in your organization to re-authorize with SSO. When you disable an SSO provider, it rescinds all active SSO authorizations for all users, _including the admin who disables the SSO_! The admin will need to log back into the organization by using a non-SSO method.

You can [disable](/docs/platform/sso/sso-setup-with-graphql#disabling-an-sso-provider) and [re-enable](/docs/platform/sso/sso-setup-with-graphql#setting-up-saml-google-cloud-identity-okta-onelogin-adfs-and-others-step-4) SSO using GraphQL or the Buildkite UI.
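As a sketch of the GraphQL route, the disable/re-enable cycle can be driven from a terminal. The mutation and input field names below are assumptions to verify against the SSO setup with GraphQL guide linked above; `SSO_PROVIDER_ID`, the `disabledReason` text, and `BUILDKITE_GRAPHQL_TOKEN` are placeholders:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: build the GraphQL payloads for disabling and then
# re-enabling an SSO provider. SSO_PROVIDER_ID and BUILDKITE_GRAPHQL_TOKEN
# are placeholders; check the mutation shapes against the SSO setup with
# GraphQL guide before use.
SSO_PROVIDER_ID="${SSO_PROVIDER_ID:-sso-provider-id-goes-here}"

disable_payload() {
  printf '{"query":"mutation { ssoProviderDisable(input: { id: \\"%s\\", disabledReason: \\"Forcing re-authorization\\" }) { ssoProvider { state } } }"}' "$SSO_PROVIDER_ID"
}

enable_payload() {
  printf '{"query":"mutation { ssoProviderEnable(input: { id: \\"%s\\" }) { ssoProvider { state } } }"}' "$SSO_PROVIDER_ID"
}

# To run for real, post each payload to the GraphQL endpoint:
# curl -s -X POST https://graphql.buildkite.com/v1 \
#   -H "Authorization: Bearer $BUILDKITE_GRAPHQL_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$(disable_payload)"
```

Remember that disabling rescinds all active SSO authorizations, including the admin's own, so keep a non-SSO login available before sending the first mutation.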
Remember that if an attacker had a fully authenticated session, they've potentially configured API tokens, which will not be subject to SSO requirements. Therefore, the only truly safe response is still to remove the compromised user from your Buildkite organization.

---

### Enforce 2FA

URL: https://buildkite.com/docs/platform/team-management/enforce-2fa

#### Enforce two-factor authentication (2FA)

Two-factor authentication can be enforced for the whole organization to ensure that all users who access the organization have two-factor authentication enabled.

##### Before enforcing two-factor authentication

Before you enforce two-factor authentication (2FA) for your organization, consider that users without 2FA enabled will immediately lose access to the organization and its pipelines. Users can set up 2FA by following the [2FA tutorial](/docs/platform/tutorials/2fa).

##### Steps to enforce two-factor authentication

To enforce 2FA:

1. Ensure you are logged in as a [Buildkite organization administrator](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions).
1. Access your Buildkite organization's **Settings** (in the global navigation) > [**Security** page](https://buildkite.com/organizations/~/security).
1. Select the **Enforce Two-factor authentication** checkbox.
1. Select **Update Access Control**.

##### Programmatically enforcing two-factor authentication

Please review the GraphQL [cookbook] for instructions on how to enable enforced 2FA via the GraphQL API.

[cookbook]:

##### API access tokens

Enforcing 2FA does not invalidate existing [API access tokens][access-tokens]. Existing tokens will continue to work, but users must enable 2FA before they can update existing tokens or create new ones.
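To confirm what an existing token can still do after 2FA is enforced, the REST API's access token endpoint lets a token describe its own scopes. A minimal sketch, with the token value supplied as a placeholder environment variable:

```shell
#!/usr/bin/env bash
# Sketch: an existing API access token keeps working after 2FA is enforced;
# this asks the REST API which scopes the token holds. BUILDKITE_API_TOKEN
# is a placeholder for the token under inspection.
token_info_url="https://api.buildkite.com/v2/access-token"

fetch_token_scopes() {
  curl -s -H "Authorization: Bearer $BUILDKITE_API_TOKEN" "$token_info_url"
}

# Uncomment to query the live API; the JSON response includes the token's
# "uuid" and its granted "scopes".
# fetch_token_scopes
```

Updating or rotating such a token through the UI, however, requires the owning user to have 2FA enabled first.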
[access-tokens]: --- ### System banners URL: https://buildkite.com/docs/platform/team-management/system-banners #### System banners > 📘 Enterprise plan feature > The system banners feature is only available to Buildkite customers on [Enterprise](https://buildkite.com/pricing) plans. Buildkite organization administrators can create announcement banners for their Buildkite organization. Banners are displayed to all members of the organization at the top of every page throughout the Buildkite interface. You can use Markdown to format your message and link to other URLs or pages for more context. ##### Steps to creating a banner 1. Ensure you are logged in as a Buildkite organization administrator. 1. Access your Buildkite organization's [**Settings** page](https://buildkite.com/organizations/~/settings) from the global navigation. 1. On the **Organization Settings** page, add a message to the **System banners** text box. 1. Select **Save Banner**. [settings page]: ##### Programmatically creating a system banner You can create a system banner programmatically via the GraphQL API. Please review the GraphQL [cookbook] on instructions on how to create a banner via the API. [cookbook]: --- ### Inactive user list URL: https://buildkite.com/docs/platform/team-management/inactive-user-list #### Inactive user list Buildkite organization administrators can audit inactive users within their organization using the inactive user list. An _inactive user_ is an organization member who has not interacted with Buildkite within a selected time period. This helps administrators identify and remove users who no longer need access. > 📘 Enterprise plan feature > The inactive user list feature is only available to Buildkite customers on [Enterprise](https://buildkite.com/pricing) plans, as it requires the [Audit Logging](/docs/platform/audit-log) feature. ##### View inactive users To view inactive users in your organization: 1. 
Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. In the sidebar, select **Audit** > **Inactive User List** to access your organization's inactive user list. 1. Select a time period to filter the list. Available periods are **30 days**, **90 days** (the default), and **120 days**. For example, selecting **30 days** shows organization members who have not been active in the last 30 days. Each entry displays the member's name, email address, and the date they were last active. > 📘 Last seen data > The inactive user list relies on each member's _last seen_ timestamp. Members who have never logged in appear in the list with a placeholder date of **30 July 2020**, indicating that no activity has been recorded for that user. ##### Export inactive users to CSV You can export the current filtered list of inactive users to a CSV file by selecting the **Export to CSV** button at the top of the page. ##### Remove inactive users After identifying inactive users, you can remove them from your organization to maintain a clean membership list. Removing a user from the organization does not delete their user account, and builds created by the user will not be deleted. To remove inactive users: 1. From the **Inactive User List** page, select the checkbox next to each user you want to remove. You can select multiple users. 1. Select **Remove selected users**. 1. Confirm the removal when prompted. You can also remove users programmatically using the [GraphQL API](/docs/apis/graphql/cookbooks/organizations#delete-an-organization-member). ##### Query inactive users with the GraphQL API You can query inactive organization members programmatically using the [GraphQL API](/docs/apis/graphql-api). Use the `inactiveSince` argument on the `members` field to filter for members who have not been active since a specific date. 
```graphql
query getInactiveOrgMembers {
  organization(slug: "organization-slug") {
    members(first: 100, inactiveSince: "2025-01-01T00:00:00Z") {
      count
      edges {
        node {
          id
          lastSeenAt
          user {
            name
            email
          }
        }
      }
    }
  }
}
```

The `inactiveSince` value is an ISO 8601 encoded UTC date string. The query returns all members whose `lastSeenAt` is either `null` (never seen) or before the specified date, along with the total `count` of matching members. For more GraphQL recipes related to organization member management, see the [Organizations cookbook](/docs/apis/graphql/cookbooks/organizations). --- ### Two-factor authentication (2FA) URL: https://buildkite.com/docs/platform/tutorials/2fa #### Two-factor authentication Two-factor authentication (2FA) can be added to your Buildkite account to provide an additional layer of security and to make sure your builds are safe even if your login credentials are compromised (exposed or stolen). Once 2FA is enabled on your Buildkite account, the only way to log in to your account is by knowing both your password and a unique code generated by a third-party application such as [1Password], [OTP Auth], [Duo Mobile], [Authy], or [Google Authenticator]. ##### Setting up two-factor authentication You can set up two-factor authentication in the Buildkite dashboard. To do so, select **Personal Settings** in the dropdown menu under your profile picture, then select the **Two-Factor Authentication** item. If prompted, enter your Buildkite account password in the **Confirm Password** field and proceed. Click the **Setup Two-Factor Authentication** button to start securing your Buildkite account. ###### Step 1: Store recovery codes Recovery codes let you restore access to your account if you lose access to your authenticator application. Use the buttons to either copy the codes to the clipboard or download them as a text file. Keep your recovery codes in a safe digital space or print them out and hide them well. 
Never share your recovery codes. Once you have saved your recovery codes, proceed. ###### Step 2: Configure authenticator application To activate two-factor authentication, scan the barcode that appears in the Buildkite dashboard with the authenticator application of your choice. If you cannot scan the barcode, you can use the secret key below the barcode. After you've scanned the barcode or activated the authenticator application using the secret key, Buildkite will appear on the list of accounts registered in that application. Your authenticator will provide a new randomly generated six-digit code (your one-time password) roughly every 30 seconds. Enter this code into the corresponding field in the Buildkite app and click **Activate**. Congratulations! You have now successfully enabled two-factor authentication for your Buildkite account. This will be confirmed by an **Enabled** badge next to the **Two-Factor Authentication** option in your **Personal Settings**. Next time you try to log into your Buildkite account from a new browser, device, or location, you will be asked to enter the current one-time password provided by your authenticator app. You can always reconfigure or deactivate 2FA if you need to. This can be done in the **Two-Factor Authentication** area in **Personal Settings** for your Buildkite account in the dashboard. ##### Recovering access after losing recovery codes If you are locked out of your Buildkite account with two-factor authentication enabled and have no recovery codes, there is still a way to regain access to your Buildkite builds. You need to ask the administrator of your Buildkite organization to remove your account. Next, contact support@buildkite.com and ask for your account to be deleted. Once it's deleted, you can create a new one. 
##### Enforcing two-factor authentication for the whole organization Organization administrators who would like to enforce two-factor authentication across their entire organization can do so by following the [Enforce 2FA](/docs/platform/team-management/enforce-2fa) guide. [1Password]: [OTP Auth]: [Authy]: [Duo Mobile]: [Google Authenticator]: --- ### Audit log URL: https://buildkite.com/docs/platform/audit-log #### Audit log The **Audit Log** is an interactive track record of all organization activity. > 📘 Enterprise plan feature and storage period > The audit/activity log feature is only available to Buildkite customers on the [Enterprise](https://buildkite.com/pricing) plan, and is only accessible to Buildkite organization administrators. > **Audit Log** events are stored indefinitely and can be accessed in the [Buildkite Pipelines](/docs/pipelines) web interface for up to 12 months. After 12 months, **Audit Log** events can be accessed using [GraphQL](/docs/apis/graphql-api). To access the **Audit Log** feature: 1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. Select **Audit** > **Audit Log** to access your organization's [**Audit Log**](https://buildkite.com/organizations/~/audit-log) page. The Audit Log contains two tabs: - **Events** - lists all the events that take place within your Buildkite organization. Learn more about which events are logged in [Logged events](#logged-events). - **Query & Export** - allows you to query and export your Buildkite organization's audit log using the [GraphQL API](/docs/apis/graphql-api). You can find more details about the available GraphQL audit event types in the [GraphQL explorer](/docs/apis/graphql-api#getting-started). ##### Search events The **Events** tab has a search bar to filter events by type. The search supports the following syntax: - Use `type:EVENT_TYPE` to include events of a specific type. 
For example: `type:PIPELINE_CREATED`. - Use `-type:EVENT_TYPE` to exclude events of a specific type. For example: `-type:SECRET_READ`. - Combine multiple space-separated terms to search for more than one event type. Positive `type:` filters use `OR` logic, matching any of the specified types. Negative `-type:` filters use `AND-NOT` logic, excluding all specified types. For example, `type:TEAM_CREATED type:TEAM_DELETED -type:TEAM_UPDATED` returns events where the type is either `TEAM_CREATED` or `TEAM_DELETED`, but not `TEAM_UPDATED`. The search has the following constraints: - Maximum of three unique terms (positive and negative combined) - Maximum of 250 characters for the query string - Only events from the last 90 days are returned The available event type names are listed in [Logged events](#logged-events) below. ##### Logged events This section lists the events that are currently logged by Buildkite. ###### Unclustered agent tokens ``` AGENT_TOKEN_CREATED AGENT_TOKEN_REVOKED AGENT_TOKEN_UPDATED ``` ###### Access tokens ``` API_ACCESS_TOKEN_CREATED API_ACCESS_TOKEN_DELETED API_ACCESS_TOKEN_ORGANIZATION_ACCESS_REVOKED API_ACCESS_TOKEN_UPDATED USER_API_ACCESS_TOKEN_ORGANIZATION_ACCESS_ADDED USER_API_ACCESS_TOKEN_ORGANIZATION_ACCESS_REMOVED AUTHORIZATION_CREATED AUTHORIZATION_DELETED ``` ###### User account management ``` USER_EMAIL_CREATED USER_EMAIL_DELETED USER_EMAIL_MARKED_PRIMARY USER_EMAIL_VERIFIED USER_PASSWORD_RESET USER_PASSWORD_RESET_REQUESTED USER_TOTP_ACTIVATED USER_TOTP_CREATED USER_TOTP_DELETED USER_UPDATED ``` ###### Notifications ``` NOTIFICATION_SERVICE_BROKEN NOTIFICATION_SERVICE_CREATED NOTIFICATION_SERVICE_DELETED NOTIFICATION_SERVICE_DISABLED NOTIFICATION_SERVICE_ENABLED NOTIFICATION_SERVICE_UPDATED ``` ###### Organization management ``` ORGANIZATION_CREATED ORGANIZATION_DELETED ORGANIZATION_TEAMS_DISABLED ORGANIZATION_TEAMS_ENABLED ORGANIZATION_UPDATED ORGANIZATION_BANNER_CREATED ORGANIZATION_BANNER_DELETED ORGANIZATION_BANNER_UPDATED 
ORGANIZATION_INVITATION_ACCEPTED ORGANIZATION_INVITATION_CREATED ORGANIZATION_INVITATION_RESENT ORGANIZATION_INVITATION_REVOKED ORGANIZATION_MEMBER_CREATED ORGANIZATION_MEMBER_DELETED ORGANIZATION_MEMBER_UPDATED ORGANIZATION_BUILD_EXPORT_UPDATED ``` ###### Buildkite subscriptions ``` SUBSCRIPTION_PLAN_CHANGED SUBSCRIPTION_PLAN_CHANGE_SCHEDULED SUBSCRIPTION_PLAN_ADDED ``` ###### Pipelines ``` PIPELINE_CREATED PIPELINE_DELETED PIPELINE_UPDATED PIPELINE_WEBHOOK_URL_ROTATED PIPELINE_SCHEDULE_CREATED PIPELINE_SCHEDULE_DELETED PIPELINE_SCHEDULE_UPDATED PIPELINE_TEMPLATE_CREATED PIPELINE_TEMPLATE_DELETED PIPELINE_TEMPLATE_UPDATED PIPELINE_VISIBILITY_CHANGED ``` ###### Team management ``` TEAM_CREATED TEAM_DELETED TEAM_UPDATED TEAM_MEMBER_CREATED TEAM_MEMBER_DELETED TEAM_MEMBER_UPDATED ``` ###### For Buildkite Pipelines ``` TEAM_PIPELINE_CREATED TEAM_PIPELINE_DELETED TEAM_PIPELINE_UPDATED ``` ###### For Buildkite Package Registries ``` TEAM_REGISTRY_CREATED TEAM_REGISTRY_UPDATED TEAM_REGISTRY_DELETED ``` ###### For Buildkite Test Engine ``` TEAM_SUITE_CREATED TEAM_SUITE_UPDATED TEAM_SUITE_DELETED ``` ###### Single-sign on provider ``` SSO_PROVIDER_CREATED SSO_PROVIDER_DELETED SSO_PROVIDER_DISABLED SSO_PROVIDER_ENABLED SSO_PROVIDER_UPDATED ``` ###### Source control management ``` SCM_SERVICE_CREATED SCM_SERVICE_DELETED SCM_SERVICE_UPDATED SCM_REPOSITORY_HOST_UPDATED SCM_REPOSITORY_HOST_CREATED SCM_REPOSITORY_HOST_DESTROYED SCM_PIPELINE_SETTINGS_CREATED SCM_PIPELINE_SETTINGS_DELETED SCM_PIPELINE_SETTINGS_UPDATED ``` ###### Test Engine ``` SUITE_API_TOKEN_REGENERATED_EVENT SUITE_CREATED SUITE_DELETED SUITE_UPDATED SUITE_VISIBILITY_CHANGED SUITE_MONITOR_CREATED SUITE_MONITOR_DELETED SUITE_MONITOR_UPDATED ``` ###### Buildkite secrets ``` SECRET_CREATED SECRET_DELETED SECRET_QUERIED SECRET_READ SECRET_UPDATED ``` ###### Cluster management ``` CLUSTER_CREATED CLUSTER_DELETED CLUSTER_UPDATED CLUSTER_QUEUE_CREATED CLUSTER_QUEUE_DELETED CLUSTER_QUEUE_UPDATED CLUSTER_TOKEN_CREATED 
CLUSTER_TOKEN_DELETED CLUSTER_TOKEN_UPDATED CLUSTER_QUEUE_TOKEN_CREATED CLUSTER_QUEUE_TOKEN_UPDATED CLUSTER_QUEUE_TOKEN_DELETED CLUSTER_PERMISSION_CREATED CLUSTER_PERMISSION_DELETED ``` ###### Buildkite Package Registries ``` REGISTRY_CREATED REGISTRY_UPDATED REGISTRY_DELETED ``` ###### Other systems You can also set up [Amazon EventBridge](/docs/pipelines/integrations/observability/amazon-eventbridge) to stream Audit Log events. --- ### Emojis URL: https://buildkite.com/docs/platform/emojis #### Emojis Buildkite supports over 300 custom emojis that you can use in your Buildkite [pipelines](/docs/pipelines/configure), including the terminal output of builds, as well as in [test suites](/docs/test-engine/test-suites) and [registries](/docs/package-registries/registries/manage). To use an emoji, write its name between colons, like `:buildkite:`, which shows up as :buildkite:. Explore the full list of Buildkite-specific emojis at [emoji.buildkite.com](https://emoji.buildkite.com). ##### Adding custom emojis Add your own emoji by opening a [pull request](https://github.com/buildkite/emojis?tab=readme-ov-file#contributing-a-new-emoji) containing a 64x64 PNG image and a name to the emoji repository. > 🚧 Buildkite emojis in other tools > Buildkite loads custom emojis as [images](https://github.com/buildkite/emojis). Other tools, such as GitHub, might not display the images correctly, and will only show the `:text-form:`. --- ### Overview URL: https://buildkite.com/docs/platform/cli #### The Buildkite CLI The Buildkite CLI is a command-line interface (CLI) tool for interacting directly with the Buildkite platform itself. This tool provides command line/terminal access to work with a subset of the Buildkite platform's features, as you normally would through its web interface. 
Using the Buildkite CLI, you can manage Buildkite agents and their configuration, work with a Buildkite pipeline's builds, control job execution, and manipulate their artifacts, along with several other actions. ##### Installation The Buildkite CLI can be installed on all major platforms. Learn more about how to install the tool on your platform in [Buildkite CLI installation](/docs/platform/cli/installation). ##### Usage Once you've installed the Buildkite CLI, you can start using it by typing `bk` followed by a specific command at your command-line prompt. To explore the `bk` command's comprehensive set of command categories, enter `bk` at the command prompt to see the list of available commands, and consult the [Command-line reference](/docs/platform/cli/reference) for more detailed information. ##### Configuration The Buildkite CLI requires an API access token to interact with Buildkite and your Buildkite organizations. Learn more about how to configure these API access tokens in [Buildkite CLI configuration](/docs/platform/cli/configuration). 
##### Debian/Ubuntu

Ensure you have `curl` and `gpg` installed first:

```sh
sudo apt update && sudo apt install curl gpg -y
```

Install the signing key (creating the keyring directory first, in case it doesn't already exist):

```sh
sudo mkdir -p /etc/apt/keyrings
curl -fsSL "https://packages.buildkite.com/buildkite/cli-deb/gpgkey" | sudo gpg --dearmor -o /etc/apt/keyrings/buildkite_cli-deb-archive-keyring.gpg
```

Configure the registry:

```sh
echo -e "deb [signed-by=/etc/apt/keyrings/buildkite_cli-deb-archive-keyring.gpg] https://packages.buildkite.com/buildkite/cli-deb/any/ any main\ndeb-src [signed-by=/etc/apt/keyrings/buildkite_cli-deb-archive-keyring.gpg] https://packages.buildkite.com/buildkite/cli-deb/any/ any main" | sudo tee /etc/apt/sources.list.d/buildkite-buildkite-cli-deb.list
```

Install the Buildkite CLI:

```sh
sudo apt update && sudo apt install -y bk
```

##### Red Hat/CentOS

Configure the registry:

```sh
echo -e "[cli-rpm]\nname=Buildkite CLI\nbaseurl=https://packages.buildkite.com/buildkite/cli-rpm/rpm_any/rpm_any/\$basearch\nenabled=1\nrepo_gpgcheck=1\ngpgcheck=0\ngpgkey=https://packages.buildkite.com/buildkite/cli-rpm/gpgkey\npriority=1" | sudo tee /etc/yum.repos.d/cli-rpm.repo
```

Then, install the Buildkite CLI:

```sh
sudo dnf install -y bk
```

##### macOS

The Buildkite CLI is available from the Buildkite [Homebrew](http://brew.sh/) tap (the [Buildkite Homebrew formulae](https://github.com/buildkite/homebrew-buildkite) repository), which is the recommended way to install the CLI on macOS. To install the Buildkite CLI on macOS, run:

```sh
brew install buildkite/buildkite/bk@3
```

##### Windows

1. Download the latest Windows release from the [Buildkite CLI releases](https://github.com/buildkite/cli/releases) page.
2. Extract the files to a folder of your choice.
3. Run `bk.exe` from a command prompt.

> 📘
> The Buildkite CLI can also be installed into Windows Subsystem for Linux (WSL). 
##### Manual installation If your system is not listed above, you can manually install a binary from the [Buildkite CLI releases](https://github.com/buildkite/cli/releases) page. --- ### Configuration URL: https://buildkite.com/docs/platform/cli/configuration #### Buildkite CLI configuration The Buildkite CLI uses both the [REST](/docs/apis/rest-api) and [GraphQL](/docs/apis/graphql-api) APIs to interact with Buildkite, and therefore requires the configuration of an API access token. ##### Authenticate using OAuth You can authenticate the Buildkite CLI using OAuth with the [`bk auth login`](/docs/platform/cli/reference/auth#login-auth) command, which opens your browser to complete the authentication flow. By default, `bk auth login` requests all available REST API scopes. The Buildkite platform enforces restrictions server-side, so the issued token only grants permissions that your Buildkite user account actually has. The `graphql` scope is excluded from this process due to its unscoped nature. To restrict the scopes requested during OAuth login, use the `--scopes` flag. For example, `--scopes "read_only"` requests only read access. You can also combine scope groups with individual scopes, such as `--scopes "read_only write_builds"`. Learn more about available scopes in [Token scopes](/docs/apis/managing-api-tokens#token-scopes). > 📘 Restricting CLI token scopes > For organizations that enforce the principle of least privilege, use `--scopes` to issue CLI tokens with only the minimum scopes required. Without `--scopes`, the token is issued with all scopes that your account has permission for. ##### Create an API access token for the Buildkite CLI To create a new API access token: 1. Select your user profile icon > [**Personal Settings**](https://buildkite.com/user/settings) in the global navigation. 1. Select **API Access Tokens** to access your [**API Access Tokens**](https://buildkite.com/user/api-access-tokens) page. 1. 
Select **New API Access Token** to open the [**New API Access Token**](https://buildkite.com/user/api-access-tokens/new) page. 1. Specify a **Description** and the **Organization Access** (that is, the specific Buildkite organization) for this token. 1. Once you have selected the required **REST API Scopes** and **Enable GraphQL API access** for the token, retain a copy of your API access token's value in a secure location. **Note:** You can also use the following **New API Access Token** page links with pre-set fields to create these API access tokens: - [New API access token with description](https://buildkite.com/user/api-access-tokens/new?description=Buildkite%20CLI)—pre-sets the **Description** field with `Buildkite CLI`. - [New API access token with description and API scopes](https://buildkite.com/user/api-access-tokens/new?description=Buildkite%20CLI&scopes%5B%5D=read_agents&scopes%5B%5D=write_agents&scopes%5B%5D=read_clusters&scopes%5B%5D=write_clusters&scopes%5B%5D=read_teams&scopes%5B%5D=write_teams&scopes%5B%5D=read_artifacts&scopes%5B%5D=write_artifacts&scopes%5B%5D=read_builds&scopes%5B%5D=write_builds&scopes%5B%5D=read_build_logs&scopes%5B%5D=read_organizations&scopes%5B%5D=read_pipelines&scopes%5B%5D=write_pipelines&scopes%5B%5D=read_user&scopes%5B%5D=read_suites&scopes%5B%5D=write_suites&scopes%5B%5D=read_registries&scopes%5B%5D=write_registries&scopes%5B%5D=delete_registries&scopes%5B%5D=read_packages&scopes%5B%5D=write_packages&scopes%5B%5D=delete_packages&scopes%5B%5D=graphql)—pre-sets the **Description** field with `Buildkite CLI`, along with all the required **REST API Scopes** and **Enable GraphQL API access** options already selected. If you use one of these links, you must still specify the Buildkite organization (in **Organization Access**) for this API access token. 
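After creating the token, you can confirm it works and see which scopes it was granted before configuring the CLI. A minimal sketch using the REST API's access token endpoint; `BUILDKITE_API_TOKEN` is an assumed environment variable holding your token's value, and the request is skipped when it isn't set:

```shell
# Inspect the current token via the REST Access Token API endpoint.
# BUILDKITE_API_TOKEN is an assumed variable name for this example.
API_URL="https://api.buildkite.com/v2/access-token"
if [ -n "${BUILDKITE_API_TOKEN:-}" ]; then
  # Returns the token's UUID and the scopes it was granted.
  curl -s -H "Authorization: Bearer $BUILDKITE_API_TOKEN" "$API_URL"
else
  echo "Set BUILDKITE_API_TOKEN to verify your token." >&2
fi
```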
##### Configure the Buildkite CLI with your API access token Once you have [created your API access token](#create-an-api-access-token-for-the-buildkite-cli), you'll need to configure the Buildkite CLI with this token. To do this: 1. Run the following command: ```bash bk configure ``` 1. When prompted for `Organization slug`, specify the slug for your Buildkite organization. 1. When prompted for `API Token`, specify the value for your configured API access token. **Note:** Upon successfully running this command for the first time, a new file is created at `$HOME/.config/bk.yaml`, which stores the Buildkite organization and its API access token configuration for your local Buildkite CLI. ###### Using command flags You can also run the `bk configure` command with the `--org` and `--token` flags, each of which accepts either a literal value or an environment variable for the Buildkite organization slug and API access token, respectively. For example: ```bash bk configure --org my-buildkite-organization --token $BUILDKITE_API_TOKEN ``` ###### Command behavior and configuration files The `bk configure` command is directory-specific, and running this command also creates a file called `.bk.yaml` in your current directory, which records the current Buildkite organization that your `bk` command is configured to work with from this current directory. Attempting to run this command again in the same directory results in an error (due to the presence of a `.bk.yaml` file). Instead: - You can [configure your Buildkite CLI tool to work with other Buildkite organizations](#configure-the-buildkite-cli-with-multiple-organizations). - If your Buildkite CLI is already configured with multiple organizations, you can [choose a different Buildkite organization](#select-a-configured-organization) for it to work with. 
If you run this command in a new directory (without a `.bk.yaml` file), and you specify a different API access token value for a Buildkite organization which has already been configured in `$HOME/.config/bk.yaml`, then this new API access token replaces the existing one configured in this file for that Buildkite organization. ##### Configure the Buildkite CLI with multiple organizations Some users may have access to multiple Buildkite organizations: one for their company, and others for open-source or personal work. The Buildkite CLI tool allows you to work with multiple Buildkite organizations. To configure the Buildkite CLI tool with another Buildkite organization: 1. Ensure you have [created individual API access tokens](#create-an-api-access-token-for-the-buildkite-cli) for each Buildkite organization to configure in the Buildkite CLI tool. 1. Run the following command: ```bash bk configure add ``` 1. When prompted for `Organization slug`, specify the slug for the new Buildkite organization to add to the Buildkite CLI. 1. When prompted for `API Token`, specify the value for your configured API access token for this organization. **Note:** Upon success, a new Buildkite organization and corresponding API access token entry is added to your `$HOME/.config/bk.yaml`. This file stores all currently configured Buildkite organizations and their respective API access tokens for your local Buildkite CLI. ##### Select a configured organization If your Buildkite CLI tool has been [configured with multiple Buildkite organizations](#configure-the-buildkite-cli-with-multiple-organizations), you can switch from your current/active Buildkite organization to another. To do this: 1. Run the following command: ```bash bk use ``` 1. Use the cursor to select another configured Buildkite organization and make it the current/active one. All subsequent `bk` commands will operate with the new active organization. 
**Notes:** * If you already know the slug of the other Buildkite organization you're switching to, you can specify this value immediately after the `bk use` command, for example, `bk use my-other-organization`. * Upon success, the `.bk.yaml` file in your current directory is updated with your current/active Buildkite organization. --- ### Preflight URL: https://buildkite.com/docs/platform/cli/preflight #### Preflight > 🚧 Experimental feature > The Preflight feature is currently in experimental stage. Its behavior is subject to change without notice. To provide feedback, please contact Buildkite's Support team at [support@buildkite.com](mailto:support@buildkite.com). Preflight is a subcommand of the Buildkite CLI (`bk preflight`) that runs your uncommitted local changes against Buildkite Pipelines and monitors failures as they happen. It is designed for use with a coding agent, triggering a build against the changes in your working tree and surfacing failures for the agent to iterate against. The Preflight (`bk preflight`) command: - Snapshots your uncommitted changes (staged, unstaged, and untracked files) as a temporary commit on a new branch, without touching your working tree. Files matched by `.gitignore` are excluded. - Pushes that commit to a branch prefixed with `bk/preflight/` on the repository's `origin` remote, then triggers a build on your chosen Buildkite pipeline. - Monitors failures in your terminal in real time and exits as soon as the build starts failing. - Cleans up the temporary branch automatically when the build finishes. ##### Before you begin You'll need: - The [Buildkite CLI](/docs/platform/cli/installation) version 3.40.0 or later. - A [configured API access token](/docs/platform/cli/configuration) with the `read_builds`, `write_builds`, and `read_pipelines` scopes. The `read_suites` scope is also required to use Preflight with Buildkite Test Engine. - Git commit and push access to the repository. 
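You can script the version requirement from the list above. This sketch compares the installed version against the 3.40.0 minimum using `sort -V`; extracting the first `x.y.z` token from `bk version` output is an assumption about its format, so adjust if yours prints differently:

```shell
# Succeeds when version $1 is at least version $2 (plain version comparison).
min_version_ok() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Pull the first x.y.z-looking token out of `bk version`, if bk is installed.
installed=$({ bk version 2>/dev/null || true; } | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)

if [ -n "$installed" ] && min_version_ok "$installed" "3.40.0"; then
  echo "bk $installed supports Preflight"
else
  echo "bk is missing or older than 3.40.0; upgrade before using Preflight"
fi
```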
##### Install or upgrade the Buildkite CLI To check your current Buildkite CLI version, run: ```bash bk version ``` To upgrade using Homebrew: ```bash brew upgrade buildkite/buildkite/bk@3 ``` To upgrade using mise: ```bash mise use -g github:buildkite/cli@latest ``` ###### Install the Preflight skill To install the Preflight skill into your coding agent: ```bash bk skill add buildkite-preflight ``` Using the skill requires [Buildkite CLI](/docs/platform/cli/installation) version 3.40.0 or later. ##### Run a Preflight build To run a build with Preflight enabled: ```bash bk preflight --pipeline my-org/my-pipeline --watch ``` The `--pipeline` flag accepts either `{org-slug}/{pipeline-slug}` or just `{pipeline-slug}` if your Buildkite organization is already set in your `bk` config. In `--watch` mode, Preflight exits with code `0` if all jobs pass, `10` when the build first enters the failing state (the default), or `9` if the build completes with failures. See [exit codes](/docs/platform/cli/preflight#exit-codes) for the full list. 
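In automation, these exit codes can drive what happens next. A small sketch that maps the documented codes to messages (the wording of each message is illustrative; the codes themselves are from this page's exit code table):

```shell
# Map a `bk preflight` exit code to a human-readable outcome.
preflight_outcome() {
  case "$1" in
    0)   echo "all jobs passed" ;;
    9)   echo "build completed with failures" ;;
    10)  echo "build failing (exited early)" ;;
    11)  echo "build incomplete: still running or blocked" ;;
    12)  echo "unknown build state" ;;
    130) echo "aborted by user" ;;
    *)   echo "generic error" ;;
  esac
}

# Typical use after a watched run:
#   bk preflight --pipeline my-org/my-pipeline --watch
#   echo "Preflight: $(preflight_outcome $?)"
preflight_outcome 10   # prints "build failing (exited early)"
```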
The following examples show common variations:

```bash
# Start the build and exit immediately (don't wait)
bk preflight --pipeline my-org/my-pipeline --no-watch

# Skip confirmation prompts
bk preflight --pipeline my-org/my-pipeline --watch --yes

# Use plain text output in non-interactive environments
bk preflight --pipeline my-org/my-pipeline --watch --text

# Use JSONL output when another tool needs structured events
bk preflight --pipeline my-org/my-pipeline --watch --json

# Wait up to 30s for Test Engine results after build completion
bk preflight --pipeline my-org/my-pipeline --watch --await-test-results

# Don't cancel the build or remove the branch on exit
bk preflight --pipeline my-org/my-pipeline --watch --no-cleanup

# Wait for the build to reach a terminal state instead of exiting on the failing state
bk preflight --pipeline my-org/my-pipeline --watch --exit-on build-terminal
```

##### Build summary On exit, Preflight prints a summary of the jobs that failed. When integrated with Buildkite [Test Engine](/docs/test-engine), the summary also includes test results. This integration requires the `read_suites` scope on your [API access token](/docs/platform/cli/configuration). A test with at least one passed execution is treated as passed, and a test with only failed executions is treated as failed. Tests that pass on retry are not counted as failures. Tests with only pending, skipped, or unknown executions are excluded from the summary. Preflight reports up to 10 test failures in the terminal output, and up to 100 test failures in JSON events. ##### Customizing pipelines for Preflight Preflight sets the following environment variables on the build, so you can customize pipeline behavior for preflight runs: - `PREFLIGHT`: Set to `true`. - `PREFLIGHT_SOURCE_COMMIT`: The HEAD commit when Preflight was run. - `PREFLIGHT_SOURCE_BRANCH`: The current branch when Preflight was run. 
Use these with [conditionals](/docs/pipelines/configure/conditionals) and [dynamic pipelines](/docs/pipelines/configure/dynamic-pipelines) to run a subset of a pipeline or otherwise modify its behavior under Preflight. To skip linting on builds triggered by Preflight:

```yaml
steps:
  - command: ./scripts/lint.sh
    label: lints
    if: build.env("PREFLIGHT") != "true"
```

To run a test suite with `--fast-fail` when Preflight is in use:

```yaml
steps:
  - label: ":test_tube: Tests"
    command: |
      if [ "$PREFLIGHT" = "true" ]; then
        ./scripts/test.sh --fast-fail
      else
        ./scripts/test.sh
      fi
```

##### Exit codes

| Exit code | Meaning |
|-----------|---------|
| `0` | All jobs passed |
| `1` | Generic error |
| `9` | Build completed with failures |
| `10` | Build incomplete: failures already detected |
| `11` | Build incomplete: still running or blocked |
| `12` | Unknown build state |
| `130` | Aborted by user (Ctrl+C) |

##### Flags

| Flag | Default | Description |
|------|---------|-------------|
| `--pipeline`, `-p` | — | Pipeline slug (`{slug}` or `{org}/{slug}`); required |
| `--watch` / `--no-watch` | — | Watch the build until completion |
| `--interval` | `2` | Polling interval in seconds |
| `--exit-on` | `build-failing` | Condition that triggers exit. `build-failing` exits when the build enters the failing state; `build-terminal` exits when the build reaches a terminal state. |
| `--no-cleanup` | `false` | Keep the remote preflight branch after the build |
| `--await-test-results` | — | Wait for Test Engine summaries after build completion |
| `--text` | `false` | Use plain text output |
| `--json` | `false` | Emit one JSON object per event (JSONL) |
| `--yes`, `-y` | `false` | Skip confirmation prompts |
| `--no-input` | `false` | Disable all interactive prompts |
| `--quiet`, `-q` | `false` | Suppress progress output |
| `--debug` | `false` | Enable debug output for API calls |

--- ### Overview URL: https://buildkite.com/docs/platform/cli/reference #### Command-line reference overview The [Buildkite CLI](/docs/platform/cli) (`bk`) allows you to interact with the Buildkite platform through the command line. The comprehensive set of `bk` commands, along with categories of sub-commands, lets you manage Buildkite agents and their configuration, work with a Buildkite pipeline's builds, control job execution, and manipulate their artifacts. These command sets can be essential for managing your build infrastructure, automating tasks, and troubleshooting issues. 
The following pages describe how to use the `bk` command, organized by its command category: - [`agent`](/docs/platform/cli/reference/agent) - [`api`](/docs/platform/cli/reference/api) - [`artifacts`](/docs/platform/cli/reference/artifacts) - [`auth`](/docs/platform/cli/reference/auth) - [`build`](/docs/platform/cli/reference/build) - [`cluster`](/docs/platform/cli/reference/cluster) - [`config`](/docs/platform/cli/reference/config) - [`configure`](/docs/platform/cli/reference/configure) - [`init`](/docs/platform/cli/reference/init) - [`job`](/docs/platform/cli/reference/job) - [`maintainer`](/docs/platform/cli/reference/maintainer) - [`organization`](/docs/platform/cli/reference/organization) - [`package`](/docs/platform/cli/reference/package) - [`pipeline`](/docs/platform/cli/reference/pipeline) - [`secret`](/docs/platform/cli/reference/secret) - [`user`](/docs/platform/cli/reference/user) - [`version`](/docs/platform/cli/reference/version) --- ### agent URL: https://buildkite.com/docs/platform/cli/reference/agent #### Buildkite CLI agent command The `bk agent` command allows you to manage Buildkite agents from the command line. ##### Commands | Command | Description | | --- | --- | | `bk agent install` | Install the buildkite-agent binary locally. | | `bk agent run` | Run an ephemeral buildkite-agent locally. | | `bk agent pause` | Pause a Buildkite agent. | | `bk agent list` | List agents. | | `bk agent resume` | Resume a Buildkite agent. | | `bk agent stop` | Stop Buildkite agents. | | `bk agent view` | View details of an agent. | ##### Install agent Install the buildkite-agent binary locally. 
```bash bk agent install [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `--cluster-uuid=STRING` | Cluster UUID to create the agent token on (default: the "Default" cluster) | | `--config-path=STRING` | Path to write the agent config file | | `--debug` | Enable debug output for REST API calls | | `--dest=STRING` | Destination directory for the binary | | `--no-token` | Skip creating an agent token and config file | | `--version="latest"` | Specify an agent version to install | ###### Examples Install the latest version of the agent: ```bash bk agent install ``` Install a specific version: ```bash bk agent install --version "3.112.0" ``` Install to a custom location: ```bash bk agent install --dest ~/.local/bin ``` Install without creating a token/config: ```bash bk agent install --no-token ``` ##### Run agent Run an ephemeral buildkite-agent locally. ```bash bk agent run [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `--cluster-uuid=STRING` | Cluster UUID to create the agent token on (default: the "Default" cluster) | | `--debug` | Enable debug output for REST API calls | | `--queue="default"` | Queue for the agent to listen on | | `--version="latest"` | Specify an agent version to run | ###### Examples Run the latest agent on the Default cluster: ```bash bk agent run ``` Run a specific version: ```bash bk agent run --version "3.112.0" ``` Run on a specific cluster: ```bash bk agent run --cluster-uuid "01234567-89ab-cdef-0123-456789abcdef" ``` Run on a specific queue: ```bash bk agent run --queue "deploy" ``` ##### Pause an agent Pause a Buildkite agent. 
```bash bk agent pause <agent-id> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<agent-id>` | Agent ID to pause | ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | | `--note=STRING` | A descriptive note to record why the agent is paused | | `--timeout-in-minutes=5` | Timeout after which the agent is automatically resumed, in minutes | ###### Examples Pause an agent for 5 minutes (default): ```bash bk agent pause 0198d108-a532-4a62-9bd7-b2e744bf5c45 ``` Pause an agent with a note: ```bash bk agent pause 0198d108-a532-4a62-9bd7-b2e744bf5c45 --note "Maintenance scheduled" ``` Pause an agent with a note and 60 minute timeout: ```bash bk agent pause 0198d108-a532-4a62-9bd7-b2e744bf5c45 --note "too many llamas" --timeout-in-minutes 60 ``` Pause for a short time (15 minutes) during deployment: ```bash bk agent pause 0198d108-a532-4a62-9bd7-b2e744bf5c45 --note "Deploy in progress" --timeout-in-minutes 15 ``` ##### List agents List agents. ```bash bk agent list [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. 
One of: json, yaml, text | | `--debug` | Enable debug output for REST API calls | | `--hostname=STRING` | Filter agents by their hostname | | `--json` | Output as JSON | | `--limit=100` | Maximum number of agents to return | | `--name=STRING` | Filter agents by their name | | `--per-page=30` | Number of agents per page | | `--state=STRING` | Filter agents by state (running, idle, paused) | | `--tags=TAGS,...` | Filter agents by tags | | `--text` | Output as text | | `--version=STRING` | Filter agents by their version | | `--yaml` | Output as YAML | ###### Examples List all agents: ```bash bk agent list ``` List agents with JSON output: ```bash bk agent list --output json ``` List only running agents (currently executing jobs): ```bash bk agent list --state running ``` List only idle agents (connected but not running jobs): ```bash bk agent list --state idle ``` List only paused agents: ```bash bk agent list --state paused ``` Filter agents by hostname: ```bash bk agent list --hostname my-server-01 ``` Combine state and hostname filters: ```bash bk agent list --state idle --hostname my-server-01 ``` Filter agents by tags: ```bash bk agent list --tags queue=default ``` Filter agents by multiple tags (all must match): ```bash bk agent list --tags queue=default --tags os=linux ``` Multiple filters with output format: ```bash bk agent list --state running --version 3.107.2 --output json ``` ##### Resume an agent Resume a Buildkite agent. ```bash bk agent resume <agent-id> ``` ###### Arguments | Argument | Description | | --- | --- | | `<agent-id>` | Agent ID to resume | ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | ###### Examples Resume an agent: ```bash bk agent resume 0198d108-a532-4a62-9bd7-b2e744bf5c45 ``` ##### Stop agents Stop Buildkite agents. ```bash bk agent stop <agent-id> [<agent-id> ...] 
[flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-l`, `--limit=5` | Limit parallel API requests | | `--debug` | Enable debug output for REST API calls | | `--force` | Force stop the agent, terminating any jobs in progress | ###### Examples Stop a single agent: ```bash bk agent stop 0198d108-a532-4a62-9bd7-b2e744bf5c45 ``` Stop multiple agents: ```bash bk agent stop agent-1 agent-2 agent-3 ``` Force stop an agent: ```bash bk agent stop 0198d108-a532-4a62-9bd7-b2e744bf5c45 --force ``` Stop agents from STDIN: ```bash cat agent-ids.txt | bk agent stop ``` ##### View an agent View details of an agent. ```bash bk agent view <agent-id> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<agent-id>` | Agent ID to view | ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `-w`, `--web` | Open agent in a browser | | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples View an agent: ```bash bk agent view 0198d108-a532-4a62-9bd7-b2e744bf5c45 ``` View an agent with organization slug: ```bash bk agent view my-org/0198d108-a532-4a62-9bd7-b2e744bf5c45 ``` Open agent in browser: ```bash bk agent view 0198d108-a532-4a62-9bd7-b2e744bf5c45 --web ``` View agent as JSON: ```bash bk agent view 0198d108-a532-4a62-9bd7-b2e744bf5c45 --output json ``` --- ### api URL: https://buildkite.com/docs/platform/cli/reference/api #### Buildkite CLI api command The `bk api` command allows you to interact with either the REST or GraphQL Buildkite APIs from the command line. 
```bash bk api [<endpoint>] [flags] ``` ##### Arguments | Argument | Description | | --- | --- | | `[<endpoint>]` | API endpoint to call | ##### Flags | Flag | Description | | --- | --- | | `-d`, `--data=STRING` | Data to send in the request body | | `-f`, `--file=STRING` | File containing GraphQL query | | `-H`, `--headers=HEADERS,...` | Headers to include in the request | | `-X`, `--method=STRING` | HTTP method to use | | `--analytics` | Use the Test Analytics endpoint | | `--debug` | Enable debug output for REST API calls | | `--verbose` | Enable verbose output (currently only provides information about rate limit exceeded retries) | ##### Examples To get a build: ```bash bk api /pipelines/example-pipeline/builds/420 ``` To create a pipeline: ```bash bk api --method POST /pipelines --data ' { "name": "My Cool Pipeline", "repository": "git@github.com:acme-inc/my-pipeline.git", "configuration": "steps:\n - command: env" } ' ``` To update a cluster: ```bash bk api --method PUT /clusters/CLUSTER_UUID --data ' { "name": "My Updated Cluster" } ' ``` To get all test suites: ```bash bk api --analytics /suites ``` Run GraphQL query from file: ```bash bk api --file get_build.graphql ``` --- ### artifacts URL: https://buildkite.com/docs/platform/cli/reference/artifacts #### Buildkite CLI artifacts command The `bk artifacts` command allows you to manage build artifacts from the command line. ##### Commands | Command | Description | | --- | --- | | `bk artifacts download` | Download artifacts from a build. | | `bk artifacts list` | List artifacts for a build or a job in a build. | ##### Download an artifact Download artifacts from a build. ```bash bk artifacts download [<artifact-id>] [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `[<artifact-id>]` | Artifact ID to download. If omitted, all artifacts are downloaded. Use 'bk artifacts list' to find IDs. | ###### Flags | Flag | Description | | --- | --- | | `-b`, `--build=STRING` | Build number containing the artifact. 
If omitted, the most recent build on the current branch will be used. | | `-j`, `--job-uuid=STRING` | The job UUID containing the artifact. | | `-p`, `--pipeline=STRING` | The pipeline containing the artifact. This can be a {pipeline slug} or in the format {org slug}/{pipeline slug}. If omitted, it will be resolved using the current directory. | | `--debug` | Enable debug output for REST API calls | ###### Examples Download all artifacts from the most recent build on the current branch: ```bash bk artifacts download ``` Download all artifacts from a specific build: ```bash bk artifacts download --build 429 ``` Download all artifacts from a specific job: ```bash bk artifacts download --build 429 --job-uuid 0193903e-ecd9-4c51-9156-0738da987e87 ``` Download a specific artifact: ```bash bk artifacts download 0191727d-b5ce-4576-b37d-477ae0ca830c --build 429 ``` Specify the pipeline explicitly: ```bash bk artifacts download --build 429 -p monolith ``` ##### List artifacts List artifacts for a build or a job in a build. ```bash bk artifacts list [<build-number>] [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `[<build-number>]` | Build number to list artifacts for | ###### Flags | Flag | Description | | --- | --- | | `-j`, `--job-uuid=STRING` | List artifacts for a specific job on the given build. | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `-p`, `--pipeline=STRING` | The pipeline to view. This can be a {pipeline slug} or in the format {org slug}/{pipeline slug}. If omitted, it will be resolved using the current directory. 
| | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples By default, artifacts of the most recent build for the current branch are shown: ```bash bk artifacts list ``` To list artifacts of a specific build: ```bash bk artifacts list 429 ``` To list artifacts of a specific job in a build: ```bash bk artifacts list 429 --job-uuid 0193903e-ecd9-4c51-9156-0738da987e87 ``` If not inside a repository or to use a specific pipeline, pass -p: ```bash bk artifacts list 429 -p monolith ``` --- ### auth URL: https://buildkite.com/docs/platform/cli/reference/auth #### Buildkite CLI auth command The `bk auth` command allows you to manage authorization from the command line. ##### Commands | Command | Description | | --- | --- | | `bk auth login` | Login to Buildkite using OAuth or an API token | | `bk auth logout` | Logout and remove stored credentials | | `bk auth status` | Print the current user auth status | | `bk auth switch` | Switch to a different organization | | `bk auth token` | Print the stored API token for the current organization | ##### Login auth Login to Buildkite using OAuth or an API token ```bash bk auth login [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | | `--org=STRING` | Organization slug (required with --token) | | `--scopes=""` | OAuth scopes to request | | `--token=STRING` | API token to store (non-OAuth login) | ###### Examples Login with full permissions (inherits your account's scopes): ```bash bk auth login ``` Login non-interactively with an API token: ```bash bk auth login --org my-org --token my-token ``` Login with read-only access: ```bash bk auth login --scopes read_only ``` Login with read-only plus write access to builds: ```bash bk auth login --scopes "read_only write_builds" ``` Login with specific scopes: ```bash bk auth login --scopes "read_user read_organizations 
read_clusters write_clusters" ``` ##### Logout auth Logout and remove stored credentials ```bash bk auth logout [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `--all` | Log out of all organizations | | `--debug` | Enable debug output for REST API calls | | `--org=STRING` | Organization slug (defaults to currently selected organization) | ##### Status auth Print the current user auth status ```bash bk auth status [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples List the current token session: ```bash bk auth status ``` ##### Switch auth Switch to a different organization ```bash bk auth switch [<organization>] ``` ###### Arguments | Argument | Description | | --- | --- | | `[<organization>]` | Organization slug to switch to | ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | ###### Examples Switch to the 'my-cool-org' organization: ```bash bk auth switch my-cool-org ``` Interactively select an organization: ```bash bk auth switch ``` ##### Token auth Print the stored API token for the current organization ```bash bk auth token ``` ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | ###### Examples Print the current token: ```bash bk auth token ``` Use the token in a curl request: ```bash curl -H "Authorization: Bearer $(bk auth token)" https://api.buildkite.com/v2/user ``` --- ### build URL: https://buildkite.com/docs/platform/cli/reference/build #### Buildkite CLI build command The `bk build` command allows you to manage pipeline builds from the command line. ##### Commands | Command | Description | | --- | --- | | `bk build create` | Create a new build. | | `bk build cancel` | Cancel a build. | | `bk build view` | View build information. 
| | `bk build list` | List builds. | | `bk build download` | Download resources for a build. | | `bk build rebuild` | Rebuild a build. | | `bk build watch` | Watch a build's progress in real-time. | ##### Create a build Create a new build. ```bash bk build create [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-a`, `--author=STRING` | Author of the build. Supports: "Name <email@domain.com>", "email@domain.com", "Full Name", or "username" | | `-b`, `--branch=STRING` | The branch to build. Defaults to the default branch of the pipeline. | | `-c`, `--commit="HEAD"` | The commit to build. | | `-e`, `--env=ENV` | Set environment variables for the build (KEY=VALUE) | | `-f`, `--env-file=STRING` | Set the environment variables for the build via an environment file | | `-i`, `--ignore-branch-filters` | Ignore branch filters for the pipeline | | `-m`, `--message=STRING` | Description of the build. If left blank, the commit message will be used once the build starts. | | `-M`, `--metadata=METADATA` | Set metadata for the build (KEY=VALUE) | | `-p`, `--pipeline=STRING` | The pipeline to use. This can be a {pipeline slug} or in the format {org slug}/{pipeline slug}. | | `-w`, `--web` | Open the build in a web browser after it has been created. | | `--debug` | Enable debug output for REST API calls | ###### Examples Create a new build: ```bash bk build create ``` Create a new build with environment variables set: ```bash bk build create -e "FOO=BAR" -e "BAR=BAZ" ``` Create a new build with metadata: ```bash bk build create -M "key=value" -M "foo=bar" ``` ##### Cancel a build Cancel a build. ```bash bk build cancel <number> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<number>` | Build number to cancel | ###### Flags | Flag | Description | | --- | --- | | `-p`, `--pipeline=STRING` | The pipeline to use. This can be a {pipeline slug} or in the format {org slug}/{pipeline slug}. | | `-w`, `--web` | Open the build in a web browser after it has been cancelled. 
| | `--debug` | Enable debug output for REST API calls | ###### Examples Cancel a build by number: ```bash bk build cancel 123 --pipeline my-pipeline ``` Cancel a build and open in browser: ```bash bk build cancel 123 --pipeline my-pipeline --web ``` ##### View a build View build information. ```bash bk build view [<number>] [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `[<number>]` | Build number to view (omit for most recent build) | ###### Flags | Flag | Description | | --- | --- | | `-b`, `--branch=STRING` | Filter builds to this branch. | | `-s`, `--job-states=JOB-STATES,...` | Filter jobs by state. Valid states: running, scheduled, passed, failed, canceled, skipped, not_run, broken. | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `-p`, `--pipeline=STRING` | The pipeline to use. This can be a {pipeline slug} or in the format {org slug}/{pipeline slug}. | | `-u`, `--user=STRING` | Filter builds to this user. You can use name or email. | | `-w`, `--web` | Open the build in a web browser. | | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--mine` | Filter builds to only my user. 
| | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples By default, the most recent build for the current branch is shown: ```bash bk build view ``` If not inside a repository or to use a specific pipeline, pass -p: ```bash bk build view -p monolith ``` To view a specific build: ```bash bk build view 429 ``` Add -w to any command to open the build in your web browser instead: ```bash bk build view -w 429 ``` To view the most recent build on feature-x branch: ```bash bk build view -b feature-x ``` You can filter by a user name or id: ```bash bk build view -u "alice" ``` A shortcut to view your builds is --mine: ```bash bk build view --mine ``` Filter to only show failed and broken jobs: ```bash bk build view -s failed,broken ``` To view most recent build by greg on the deploy-pipeline: ```bash bk build view -p deploy-pipeline -u "greg" ``` ##### List builds List builds. ```bash bk build list [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `-p`, `--pipeline=STRING` | The pipeline to use. This can be a {pipeline slug} or in the format {org slug}/{pipeline slug}. | | `--branch=BRANCH,...` | Filter by branch name | | `--commit=STRING` | Filter by commit SHA | | `--creator=STRING` | Filter by creator (email address or user ID) | | `--debug` | Enable debug output for REST API calls | | `--duration=STRING` | Filter by duration using the operators >, <, >=, <=, = (e.g. >5m) | ###### Examples List builds longer than 20 minutes: ```bash bk build list --duration ">20m" ``` List builds that finished in under 5 minutes: ```bash bk build list --duration "<5m" ``` List long builds (>30m) that failed on feature branches: ```bash bk build list --duration ">30m" --state failed --branch feature/ ``` ##### Download a build Download resources for a build. ```bash bk build download [<number>] [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `[<number>]` | Build number to download (omit for most recent build) | ###### Flags | Flag | Description | | --- | --- | | `-b`, `--branch=STRING` | Filter builds to this branch. 
| | `-m`, `--mine` | Filter builds to only my user. | | `-p`, `--pipeline=STRING` | The pipeline to use. This can be a {pipeline slug} or in the format {org slug}/{pipeline slug}. | | `-u`, `--user=STRING` | Filter builds to this user. You can use name or email. | | `--debug` | Enable debug output for REST API calls | ###### Examples Download build 123: ```bash bk build download 123 --pipeline my-pipeline ``` Download most recent build: ```bash bk build download --pipeline my-pipeline ``` Download most recent build on a branch: ```bash bk build download -b main --pipeline my-pipeline ``` Download most recent build by a user: ```bash bk build download --pipeline my-pipeline -u alice@hello.com ``` Download most recent build by yourself: ```bash bk build download --pipeline my-pipeline --mine ``` ##### Rebuild a build Rebuild a build. ```bash bk build rebuild [<number>] [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `[<number>]` | Build number to rebuild (omit for most recent build) | ###### Flags | Flag | Description | | --- | --- | | `-b`, `--branch=STRING` | Filter builds to this branch. | | `-m`, `--mine` | Filter builds to only my user. | | `-p`, `--pipeline=STRING` | The pipeline to use. This can be a {pipeline slug} or in the format {org slug}/{pipeline slug}. | | `-u`, `--user=STRING` | Filter builds to this user. You can use name or email. | | `-w`, `--web` | Open the build in a web browser after it has been created. 
| | `--debug` | Enable debug output for REST API calls | ###### Examples Rebuild a specific build by number: ```bash bk build rebuild 123 ``` Rebuild most recent build: ```bash bk build rebuild ``` Rebuild and open in browser: ```bash bk build rebuild 123 --web ``` Rebuild most recent build on a branch: ```bash bk build rebuild -b main ``` Rebuild most recent build by a user: ```bash bk build rebuild -u alice ``` Rebuild most recent build by yourself: ```bash bk build rebuild --mine ``` ##### Watch a build Watch a build's progress in real-time. ```bash bk build watch [<number>] [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `[<number>]` | Build number to watch (omit for most recent build) | ###### Flags | Flag | Description | | --- | --- | | `-b`, `--branch=STRING` | The branch to watch builds for. | | `-p`, `--pipeline=STRING` | The pipeline to use. This can be a {pipeline slug} or in the format {org slug}/{pipeline slug}. | | `--debug` | Enable debug output for REST API calls | | `--interval=1` | Polling interval in seconds | ###### Examples Watch the most recent build for the current branch: ```bash bk build watch --pipeline my-pipeline ``` Watch a specific build: ```bash bk build watch 429 --pipeline my-pipeline ``` Watch the most recent build on a specific branch: ```bash bk build watch -b feature-x --pipeline my-pipeline ``` Set a custom polling interval (in seconds): ```bash bk build watch --interval 5 --pipeline my-pipeline ``` --- ### cluster URL: https://buildkite.com/docs/platform/cli/reference/cluster #### Buildkite CLI cluster command The `bk cluster` command allows you to manage Buildkite organization clusters from the command line. ##### Commands | Command | Description | | --- | --- | | `bk cluster list` | List clusters. | | `bk cluster view` | View cluster information. | | `bk cluster create` | Create a new cluster. 
| | `bk cluster update` | Update a cluster. | | `bk cluster delete` | Delete a cluster. | ##### List clusters List clusters. ```bash bk cluster list [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples List all clusters: ```bash bk cluster list ``` List clusters in JSON format: ```bash bk cluster list -o json ``` ##### View a cluster View cluster information. ```bash bk cluster view <cluster-uuid> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<cluster-uuid>` | Cluster UUID to view | ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples View a cluster: ```bash bk cluster view my-cluster-uuid ``` View cluster in JSON format: ```bash bk cluster view my-cluster-uuid -o json ``` ##### Create a cluster Create a new cluster. ```bash bk cluster create --name=STRING [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `--color=STRING` | A color hex code for the cluster (e.g. #FF0000) | | `--debug` | Enable debug output for REST API calls | | `--description=STRING` | A description of the cluster | | `--emoji=STRING` | An emoji for the cluster (e.g. 
:rocket:) | | `--json` | Output as JSON | | `--name=STRING` | The name of the cluster | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples Create a cluster with just a name: ```bash bk cluster create --name "My Cluster" ``` Create a cluster with all fields: ```bash bk cluster create --name "My Cluster" --description "Runs production workloads" --emoji :rocket: --color "#FF0000" ``` Create a cluster and output as JSON: ```bash bk cluster create --name "My Cluster" -o json ``` ##### Update cluster Update a cluster. ```bash bk cluster update <cluster-uuid> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<cluster-uuid>` | Cluster UUID to update | ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `--color=STRING` | New color hex code for the cluster (e.g. #FF0000) | | `--debug` | Enable debug output for REST API calls | | `--default-queue-id=STRING` | UUID of the queue to set as the default | | `--description=STRING` | New description for the cluster | | `--emoji=STRING` | New emoji for the cluster (e.g. :rocket:) | | `--json` | Output as JSON | | `--name=STRING` | New name for the cluster | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples Update a cluster's name: ```bash bk cluster update my-cluster-uuid --name "New Name" ``` Update description and color: ```bash bk cluster update my-cluster-uuid --description "Updated description" --color "#00FF00" ``` Set the default queue: ```bash bk cluster update my-cluster-uuid --default-queue-id my-queue-uuid ``` Output the updated cluster as JSON: ```bash bk cluster update my-cluster-uuid --name "New Name" -o json ``` ##### Delete cluster Delete a cluster. 
```bash bk cluster delete <cluster-uuid> ``` ###### Arguments | Argument | Description | | --- | --- | | `<cluster-uuid>` | Cluster UUID to delete | ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | ###### Examples Delete a cluster (with confirmation prompt): ```bash bk cluster delete my-cluster-uuid ``` Delete a cluster without confirmation: ```bash bk cluster delete my-cluster-uuid --yes ``` --- ### config URL: https://buildkite.com/docs/platform/cli/reference/config #### Buildkite CLI config command The `bk config` command allows you to manage Buildkite CLI configurations from the command line. ##### Commands | Command | Description | | --- | --- | | `bk config list` | List configuration values. | | `bk config get` | Get a configuration value. | | `bk config set` | Set a configuration value. | | `bk config unset` | Remove a configuration value. | ##### List configs List configuration values. ```bash bk config list [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | | `--global` | Only show global (user) configuration | | `--local` | Only show local configuration | ###### Examples ```bash bk config list ``` ```bash bk config list --local ``` ```bash bk config list --global ``` ##### Get config Get a configuration value. ```bash bk config get <key> ``` ###### Arguments | Argument | Description | | --- | --- | | `<key>` | Configuration key to get | ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | ###### Examples ```bash bk config get output_format ``` ```bash bk config get pager ``` ##### Set config Set a configuration value. 
```bash bk config set <key> <value> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<key>` | Configuration key to set | | `<value>` | Value to set | ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | | `--local` | Save to local (.bk.yaml) instead of user config | ###### Examples Set default output format to YAML: ```bash bk config set output_format yaml ``` Disable pager globally: ```bash bk config set no_pager true ``` Set repo-specific output format: ```bash bk config set output_format text --local ``` Set a custom pager: ```bash bk config set pager "less -RS" ``` ##### Unset config Remove a configuration value. ```bash bk config unset <key> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<key>` | Configuration key to unset | ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | | `--local` | Unset from local (.bk.yaml) instead of user config | ###### Examples Reset output format to default (json): ```bash bk config unset output_format ``` Remove repo-specific setting: ```bash bk config unset output_format --local ``` Reset pager to default (less -R): ```bash bk config unset pager ``` --- ### configure URL: https://buildkite.com/docs/platform/cli/reference/configure #### Buildkite CLI configure command The `bk configure` command allows you to configure your Buildkite CLI settings from the command line. 
##### Commands | Command | Description | | --- | --- | | `bk configure add` | Add configuration for a new organization | ##### Add a new organization Add configuration for a new organization ```bash bk configure add [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | | `--force` | Force setting a new token | | `--org=STRING` | Organization slug | | `--token=STRING` | API token | ###### Examples Interactively configure a new organization: ```bash bk configure add ``` Configure a new organization non-interactively: ```bash bk configure add --org my-org --token my-token ``` --- ### init URL: https://buildkite.com/docs/platform/cli/reference/init #### Buildkite CLI init command The `bk init` command allows you to initialize a pipeline file with Buildkite Pipelines from the command line. Initialize a pipeline.yaml file ```bash bk init [flags] ``` ##### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | --- ### job URL: https://buildkite.com/docs/platform/cli/reference/job #### Buildkite CLI job command The `bk job` command allows you to manage jobs within builds from the command line. ##### Commands | Command | Description | | --- | --- | | `bk job cancel` | Cancel a job. | | `bk job list` | List jobs. | | `bk job log` | Get logs for a job. | | `bk job reprioritize` | Reprioritize a job. | | `bk job retry` | Retry a job. | | `bk job unblock` | Unblock a job. | ##### Cancel a job Cancel a job. 
```bash bk job cancel <job-id> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<job-id>` | Job ID to cancel | ###### Flags | Flag | Description | | --- | --- | | `-w`, `--web` | Open the job in a web browser after it has been cancelled | | `--debug` | Enable debug output for REST API calls | ###### Examples Cancel a job (with confirmation prompt): ```bash bk job cancel 0190046e-e199-453b-a302-a21a4d649d31 ``` Cancel a job without confirmation (useful for automation): ```bash bk job --yes cancel 0190046e-e199-453b-a302-a21a4d649d31 ``` Cancel a job and open it in browser: ```bash bk job --yes cancel 0190046e-e199-453b-a302-a21a4d649d31 --web ``` ##### List jobs List jobs. ```bash bk job list [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `-p`, `--pipeline=STRING` | Filter by pipeline slug | | `--debug` | Enable debug output for REST API calls | | `--duration=STRING` | Filter by duration using the operators >, <, >=, <=, = (e.g. >10m) | ###### Examples List jobs running longer than 10 minutes: ```bash bk job list --duration ">10m" ``` List jobs from the last hour: ```bash bk job list --since 1h ``` Combine filters: ```bash bk job list --queue test-queue --state running --duration ">10m" ``` Fetch all jobs matching filters (no limit): ```bash bk job list --duration ">10m" --no-limit ``` Order by duration (longest first): ```bash bk job list --order-by duration ``` Get JSON output for bulk operations: ```bash bk job list --queue test-queue -o json ``` ##### Log job Get logs for a job. ```bash bk job log <uuid> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<uuid>` | Job UUID to get logs for | ###### Flags | Flag | Description | | --- | --- | | `-b`, `--build-number=STRING` | The build number | | `-p`, `--pipeline=STRING` | The pipeline to use. 
This can be a {pipeline slug} or in the format {org slug}/{pipeline slug} | | `--debug` | Enable debug output for REST API calls | | `--no-timestamps` | Strip timestamp prefixes from log output | ###### Examples Get a job's logs by UUID (requires --pipeline and --build): ```bash bk job log 0190046e-e199-453b-a302-a21a4d649d31 -p my-pipeline -b 123 ``` If inside a git repository with a configured pipeline: ```bash bk job log 0190046e-e199-453b-a302-a21a4d649d31 -b 123 ``` Strip timestamp prefixes from output: ```bash bk job log 0190046e-e199-453b-a302-a21a4d649d31 -p my-pipeline -b 123 --no-timestamps ``` ##### Reprioritize job Reprioritize a job. ```bash bk job reprioritize <uuid> <priority> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<uuid>` | Job UUID to reprioritize | | `<priority>` | New priority value for the job | ###### Flags | Flag | Description | | --- | --- | | `-b`, `--build-number=STRING` | The build number | | `-p`, `--pipeline=STRING` | The pipeline to use. This can be a {pipeline slug} or in the format {org slug}/{pipeline slug} | | `--debug` | Enable debug output for REST API calls | ###### Examples Reprioritize a job (requires --pipeline and --build): ```bash bk job reprioritize 0190046e-e199-453b-a302-a21a4d649d31 1 -p my-pipeline -b 123 ``` If inside a git repository with a configured pipeline: ```bash bk job reprioritize 0190046e-e199-453b-a302-a21a4d649d31 1 -b 123 ``` ##### Retry a job Retry a job. ```bash bk job retry <uuid> ``` ###### Arguments | Argument | Description | | --- | --- | | `<uuid>` | Job UUID to retry | ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | ###### Examples Retry a job by UUID: ```bash bk job retry 0190046e-e199-453b-a302-a21a4d649d31 ``` ##### Unblock a job Unblock a job.
```bash bk job unblock <uuid> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<uuid>` | Job UUID to unblock | ###### Flags | Flag | Description | | --- | --- | | `--data=STRING` | JSON formatted data to unblock the job | | `--debug` | Enable debug output for REST API calls | ###### Examples Unblock a job by UUID: ```bash bk job unblock 0190046e-e199-453b-a302-a21a4d649d31 ``` Unblock with JSON data: ```bash bk job unblock 0190046e-e199-453b-a302-a21a4d649d31 --data '{"field": "value"}' ``` Unblock with data from stdin: ```bash echo '{"field": "value"}' | bk job unblock 0190046e-e199-453b-a302-a21a4d649d31 ``` --- ### maintainer URL: https://buildkite.com/docs/platform/cli/reference/maintainer #### Buildkite CLI maintainer command The `bk maintainer` command allows you to manage cluster maintainers from the command line. ##### Commands | Command | Description | | --- | --- | | `bk maintainer list` | List cluster maintainers. | | `bk maintainer create` | Create a cluster maintainer. | | `bk maintainer delete` | Delete a cluster maintainer. | ##### List maintainers List cluster maintainers. ```bash bk maintainer list <cluster-uuid> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<cluster-uuid>` | Cluster UUID to list maintainers for | ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples List all maintainers for a cluster: ```bash bk maintainer list my-cluster-uuid ``` List in JSON format: ```bash bk maintainer list my-cluster-uuid -o json ``` ##### Create a maintainer Create a cluster maintainer. ```bash bk maintainer create <cluster-uuid> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<cluster-uuid>` | Cluster UUID to add maintainer to | ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format.
One of: json, yaml, text | | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--team=STRING` | Team UUID to add as maintainer | | `--text` | Output as text | | `--user=STRING` | User UUID to add as maintainer | | `--yaml` | Output as YAML | ###### Examples Create a user maintainer assignment: ```bash bk maintainer create my-cluster-uuid --user user-uuid ``` Create a team maintainer assignment: ```bash bk maintainer create my-cluster-uuid --team team-uuid ``` ##### Delete maintainer Delete a cluster maintainer. ```bash bk maintainer delete <cluster-uuid> <maintainer-id> ``` ###### Arguments | Argument | Description | | --- | --- | | `<cluster-uuid>` | Cluster UUID | | `<maintainer-id>` | Maintainer assignment ID to delete | ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | ###### Examples Delete a maintainer assignment (with confirmation prompt): ```bash bk maintainer delete my-cluster-uuid maintainer-id ``` Delete without confirmation: ```bash bk maintainer delete my-cluster-uuid maintainer-id --yes ``` Use list to find maintainer assignment IDs: ```bash bk maintainer list my-cluster-uuid ``` --- ### organization URL: https://buildkite.com/docs/platform/cli/reference/organization #### Buildkite CLI organization command The `bk organization` command allows you to manage Buildkite organizations from the command line. ##### Commands | Command | Description | | --- | --- | | `bk organization list` | List configured organizations. | ##### List organizations List configured organizations. ```bash bk organization list [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format.
One of: json, yaml, text | | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples List all configured organizations (JSON by default): ```bash bk organization list ``` List organizations in text format: ```bash bk organization list -o text ``` --- ### package URL: https://buildkite.com/docs/platform/cli/reference/package #### Buildkite CLI package command The `bk package` command allows you to manage packages from the command line. ##### Commands | Command | Description | | --- | --- | | `bk package push` | Push a new package to a Buildkite registry | ##### Push package Push a new package to a Buildkite registry ```bash bk package push <registry-slug> [<file>] [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<registry-slug>` | The slug of the registry to push the package to | | `[<file>]` | Use '-' as value to pass package via stdin. Required if --stdin-file-name is used. | ###### Flags | Flag | Description | | --- | --- | | `-w`, `--web` | Open the pipeline in a web browser. | | `--debug` | Enable debug output for REST API calls | | `--file-path=STRING` | Path to the package file to push | | `--stdin-file-name=STRING` | The filename to use when reading the package from stdin | ###### Examples Push package from file: ```bash bk package push my-registry --file-path my-package.tar.gz ``` Push package via stdin: ```bash cat my-package.tar.gz | bk package push my-registry --stdin-file-name my-package.tar.gz - # Pass package via stdin, note hyphen as the argument ``` Add `-w` to open the build in your web browser: ```bash bk package push my-registry --file-path my-package.tar.gz -w ``` --- ### pipeline URL: https://buildkite.com/docs/platform/cli/reference/pipeline #### Buildkite CLI pipeline command The `bk pipeline` command allows you to manage pipelines from the command line. ##### Commands | Command | Description | | --- | --- | | `bk pipeline copy` | Copy an existing pipeline.
| | `bk pipeline create` | Create a new pipeline. | | `bk pipeline list` | List pipelines. | | `bk pipeline convert` | Convert a CI/CD pipeline configuration to Buildkite format. | | `bk pipeline validate` | Validate a pipeline YAML file. | | `bk pipeline view` | View a pipeline. | ##### Copy pipeline Copy an existing pipeline. ```bash bk pipeline copy [<pipeline>] [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `[<pipeline>]` | Source pipeline to copy (slug or org/slug). Uses current pipeline if not specified. | ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `-t`, `--target=STRING` | Name for the new pipeline, or org/name to copy to a different organization | | `--cluster-name=STRING` | Cluster name for the new pipeline (resolved to UUID) | | `--cluster-uuid=STRING` | Cluster UUID for the new pipeline | | `--debug` | Enable debug output for REST API calls | | `--dry-run` | Show what would be copied without creating the pipeline | | `--json` | Output as JSON | | `--org=STRING` | Organization slug | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples Copy the current pipeline to a new pipeline: ```bash bk pipeline cp --target "my-pipeline-v2" ``` Copy a specific pipeline: ```bash bk pipeline cp my-existing-pipeline --target "my-new-pipeline" ``` Copy a pipeline from another org (if you have access): ```bash bk pipeline cp other-org/their-pipeline --target "my-copy" ``` Copy to a different organization: ```bash bk pipeline cp my-pipeline --target "other-org/my-pipeline" --cluster-uuid "8302f0b-9b99-4663-23f3-2d64f88s693e" ``` Copy to a different organization using cluster name: ```bash bk pipeline cp my-pipeline --target "other-org/my-pipeline" --cluster-name "my-cluster" ``` Interactive mode - prompts for source and target: ```bash bk pipeline cp ``` Preview what would be copied without creating: ```bash bk pipeline cp my-pipeline --target "copy" --dry-run ``` Output
the new pipeline details as JSON: ```bash bk pipeline cp my-pipeline -t "new-pipeline" -o json ``` ##### Create a pipeline Create a new pipeline. ```bash bk pipeline create <name> [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `<name>` | Name of the pipeline | ###### Flags | Flag | Description | | --- | --- | | `-W`, `--create-webhook` | Create an SCM webhook for the pipeline (GitHub and GitHub Enterprise only) | | `-d`, `--description=STRING` | Description of the pipeline | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `-r`, `--repository=STRING` | Repository URL | | `--cluster-name=STRING` | Cluster name to assign the pipeline to (resolved to UUID) | | `--cluster-uuid=STRING` | Cluster UUID to assign the pipeline to | | `--debug` | Enable debug output for REST API calls | | `--dry-run` | Simulate pipeline creation without actually creating it | | `--json` | Output as JSON | | `--org=STRING` | Organization slug. | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples Create a new pipeline: ```bash bk pipeline create "My Pipeline" --description "My pipeline description" --repository "git@github.com:org/repo.git" ``` Create a new pipeline and view the created pipeline in JSON format: ```bash bk pipeline create "My Pipeline" --description "My pipeline description" --repository "git@github.com:org/repo.git" --output json ``` Create a pipeline with a cluster (by UUID): ```bash bk pipeline create "My Pipeline" -d "Description" -r "git@github.com:org/repo.git" --cluster-uuid "cluster-uuid-123" ``` Create a pipeline with a cluster (by name): ```bash bk pipeline create "My Pipeline" -d "Description" -r "git@github.com:org/repo.git" --cluster-name "my-cluster" ``` Create a pipeline and set up a GitHub webhook: ```bash bk pipeline create "My Pipeline" -d "Description" -r "git@github.com:org/repo.git" --create-webhook ``` Simulate creating a pipeline and view the output in yaml format: ```bash bk pipeline create "My
Pipeline" -d "Description" -r "git@github.com:org/repo.git" --dry-run --output yaml ``` ##### List pipelines List pipelines. ```bash bk pipeline list [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-l`, `--limit=100` | Maximum number of pipelines to return (max: 3000) | | `-n`, `--name=STRING` | Filter pipelines by name (supports partial matches, case insensitive) | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `-r`, `--repository=STRING` | Filter pipelines by repository URL (supports partial matches, case insensitive) | | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--org=STRING` | Organization slug. | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples List all pipelines (default limit: 100): ```bash bk pipeline list ``` List pipelines matching a name pattern: ```bash bk pipeline list --name pipeline ``` List pipelines by repository: ```bash bk pipeline list --repo my-repo ``` Get more pipelines (automatically paginates): ```bash bk pipeline list --limit 500 ``` Output as JSON: ```bash bk pipeline list --name pipeline -o json ``` Use with other commands (e.g., get longest builds from matching pipelines): ```bash bk pipeline list --name pipeline | xargs -I {} bk build list --pipeline {} --since 48h --duration 1h ``` ##### Convert pipeline Convert a CI/CD pipeline configuration to Buildkite format. 
```bash bk pipeline convert --file=STRING [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-F`, `--file=STRING` | Path to the pipeline file to convert (required) | | `-o`, `--output=STRING` | Custom path to save the converted pipeline (default: .buildkite/pipeline..yml) | | `-v`, `--vendor=STRING` | CI/CD vendor (auto-detected if the file name matches vendor path and name - otherwise, needs to be specified) | | `--debug` | Enable debug output for REST API calls | | `--timeout=300` | The time (in seconds) after which a conversion should be cancelled | ###### Examples Convert a GitHub Actions workflow: ```bash bk pipeline convert -F .github/workflows/ci.yml ``` Convert with explicit vendor specification: ```bash bk pipeline convert -F pipeline.yml --vendor circleci ``` Save output to a file: ```bash bk pipeline convert -F .github/workflows/ci.yml -o .buildkite/pipeline.yml ``` Read from stdin: ```bash cat .github/workflows/ci.yml | bk pipeline convert --vendor github ``` ```bash bk pipeline convert --vendor github < .github/workflows/ci.yml ``` ##### Validate a pipeline Validate a pipeline YAML file. ```bash bk pipeline validate [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-f`, `--file=FILE,...` | Path to the pipeline YAML file(s) to validate | | `--debug` | Enable debug output for REST API calls | ###### Examples Validate the default pipeline file: ```bash bk pipeline validate ``` Validate a specific pipeline file: ```bash bk pipeline validate --file path/to/pipeline.yaml ``` Validate multiple pipeline files: ```bash bk pipeline validate --file path/to/pipeline1.yaml --file path/to/pipeline2.yaml ``` ##### View a pipeline View a pipeline. ```bash bk pipeline view [<pipeline>] [flags] ``` ###### Arguments | Argument | Description | | --- | --- | | `[<pipeline>]` | The pipeline to view. This can be a {pipeline slug} or in the format {org slug}/{pipeline slug}. | ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format.
One of: json, yaml, text | | `-p`, `--pipeline=STRING` | The pipeline to view. This can be a {pipeline slug} or in the format {org slug}/{pipeline slug}. | | `-w`, `--web` | Open the pipeline in a web browser. | | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--org=STRING` | Organization slug. | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples View a pipeline: ```bash bk pipeline view my-pipeline ``` View a pipeline using flags: ```bash bk pipeline view --org my-org --pipeline my-pipeline ``` View a pipeline in a specific organization: ```bash bk pipeline view my-org/my-pipeline ``` Open pipeline in browser: ```bash bk pipeline view my-pipeline --web ``` Output as JSON: ```bash bk pipeline view my-pipeline -o json ``` --- ### secret URL: https://buildkite.com/docs/platform/cli/reference/secret #### Buildkite CLI secret command The `bk secret` command allows you to manage Buildkite secrets from the command line. ##### Commands | Command | Description | | --- | --- | | `bk secret list` | List secrets for a cluster. | | `bk secret get` | View a cluster secret. | | `bk secret create` | Create a new cluster secret. | | `bk secret update` | Update a cluster secret. | | `bk secret delete` | Delete a cluster secret. | ##### List secrets List secrets for a cluster. ```bash bk secret list --cluster-uuid=STRING [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `--cluster-uuid=STRING` | The UUID of the cluster to list secrets for | | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples List all secrets in a cluster: ```bash bk secret list --cluster-uuid my-cluster-uuid ``` List secrets in JSON format: ```bash bk secret list --cluster-uuid my-cluster-uuid -o json ``` ##### Get secret View a cluster secret. 
```bash bk secret get --cluster-uuid=STRING --secret-id=STRING [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `--cluster-uuid=STRING` | The UUID of the cluster | | `--debug` | Enable debug output for REST API calls | | `--json` | Output as JSON | | `--secret-id=STRING` | The UUID of the secret to view | | `--text` | Output as text | | `--yaml` | Output as YAML | ###### Examples View a secret: ```bash bk secret get --cluster-uuid my-cluster-uuid --secret-id my-secret-id ``` View a secret in JSON format: ```bash bk secret get --cluster-uuid my-cluster-uuid --secret-id my-secret-id -o json ``` ##### Create a secret Create a new cluster secret. ```bash bk secret create --cluster-uuid=STRING --key=STRING [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `--cluster-uuid=STRING` | The UUID of the cluster | | `--debug` | Enable debug output for REST API calls | | `--description=STRING` | A description of the secret | | `--json` | Output as JSON | | `--key=STRING` | The key name for the secret (e.g. MY_SECRET) | | `--policy=STRING` | The access policy for the secret (YAML format) | | `--text` | Output as text | | `--value=STRING` | The secret value. If not provided, you will be prompted to enter it. | | `--yaml` | Output as YAML | ###### Examples Create a secret with interactive value input: ```bash bk secret create --cluster-uuid my-cluster-uuid --key MY_SECRET ``` Create a secret with the value provided inline: ```bash bk secret create --cluster-uuid my-cluster-uuid --key MY_SECRET --value "s3cr3t" ``` Create a secret with a description: ```bash bk secret create --cluster-uuid my-cluster-uuid --key MY_SECRET --description "My secret description" ``` ##### Update secret Update a cluster secret. 
```bash bk secret update --cluster-uuid=STRING --secret-id=STRING [flags] ``` ###### Flags | Flag | Description | | --- | --- | | `-o`, `--output=""` | Output format. One of: json, yaml, text | | `--cluster-uuid=STRING` | The UUID of the cluster | | `--debug` | Enable debug output for REST API calls | | `--description=STRING` | Update the description of the secret | | `--json` | Output as JSON | | `--policy=STRING` | Update the access policy for the secret (YAML format) | | `--secret-id=STRING` | The UUID of the secret to update | | `--text` | Output as text | | `--update-value` | Prompt to update the secret value | | `--yaml` | Output as YAML | ###### Examples Update a secret's description: ```bash bk secret update --cluster-uuid my-cluster-uuid --secret-id my-secret-id --description "New description" ``` Update a secret's value: ```bash bk secret update --cluster-uuid my-cluster-uuid --secret-id my-secret-id --update-value ``` Update both description and value: ```bash bk secret update --cluster-uuid my-cluster-uuid --secret-id my-secret-id --description "New description" --update-value ``` ##### Delete secret Delete a cluster secret. ```bash bk secret delete --cluster-uuid=STRING --secret-id=STRING ``` ###### Flags | Flag | Description | | --- | --- | | `--cluster-uuid=STRING` | The UUID of the cluster | | `--debug` | Enable debug output for REST API calls | | `--secret-id=STRING` | The UUID of the secret to delete | ###### Examples Delete a secret (with confirmation prompt): ```bash bk secret delete --cluster-uuid my-cluster-uuid --secret-id my-secret-id ``` Delete a secret without confirmation: ```bash bk secret delete --cluster-uuid my-cluster-uuid --secret-id my-secret-id --yes ``` --- ### user URL: https://buildkite.com/docs/platform/cli/reference/user #### Buildkite CLI user command The `bk user` command allows you to manage users in your Buildkite organization from the command line. 
##### Commands | Command | Description | | --- | --- | | `bk user invite` | Invite users to your organization. | ##### Invite user Invite users to your organization. ```bash bk user invite ... ``` ###### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | ###### Examples Invite a single user to your organization: ```bash bk user invite bob@supercoolorg.com ``` Invite multiple users to your organization: ```bash bk user invite bob@supercoolorg.com bobs_mate@supercoolorg.com ``` --- ### version URL: https://buildkite.com/docs/platform/cli/reference/version #### Buildkite CLI version command The `bk version` command allows you to display which version of the Buildkite CLI you're using from the command line. Print the version of the CLI being used ```bash bk version [flags] ``` ##### Flags | Flag | Description | | --- | --- | | `--debug` | Enable debug output for REST API calls | --- ### Overview URL: https://buildkite.com/docs/platform/terraform-provider #### Terraform provider The [Buildkite provider for Terraform](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs) lets you manage your Buildkite organization's resources using [Terraform](https://www.terraform.io/) infrastructure-as-code workflows. With this provider, you can define and version-control your pipelines, teams, clusters, and other Buildkite resources alongside your application infrastructure. The [Buildkite Terraform Provider](https://github.com/buildkite/terraform-provider-buildkite) is an open source repository available on GitHub, is listed on the [Terraform Registry](https://registry.terraform.io/providers/buildkite/buildkite/latest), and supports Terraform 1.0 and later.
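To give a feel for what this looks like in practice, the sketch below declares a single pipeline as code. The `buildkite_pipeline` resource type and its `name` and `repository` attributes are part of the provider, but the resource label, pipeline name, and repository URL here are illustrative placeholders:

```hcl
# Illustrative only: a minimal Buildkite pipeline managed as code.
# The label "example", the pipeline name, and the repository URL
# are placeholders; substitute your own values.
resource "buildkite_pipeline" "example" {
  name       = "Example pipeline"
  repository = "git@github.com:my-org/example.git"
}
```

Running `terraform apply` against a configuration like this creates (or updates) the pipeline in your Buildkite organization, and subsequent changes to the file are applied as diffs.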
##### Managed resources Once you have met the prerequisites (see [Before you start](#before-you-start)) and have [defined the Buildkite provider for your Terraform configuration](#define-the-buildkite-provider-for-your-terraform-configuration), you can then use the Buildkite Terraform provider for the following supported resource types: - **Pipelines**: Create and configure [pipelines](/docs/pipelines/create-your-own), including their [steps](/docs/pipelines/configure/defining-steps) in a [pipeline template](/docs/pipelines/governance/templates), [repository settings](/docs/pipelines/source-control), repository webhooks (for [GitHub](/docs/pipelines/configure/defining-steps#getting-started-webhooks-for-github) or [other repository providers](/docs/pipelines/configure/defining-steps#getting-started-webhooks-for-other-repository-providers)), [team access](/docs/pipelines/security/permissions#manage-teams-and-permissions), and [schedules](/docs/pipelines/configure/workflows/scheduled-builds). See [Getting started with managing pipelines](/docs/platform/terraform-provider/getting-started-with-managing-pipelines) for more information. - **Clusters and queues**: Manage [clusters](/docs/pipelines/security/clusters), [queues](/docs/agent/queues), [agent tokens](/docs/agent/self-hosted/tokens), default queues, [cluster maintainers](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster), and [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets). See [Manage clusters and queues](/docs/platform/terraform-provider/manage-clusters-and-queues) for more information. - **Teams**: Create and manage [teams](/docs/platform/team-management/permissions) and their members. See [Manage teams](/docs/platform/terraform-provider/manage-teams) for more information. 
- **Organizations**: Configure organization-level settings (such as [two-factor authentication](/docs/platform/team-management/enforce-2fa) and [restricting API access by IP address](/docs/apis/managing-api-tokens#restricting-api-access-by-ip-address)), and [system banners](/docs/platform/team-management/system-banners). See [Manage Buildkite organizations](/docs/platform/terraform-provider/manage-buildkite-organizations) for more information. - **Test suites**: Set up [Test Engine](/docs/test-engine) test suites and manage team access. See the [`buildkite_test_suite`](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/test_suite) and [`buildkite_test_suite_team`](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/test_suite_team) resources in the Terraform provider docs for details. - **Package registries**: Manage [Package Registries](/docs/package-registries) resources. See the [`buildkite_registry`](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/registry) resource in the Terraform provider docs for details. ##### Before you start The Terraform provider requires the following Buildkite configuration values: - **API access token**: A [Buildkite API access token](/docs/apis/managing-api-tokens) (`BUILDKITE_API_TOKEN`) with `write_pipelines` and `read_pipelines` [REST API scopes and _GraphQL API access_](/docs/apis/managing-api-tokens#token-scopes) enabled. You can generate a token from your [API Access Tokens](https://buildkite.com/user/api-access-tokens) page. **Note:** You can also add the `write_suites` REST API scope to this token, although this is only required if you plan to manage [Buildkite Test Engine](/docs/test-engine) test suites using the Terraform provider. - **Buildkite organization slug**: Your Buildkite organization slug, which you can find in your Buildkite URL: `https://buildkite.com/{org slug}`.
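If you plan to follow the variable-based setup described below, a common pattern is to supply the token through the environment rather than a file. `TF_VAR_<name>` is Terraform's standard convention for populating an input variable via the environment; this sketch assumes an input variable named `buildkite_api_token`, and the token value shown is a placeholder:

```shell
# Illustrative only: export the API access token so it never lands in
# a .tf or .tfvars file. Terraform maps TF_VAR_buildkite_api_token
# onto an input variable named "buildkite_api_token".
export TF_VAR_buildkite_api_token="your-api-access-token-value"
```

With the variable exported this way, you can drop the `terraform.tfvars` file entirely and Terraform will still resolve `var.buildkite_api_token` at plan and apply time.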
##### Define the Buildkite provider for your Terraform configuration To start using the Buildkite Terraform provider to manage your pipelines in Terraform: 1. Define the Buildkite provider for your Terraform configuration, along with your Buildkite API access token configuration, as a file written in HashiCorp Configuration Language (HCL) (for example, `provider.tf`): ```hcl terraform { required_providers { buildkite = { source = "buildkite/buildkite" version = "~> 1.0" } } } provider "buildkite" { api_token = "BUILDKITE_API_TOKEN" organization = "your-buildkite-org-slug" } ``` **Warning:** Avoid storing your Buildkite API access token directly in Terraform configuration files. Use an environment variable for `BUILDKITE_API_TOKEN` or manage it through a secrets manager instead, which is the recommended approach if you're using a Buildkite pipeline to orchestrate this process. If you're running this process at the command line, and you wish to use your Terraform configuration to temporarily store your token's value for this procedure, you can do so by creating the following files, although _ensure you delete them_ at the end of this procedure: 1. Configure the following additional HCL configuration file to define a variable for your API access token (for example, `variables.tf`): ```hcl variable "buildkite_api_token" { type = string sensitive = true } ``` 1. Create the Terraform variable file to hold your API access token value (`terraform.tfvars`): ```hcl buildkite_api_token = "your-api-access-token-value" ``` 1. Change the value of `BUILDKITE_API_TOKEN` to `var.buildkite_api_token` in your `provider.tf` file. 1. Initialize the provider: ```bash terraform init ``` ##### Next steps You can now proceed to [start managing your pipelines in Terraform](/docs/platform/terraform-provider/getting-started-with-managing-pipelines). 
You can also start managing your [clusters and queues](/docs/platform/terraform-provider/manage-clusters-and-queues), [teams](/docs/platform/terraform-provider/manage-teams) and [Buildkite organization's settings](/docs/platform/terraform-provider/manage-buildkite-organizations) in Terraform too. ##### Further reference For the full list of supported resources, data sources, and their configuration options, see the [Buildkite provider documentation](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs) on the Terraform Registry. --- ### Getting started with managing pipelines URL: https://buildkite.com/docs/platform/terraform-provider/getting-started-with-managing-pipelines #### Getting started with managing pipelines The [Buildkite Terraform provider](/docs/platform/terraform-provider) supports managing [pipelines](/docs/pipelines/create-your-own), including their [steps](/docs/pipelines/configure/defining-steps), [pipeline templates](/docs/pipelines/governance/templates), [repository settings](/docs/pipelines/source-control), repository webhooks, [team access](/docs/pipelines/security/permissions#manage-teams-and-permissions), and [schedules](/docs/pipelines/configure/workflows/scheduled-builds) as Terraform resources. This page covers how to define and configure these resources in your Terraform configuration files. This process assumes that you already have the required Buildkite clusters and teams configured in your Buildkite organization, so that you can start configuring and managing your pipelines in Terraform. Before proceeding, ensure you have the following: - **Cluster name/s**: Required so that Terraform can determine which [Buildkite cluster/s](/docs/pipelines/security/clusters) your pipelines are associated with. 
- **Team name/s** (_optional_): Required only if [teams are enabled in your Buildkite organization](/docs/platform/team-management/permissions), so that Terraform can determine which teams should be granted access to your pipelines, along with each team's permissions. You can later modify the configurations you create on this page by bringing your [cluster-related](/docs/platform/terraform-provider/manage-clusters-and-queues) and [team](/docs/platform/terraform-provider/manage-teams) resources into Terraform. ##### Define your initial pipeline resources Define Buildkite pipeline resources for the pipelines in your Buildkite organization that you want to manage in Terraform, again in HCL (for example, `pipelines.tf`). In the following example, two pipelines are defined (**Frontend pipeline** and **Backend pipeline**). Both belong to the pre-existing Buildkite cluster (**Default cluster**), and a pre-existing team named **Engineering** (along with all of its members) is made the initial owner of these pipelines. The steps for both of these pipelines come from a pipeline template definition named **Standard pipeline**. The configuration settings for all pipeline-related resources in this example are accessible in the Buildkite interface through the URL path portions (appended to `https://buildkite.com/{org slug}/`), indicated in the comments of the pipeline template (**Standard pipeline**) and first pipeline (**Frontend pipeline**) of this `pipelines.tf` example.
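Since the resource bodies in this example are abbreviated, here is a hypothetical sketch of the two pipeline resources and the initial team assignment. It assumes the cluster and team data sources shown in the example; attribute names such as `cluster_id` and `access_level` should be confirmed against the provider's resource documentation, and the repository URLs are placeholders:

```hcl
# Hypothetical sketch -- attribute names (e.g. cluster_id) should be
# checked against the Buildkite Terraform provider's resource docs.
resource "buildkite_pipeline" "frontend" {
  name       = "Frontend pipeline"
  repository = "git@github.com:my-org/frontend.git"
  cluster_id = data.buildkite_cluster.default.id
}

resource "buildkite_pipeline" "backend" {
  name       = "Backend pipeline"
  repository = "git@github.com:my-org/backend.git"
  cluster_id = data.buildkite_cluster.default.id
}

# Make the Engineering team the initial owner of the frontend pipeline
# (a matching block would cover the backend pipeline).
resource "buildkite_pipeline_team" "frontend_engineering" {
  pipeline_id  = buildkite_pipeline.frontend.id
  team_id      = data.buildkite_team.engineering.id
  access_level = "MANAGE_BUILD_AND_READ"
}
```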
```hcl #### Data source for existing cluster (name) to assign pipelines to data "buildkite_cluster" "default" { name = "Default cluster" } #### Data source for existing team (name) to assign as the initial pipeline owner data "buildkite_team" "engineering" { slug = "engineering" } #### Define a reusable pipeline template (through 'pipeline-templates') resource "buildkite_pipeline_template" "standard" { name = "Standard pipeline" description = "Default step configuration for all pipelines." configuration = ... } ``` 📘 > In the pipeline examples above, the actual pipeline YAML steps for each pipeline are uploaded to Buildkite Pipelines from the `.buildkite/pipeline.yml` file in each pipeline's respective repository, which is the recommended approach for storing and managing your pipeline steps as code. > If you did want to manage some of these pipeline steps through your pipelines' `https://buildkite.com/{org slug}/{pipeline slug}/settings/steps` pages in the Buildkite interface, you'd need to include these steps in `steps` definition blocks (containing your YAML steps) of the respective pipelines in your `pipelines.tf` file. However, this approach is not recommended. ##### Add your repository provider settings Add the required `provider_settings` blocks for each pipeline definition in this file. For example, assume both pipelines are configured to build a repository in GitHub, with the **GitHub Settings** accessed through `https://buildkite.com/{org slug}/{pipeline slug}/settings/repository`. Add the following `provider_settings` blocks to each pipeline of your `pipelines.tf` file: ```hcl ... #### Define the frontend pipeline resource "buildkite_pipeline" "frontend" { ...
  # Repository (through 'frontend-pipeline/settings/repository')
  repository = "git@github.com:my-org/frontend.git"
  provider_settings = {
    trigger_mode                                  = "code"
    build_pull_requests                           = true
    skip_pull_request_builds_for_existing_commits = true
    ignore_default_branch_pull_requests           = true
    build_pull_request_ready_for_review           = true
    build_branches                                = true
    publish_commit_status                         = true
  }
  ...
}

#### Define the backend pipeline
resource "buildkite_pipeline" "backend" {
  ...
  # Repository
  repository = "git@github.com:my-org/backend.git"
  provider_settings = {
    trigger_mode                                  = "code"
    build_pull_requests                           = true
    skip_pull_request_builds_for_existing_commits = true
    ignore_default_branch_pull_requests           = true
    build_pull_request_ready_for_review           = true
    build_branches                                = true
    publish_commit_status                         = true
  }
  ...
}
```

Learn more about each available `provider_settings` configuration in the Buildkite Terraform provider's [Nested Schema for `provider_settings`](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/pipeline#nested-schema-for-provider_settings) documentation.

##### Add required repository webhooks

Add the required repository webhooks to trigger builds of these pipelines automatically (that is, when changes are pushed to these repositories). This is done using `buildkite_pipeline_webhook` resource blocks.

In this example, add the following `buildkite_pipeline_webhook` resource blocks to each pipeline of your `pipelines.tf` file, bearing in mind that the Terraform identifiers you use in these blocks (that is, `frontend` and `backend`) must match their respective `buildkite_pipeline` pipeline Terraform identifiers:

```hcl
...
#### Define the frontend pipeline
resource "buildkite_pipeline" "frontend" {
  ...
}

#### Repository webhook to trigger frontend pipeline builds automatically
resource "buildkite_pipeline_webhook" "frontend" {
  pipeline_id = buildkite_pipeline.frontend.id
  repository  = buildkite_pipeline.frontend.repository
}

#### Define the backend pipeline
resource "buildkite_pipeline" "backend" {
  ...
}

#### Repository webhook to trigger backend pipeline builds automatically
resource "buildkite_pipeline_webhook" "backend" {
  pipeline_id = buildkite_pipeline.backend.id
  repository  = buildkite_pipeline.backend.repository
}
```

Learn more about this Terraform provider resource in the [`buildkite_pipeline_webhook` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/pipeline_webhook) documentation.

##### Add any other teams to your pipelines

Add any other teams who need access to these pipelines and define their permissions on these pipelines. This is done using `buildkite_pipeline_team` resource blocks.

In this example, the pre-existing **Design team** in your Buildkite organization is granted full access to **Frontend pipeline**, which is the same level of access as the pipeline's initial owner team (**Engineering**).

To do this, add the following `buildkite_team` data source and `buildkite_pipeline_team` resource blocks for this team, and apply it to the `frontend` pipeline in your `pipelines.tf` file. Bear in mind that the Terraform identifiers for the `buildkite_pipeline` resource and `buildkite_team` data source blocks (that is, `frontend` and `design_team`, respectively) must match those you use for the `pipeline_id` and `team_id` argument values in your `buildkite_pipeline_team` resource block. Therefore, the syntax for referencing these values would be `buildkite_pipeline.frontend.id` and `data.buildkite_team.design_team.id`, respectively, where the team's `access_level` of `MANAGE_BUILD_AND_READ` grants full access to the pipeline:

```hcl
...
#### Data source for existing team (name) to assign pipeline access
data "buildkite_team" "design_team" {
  slug = "design-team"
}

...

#### Define the frontend pipeline
resource "buildkite_pipeline" "frontend" {
  ...
}

...

#### Additional team with full access to 'frontend'
resource "buildkite_pipeline_team" "design" {
  pipeline_id  = buildkite_pipeline.frontend.id
  team_id      = data.buildkite_team.design_team.id
  access_level = "MANAGE_BUILD_AND_READ"
}

#### Define the backend pipeline
resource "buildkite_pipeline" "backend" {
  ...
}

...
```

Learn more about this Terraform provider resource in the [`buildkite_pipeline_team` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/pipeline_team) documentation.

##### Add appropriate schedules to your pipelines

It might be sufficient that your pipelines are built using [repository webhooks](#add-required-repository-webhooks) only. However, you may wish to run a regular scheduled build of your pipeline, for example, to ensure the project's resources are kept up to date, with dynamically run steps that create a new pull- or merge-request with updated resources.

In this example, add a daily re-build of the **Backend pipeline** that runs at midnight on the backend project's default branch (that is, `main`, which can be accessed through `default_branch` of the pipeline's Terraform resource).

To do this, add the following `buildkite_pipeline_schedule` resource block for this schedule, and apply it to the `backend` pipeline in your `pipelines.tf` file. Bear in mind that the Terraform identifier for the `buildkite_pipeline` resource block (that is, `backend`) must match that of the `pipeline_id` argument value in your `buildkite_pipeline_schedule` resource block. Therefore, the syntax for referencing this value would be `buildkite_pipeline.backend.id`.

```hcl
...
#### Define the frontend pipeline
resource "buildkite_pipeline" "frontend" {
  ...
}

...
#### Define the backend pipeline
resource "buildkite_pipeline" "backend" {
  ...
}

...

#### Schedule a build of the 'backend' pipeline at midnight every day
resource "buildkite_pipeline_schedule" "nightly" {
  pipeline_id = buildkite_pipeline.backend.id
  label       = "Nightly build"
  cronline    = "@midnight"
  branch      = buildkite_pipeline.backend.default_branch
}
```

Learn more about this Terraform provider resource in the [`buildkite_pipeline_schedule` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/pipeline_schedule) documentation.

##### Verify your completed pipelines.tf file

Confirm that your Terraform pipeline resources configuration (`pipelines.tf`) file is now complete:

```hcl
#### Data source for existing cluster (name) to assign pipelines to
data "buildkite_cluster" "default" {
  name = "Default cluster"
}

#### Data source for existing team (name) to assign as the initial pipeline owner
data "buildkite_team" "engineering" {
  slug = "engineering"
}

#### Data source for existing team (name) to assign access to pipelines
data "buildkite_team" "design_team" {
  slug = "design-team"
}

#### Define a reusable pipeline template (through 'pipeline-templates')
resource "buildkite_pipeline_template" "standard" {
  name          = "Standard pipeline"
  description   = "Default step configuration for all pipelines."
  configuration = <<-EOT
  steps:
  - command: "buildkite-agent pipeline upload"
  EOT
}

#### Define the frontend pipeline (through 'frontend-pipeline')
resource "buildkite_pipeline" "frontend" {
  name                 = "Frontend pipeline"
  cluster_id           = data.buildkite_cluster.default.id
  default_team_id      = data.buildkite_team.engineering.id
  pipeline_template_id = buildkite_pipeline_template.standard.id

  # Repository (through 'frontend-pipeline/settings/repository')
  repository = "git@github.com:my-org/frontend.git"
  provider_settings = {
    trigger_mode                                  = "code"
    build_pull_requests                           = true
    skip_pull_request_builds_for_existing_commits = true
    ignore_default_branch_pull_requests           = true
    build_pull_request_ready_for_review           = true
    build_branches                                = true
    publish_commit_status                         = true
  }
}

#### Repository webhook to trigger frontend pipeline builds automatically
resource "buildkite_pipeline_webhook" "frontend" {
  pipeline_id = buildkite_pipeline.frontend.id
  repository  = buildkite_pipeline.frontend.repository
}

#### Additional team with full access to 'frontend'
resource "buildkite_pipeline_team" "design" {
  pipeline_id  = buildkite_pipeline.frontend.id
  team_id      = data.buildkite_team.design_team.id
  access_level = "MANAGE_BUILD_AND_READ"
}

#### Define the backend pipeline
resource "buildkite_pipeline" "backend" {
  name                 = "Backend pipeline"
  cluster_id           = data.buildkite_cluster.default.id
  default_team_id      = data.buildkite_team.engineering.id
  pipeline_template_id = buildkite_pipeline_template.standard.id

  # Repository
  repository = "git@github.com:my-org/backend.git"
  provider_settings = {
    trigger_mode                                  = "code"
    build_pull_requests                           = true
    skip_pull_request_builds_for_existing_commits = true
    ignore_default_branch_pull_requests           = true
    build_pull_request_ready_for_review           = true
    build_branches                                = true
    publish_commit_status                         = true
  }
}

#### Repository webhook to trigger backend pipeline builds automatically
resource "buildkite_pipeline_webhook" "backend" {
  pipeline_id = buildkite_pipeline.backend.id
  repository  = buildkite_pipeline.backend.repository
}

#### Schedule a build of the 'backend' pipeline at midnight every day
resource "buildkite_pipeline_schedule" "nightly" {
  pipeline_id = buildkite_pipeline.backend.id
  label       = "Nightly build"
  cronline    = "@midnight"
  branch      = buildkite_pipeline.backend.default_branch
}
```

> 🚧
> - To guard against accidental pipeline deletion, access your Buildkite organization's **Security** > **Pipelines** tab, and clear the **Delete Pipelines** checkbox.
> - If you're a Buildkite customer on the [Enterprise](https://buildkite.com/pricing) plan, create a child Buildkite organization to test your Terraform configuration first before applying it to production.
##### Applying the configuration

Once your `pipelines.tf` file is completed (including `clusters.tf`, `teams.tf`, and `organization.tf`, if you've configured these too), you can apply all of these configurations to your [configured Buildkite organization](/docs/platform/terraform-provider#define-the-buildkite-provider-for-your-terraform-configuration):

```bash
terraform plan
terraform apply
```

Terraform will apply all the resources you've configured in all of your `.tf` files to your Buildkite organization.

> 📘 Managing secrets and improving maintainability
> Once you have securely stored your secrets' values and Terraform has successfully applied these configurations to your Buildkite organization, delete your Terraform variable file `terraform.tfvars`, which has been temporarily storing these values, such as those of your [API access token](/docs/platform/terraform-provider#before-you-start) (and, if applicable, your [agent token](/docs/platform/terraform-provider/manage-clusters-and-queues#define-your-agent-tokens)).
> You can maintain a copy of these `.tf` files in source control, should you wish to reapply these pipelines and other resources to the same or any other Buildkite organization again in future, bearing in mind that you'll need to manually keep any configuration changes you make to these pipelines through the Buildkite interface or APIs in sync with your `pipelines.tf` (and any other `.tf`) files.
> To improve maintainability, however, you can import your existing pipeline configurations from the Buildkite platform into Terraform, which will account for almost all current updates made to these pipeline configurations. See [Import existing Buildkite resources to Terraform](/docs/platform/terraform-provider/import-existing-resources) for details.
> For greater visibility across your organization, it is strongly recommended that you create a Buildkite pipeline to manage the application of your Buildkite organization's resources from Terraform to your Buildkite organization itself.
To do this, manage your Terraform Buildkite resources in source control, store your secrets in a secrets manager, and, to access their values, use a secrets manager resource within your Terraform configuration, such as [AWS Secrets Manager](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret_version) or [HashiCorp Vault](https://registry.terraform.io/providers/hashicorp/vault/latest/docs/resources/generic_secret).

##### Further reference

For the full list of supported resources, data sources, and their configuration options, see the [Buildkite provider documentation](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs) on the Terraform Registry.

---

### Manage clusters and queues

URL: https://buildkite.com/docs/platform/terraform-provider/manage-clusters-and-queues

#### Manage clusters and queues

The [Buildkite Terraform provider](/docs/platform/terraform-provider) supports managing [clusters](/docs/pipelines/security/clusters), [queues](/docs/agent/queues), [agent tokens](/docs/agent/self-hosted/tokens), default queues, [cluster maintainers](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster), and [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) as Terraform resources. This page covers how to define and configure these resources in your Terraform configuration files.

##### Define your cluster resources

Define resources for the [clusters](/docs/pipelines/security/clusters) in your Buildkite organization that you want to manage in Terraform, in HCL (for example, `clusters.tf`).

The `buildkite_cluster` resource is used to define, create, and manage clusters. Each cluster requires a `name` argument and can optionally include `description`, `emoji`, and `color` arguments.

In the following example, the **Primary cluster** will be created with `terraform plan` and `terraform apply`.
```hcl
resource "buildkite_cluster" "primary" {
  name        = "Primary cluster"
  description = "Runs monolith builds and deployments."
  emoji       = ":rocket:"
  color       = "#BADA55"
}
```

The optional arguments for each cluster are:

- `description`: A description for the cluster that helps identify its purpose, such as its usage or region.
- `emoji`: An emoji to display with the cluster, set using either `:buildkite:` notation or the emoji character itself (for example, 🚀).
- `color`: A color for the cluster, specified as a hex code (for example, `#BADA55`).

If you don't have a pre-existing cluster in your Buildkite organization but want to associate a pipeline in your [pipeline resources (`pipelines.tf` file)](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#define-your-initial-pipeline-resources) with a new cluster managed by the Terraform provider, you can define the new cluster in your cluster resources (`clusters.tf`) file and reference it from the pipeline resource's `cluster_id` argument.

Following on from the [pipeline resources example](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#define-your-initial-pipeline-resources), if you wanted to make the **Frontend pipeline** use this **Primary cluster** instead of **Default cluster**, you would change this pipeline resource's `cluster_id` argument's value to `buildkite_cluster.primary.id`. Furthermore, if no pipelines under Terraform management use **Default cluster**, you could remove its data source from your pipeline resources `pipelines.tf` file.

Learn more about this resource in the [`buildkite_cluster` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/cluster) documentation.

##### Define your queue resources

Define resources for the [queues](/docs/agent/queues) of Buildkite [clusters](#define-your-cluster-resources) that you want to manage in Terraform, within your cluster resources HCL file (for example, `clusters.tf`).
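For example, the change described above might look like the following sketch in `pipelines.tf` (this assumes the pipeline and the **Primary cluster** are managed in the same Terraform configuration, with other pipeline arguments elided):

```hcl
#### Define the frontend pipeline, attached to the Terraform-managed cluster
resource "buildkite_pipeline" "frontend" {
  name       = "Frontend pipeline"
  repository = "git@github.com:my-org/frontend.git"
  # Reference the Terraform-managed cluster resource directly
  # (no data source is required for a resource in the same configuration)
  cluster_id = buildkite_cluster.primary.id
  ...
}
```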
The `buildkite_cluster_queue` resource is used to define, create, and manage queues within a cluster. Each queue requires a `cluster_id` and a `key` argument to uniquely identify the queue, and can optionally include a `description` argument.

Learn more about this resource in the [`buildkite_cluster_queue` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/cluster_queue) documentation.

###### Self-hosted queues

If your Buildkite organization uses [self-hosted agents](/docs/agent/self-hosted), you can configure [self-hosted queues](/docs/agent/queues/managing#create-a-self-hosted-queue) for these agents.

In the following example, the [**Primary cluster**](#define-your-cluster-resources)'s **default** and **deployment** queues will be created with `terraform plan` and `terraform apply`.

```hcl
resource "buildkite_cluster_queue" "default" {
  cluster_id = buildkite_cluster.primary.id
  key        = "default"
}

resource "buildkite_cluster_queue" "deployment" {
  cluster_id  = buildkite_cluster.primary.id
  key         = "deployment"
  description = "Queue for deployment jobs."
}
```

You can also optionally set the following arguments for self-hosted queues:

- `dispatch_paused` with a value of `true` to pause job dispatch on the queue after creation. This is useful when you want to set up agents before the queue starts accepting jobs. See [Pause and resume an agent](/docs/agent/self-hosted/pausing-and-resuming) for more information about this feature.
- `retry_agent_affinity` with a value of `prefer-warmest` (default) to prefer agents that recently finished jobs, or `prefer-different` to prefer a different agent on retry. See [Retry agent affinity](/docs/agent/self-hosted/prioritization#retry-agent-affinity) for more information about this feature.
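As a sketch, a self-hosted queue combining these optional arguments might look like the following (the queue's key and description here are illustrative):

```hcl
#### A queue that starts paused and prefers a different agent on retry
resource "buildkite_cluster_queue" "release" {
  cluster_id           = buildkite_cluster.primary.id
  key                  = "release"
  description          = "Release queue, paused until its agents are set up."
  dispatch_paused      = true
  retry_agent_affinity = "prefer-different"
}
```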
###### Buildkite hosted queues

If your Buildkite organization uses [Buildkite hosted agents](/docs/agent/buildkite-hosted), you can configure [Buildkite hosted queues](/docs/agent/queues/managing#create-a-buildkite-hosted-queue) for these agents by including the `hosted_agents` attribute with an `instance_shape` value.

###### Linux hosted agents

In the following example, the [**Primary cluster**](#define-your-cluster-resources)'s **hosted-linux** queue for a [Linux hosted agent](/docs/agent/buildkite-hosted/linux) will be created with `terraform plan` and `terraform apply`.

```hcl
resource "buildkite_cluster_queue" "hosted_linux" {
  cluster_id = buildkite_cluster.primary.id
  key        = "hosted-linux"
  hosted_agents = {
    instance_shape = "LINUX_AMD64_2X4"
    linux = {
      agent_image_ref = "ubuntu:24.04"
    }
  }
}
```

When defining Buildkite hosted queues for Linux hosted agents:

- See the [Sizes section of Linux hosted agents](/docs/agent/buildkite-hosted/linux#sizes) for the available `instance_shape` argument values.
- The optional `linux` argument and its required `agent_image_ref` value relate to the [custom image feature](/docs/agent/buildkite-hosted/linux/custom-agent-images#use-an-agent-image-specify-a-custom-image-for-a-queue) for this queue.

###### macOS hosted agents

In the following example, the [**Primary cluster**](#define-your-cluster-resources)'s **hosted-macos** queue for a [macOS hosted agent](/docs/agent/buildkite-hosted/macos) will be created with `terraform plan` and `terraform apply`.

```hcl
resource "buildkite_cluster_queue" "hosted_macos" {
  cluster_id = buildkite_cluster.primary.id
  key        = "hosted-macos"
  hosted_agents = {
    instance_shape = "MACOS_ARM64_M4_6X28"
    mac = {
      xcode_version = "16.2"
    }
  }
}
```

When defining Buildkite hosted queues for macOS hosted agents:

- See the [Sizes section of macOS hosted agents](/docs/agent/buildkite-hosted/macos#sizes) for the available `instance_shape` argument values.
- The optional `mac` argument and its required `xcode_version` value relate to the experimental feature to select macOS agents based on the [Xcode version](/docs/agent/buildkite-hosted/macos#macos-instance-software-support) they support.

##### Define your default queue resources

For each of your Buildkite [clusters](#define-your-cluster-resources) (managed in Terraform) with more than one queue, define the default queue as a resource (one for each of these clusters) within your cluster resources HCL file (for example, `clusters.tf`).

Use the `buildkite_cluster_default_queue` resource to determine which queue (referenced by its `queue_id` argument) in a cluster (referenced by `cluster_id`) receives jobs whose pipeline steps don't specify a queue.

In the following example, the [**Primary cluster**](#define-your-cluster-resources)'s [self-hosted queue with the key **default**](#define-your-queue-resources-self-hosted-queues) will be made the default queue with `terraform plan` and `terraform apply`.

```hcl
resource "buildkite_cluster_default_queue" "primary" {
  cluster_id = buildkite_cluster.primary.id
  queue_id   = buildkite_cluster_queue.default.id
}
```

Learn more about this resource in the [`buildkite_cluster_default_queue` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/cluster_default_queue) documentation.

##### Define your agent tokens

For each of your Buildkite [clusters](#define-your-cluster-resources) managed in Terraform, define and create an [agent token](/docs/agent/self-hosted/tokens), with at least one for each of these clusters with [self-hosted agents](/docs/agent/self-hosted), within your cluster resources HCL file (for example, `clusters.tf`).

Use the `buildkite_cluster_agent_token` resource to define, create, and manage an agent token (named by its `description` argument) that a self-hosted agent uses to connect to a cluster (referenced by `cluster_id`).
In the following example, the [**Primary cluster**](#define-your-cluster-resources)'s **Default agent token** will be created with `terraform plan` and `terraform apply`.

```hcl
resource "buildkite_cluster_agent_token" "default" {
  description = "Default agent token"
  cluster_id  = buildkite_cluster.primary.id
}
```

You can optionally restrict which IP addresses are allowed to use a token by specifying `allowed_ip_addresses` with a list of CIDR-notation IPv4 addresses:

```hcl
resource "buildkite_cluster_agent_token" "restricted" {
  description          = "Token restricted to internal network"
  cluster_id           = buildkite_cluster.primary.id
  allowed_ip_addresses = ["192.0.2.0/24"]
}
```

The generated agent token value is stored in Terraform state and can be accessed through the resource's `token` attribute. To retrieve this value, you can either:

- Define a sensitive [Terraform output](https://developer.hashicorp.com/terraform/language/values/outputs), using the **Default agent token** example above:

    ```hcl
    output "agent_token" {
      value     = buildkite_cluster_agent_token.default.token
      sensitive = true
    }
    ```

    and retrieve the agent token's value from the command line:

    ```bash
    terraform output -raw agent_token
    ```

- Pass the agent token's value directly to a secrets manager resource within your Terraform configuration, such as [AWS Secrets Manager](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret_version) or [HashiCorp Vault](https://registry.terraform.io/providers/hashicorp/vault/latest/docs/resources/generic_secret).

Learn more about this resource in the [`buildkite_cluster_agent_token` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/cluster_agent_token) documentation.
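The second option can be sketched as follows. This is a hypothetical example that assumes you also have the AWS provider configured in the same Terraform configuration; the secret name is illustrative:

```hcl
#### Hypothetical: store the generated agent token in AWS Secrets Manager
resource "aws_secretsmanager_secret" "agent_token" {
  name = "buildkite/primary-cluster/agent-token"
}

#### Write the token value as the secret's current version
resource "aws_secretsmanager_secret_version" "agent_token" {
  secret_id     = aws_secretsmanager_secret.agent_token.id
  secret_string = buildkite_cluster_agent_token.default.token
}
```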
##### Define cluster maintainers

For each of your Buildkite [clusters](#define-your-cluster-resources) managed in Terraform, define a [cluster maintainer](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster), aiming for at least one for each of these clusters, within your cluster resources HCL file (for example, `clusters.tf`). Otherwise, a cluster with no cluster maintainers can only be administered by a Buildkite organization administrator.

Use the `buildkite_cluster_maintainer` resource to grant users or teams permission to manage a cluster (referenced by its `cluster_uuid` argument). Specify either a Buildkite [user (referenced by `user_uuid`)](#define-cluster-maintainers-obtain-a-user-uuid) or [team (referenced by `team_uuid`)](#define-cluster-maintainers-obtain-a-team-uuid), but not both.

In the following example, the Buildkite team with UUID `01234567-89ab-cdef-0123-456789abcdef` will be made a maintainer of the [**Primary cluster**](#define-your-cluster-resources), with `terraform plan` and `terraform apply`.

```hcl
#### Add a team as a cluster maintainer
resource "buildkite_cluster_maintainer" "platform_team" {
  cluster_uuid = buildkite_cluster.primary.uuid
  team_uuid    = "01234567-89ab-cdef-0123-456789abcdef"
}
```

Learn more about this resource in the [`buildkite_cluster_maintainer` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/cluster_maintainer) documentation.
###### Obtain a user UUID

To find the `user_uuid` for use in a `buildkite_cluster_maintainer` resource, run the following [GraphQL](/docs/apis/graphql) query, replacing `your-buildkite-org-slug` with your Buildkite organization's slug:

```graphql
query {
  organization(slug: "your-buildkite-org-slug") {
    members(first: 100) {
      edges {
        node {
          user {
            name
            uuid
          }
        }
      }
    }
  }
}
```

###### Obtain a team UUID

To find the `team_uuid` for use in a `buildkite_cluster_maintainer` resource, run the following [GraphQL](/docs/apis/graphql) query, replacing `your-buildkite-org-slug` with your Buildkite organization's slug:

```graphql
query {
  organization(slug: "your-buildkite-org-slug") {
    teams(first: 100) {
      edges {
        node {
          name
          uuid
        }
      }
    }
  }
}
```

For more GraphQL queries related to teams, see the [Teams cookbook](/docs/apis/graphql/cookbooks/teams).

##### Define Buildkite secrets

For each of your Buildkite [clusters](#define-your-cluster-resources) managed in Terraform, define [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) for the pipelines that require them, within your cluster resources HCL file (for example, `clusters.tf`).

Use the `buildkite_cluster_secret` resource to define, create, and manage an encrypted key-value pair accessible by agents within a [Buildkite cluster](/docs/pipelines/security/clusters) (referenced by its `cluster_id` argument, which actually requires a cluster UUID value). This resource requires the following arguments:

- `key`: This value is what you use to reference this secret from within your pipeline configurations. See [Create a secret](/docs/pipelines/security/secrets/buildkite-secrets#create-a-secret) for more information.
- `value`: The secret's actual value.
You could also implement the secret's value in a temporary `terraform.tfvars` file and define its variable in `variables.tf`, similar to your Buildkite API access token when [defining the Buildkite provider for your Terraform configuration](/docs/platform/terraform-provider#define-the-buildkite-provider-for-your-terraform-configuration).

This resource also accepts the following optional arguments:

- `description`: The secret's description, which appears just under the secret's key value on the main **Secrets** page.
- `policy`: The access policy for the Buildkite secret. Use this argument to define an access policy in YAML, to control which pipelines and branches can access the secret. See [Access policies for Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets/access-policies) for more information.

In the following example, the [**Primary cluster**](#define-your-cluster-resources)'s `DATABASE_PASSWORD` Buildkite secret (with description **Production database password**) will be created with `terraform plan` and `terraform apply`, where this secret can only be used by the `backend` pipeline on the `main` branch of its repository.

```hcl
resource "buildkite_cluster_secret" "database_password" {
  cluster_id  = buildkite_cluster.primary.uuid
  key         = "DATABASE_PASSWORD"
  value       = var.database_password
  description = "Production database password"
  policy      = <<-EOT
  - pipeline_slug: backend-pipeline
    build_branch: main
  EOT
}
```

> 🚧 Secret values are write-only to the Buildkite platform
> Secret values cannot be retrieved using the Buildkite API. If you import an existing Buildkite secret resource to Terraform, you must manually set its `value` attribute in your configuration to match the actual secret value, as Terraform will not be able to read this value from the Buildkite API.

Learn more about this resource in the [`buildkite_cluster_secret` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/cluster_secret) documentation.
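The `var.database_password` value used above needs a matching variable declaration. A minimal sketch of the corresponding `variables.tf` entry (the variable name matches the example; marking it `sensitive` keeps its value out of plan output):

```hcl
#### In variables.tf: declare the input variable holding the secret's value
variable "database_password" {
  type      = string
  sensitive = true
}
```

You can then supply the value through a temporary `terraform.tfvars` file or the `TF_VAR_database_password` environment variable.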
##### Verify your completed clusters.tf file configuration

The following example shows a complete cluster configuration with a single Buildkite cluster, two self-hosted queues (including a default), an agent token, a team maintainer, and a Buildkite secret:

```hcl
#### Define the 'primary' cluster
resource "buildkite_cluster" "primary" {
  name        = "Primary cluster"
  description = "Runs monolith builds and deployments."
  emoji       = ":rocket:"
  color       = "#BADA55"
}

#### Define its self-hosted queues
resource "buildkite_cluster_queue" "default" {
  cluster_id = buildkite_cluster.primary.id
  key        = "default"
}

resource "buildkite_cluster_queue" "deployment" {
  cluster_id  = buildkite_cluster.primary.id
  key         = "deployment"
  description = "Queue for deployment jobs."
}

#### Set the default queue
resource "buildkite_cluster_default_queue" "primary" {
  cluster_id = buildkite_cluster.primary.id
  queue_id   = buildkite_cluster_queue.default.id
}

#### Create an agent token
resource "buildkite_cluster_agent_token" "default" {
  description = "Default agent token"
  cluster_id  = buildkite_cluster.primary.id
}

#### Add a team as a cluster maintainer
resource "buildkite_cluster_maintainer" "platform_team" {
  cluster_uuid = buildkite_cluster.primary.uuid
  team_uuid    = "01234567-89ab-cdef-0123-456789abcdef"
}

#### Define a cluster secret
resource "buildkite_cluster_secret" "database_password" {
  cluster_id  = buildkite_cluster.primary.uuid
  key         = "DATABASE_PASSWORD"
  value       = var.database_password
  description = "Production database password"
  policy      = <<-EOT
  - pipeline_slug: backend-pipeline
    build_branch: main
  EOT
}
```

##### Applying the configuration

Once your `clusters.tf` file is complete, it is ready to be [applied to your Buildkite organization](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#applying-the-configuration).
##### Further reference

For the full list of cluster and queue resources, data sources, and their configuration options, see the [Buildkite provider documentation](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs) on the Terraform Registry.

---

### Manage teams

URL: https://buildkite.com/docs/platform/terraform-provider/manage-teams

#### Manage teams

The [Buildkite Terraform provider](/docs/platform/terraform-provider) supports managing [teams](/docs/platform/team-management/permissions) and their members as Terraform resources. This page covers how to define and configure these resources in your Terraform configuration files.

##### Define your team resources

Define resources for the [teams](/docs/platform/team-management/permissions) in your Buildkite organization that you want to manage in Terraform, in HCL (for example, `teams.tf`).

The `buildkite_team` resource is used to create and manage teams. Each team requires a `name`, `privacy`, `default_team`, and `default_member_role` argument, and can optionally include a `description`.

In the following example, the **Platform** and **Frontend** teams will be created with `terraform plan` and `terraform apply`.

```hcl
#### Define the platform team
resource "buildkite_team" "platform" {
  name                         = "Platform"
  description                  = "Platform team responsible for infrastructure."
  privacy                      = "VISIBLE"
  default_team                 = false
  default_member_role          = "MEMBER"
  members_can_create_pipelines = true
}

#### Define the frontend team
resource "buildkite_team" "frontend" {
  name                = "Frontend"
  description         = "Frontend team responsible for frontend development projects."
  privacy             = "VISIBLE"
  default_team        = false
  default_member_role = "MEMBER"
}
```

The required arguments for each team are:

- `name`: The name of the team.
- `privacy`: The visibility of the team, either `VISIBLE` (all organization members can see the team) or `SECRET` (only team members and organization administrators can see the team).
- `default_team`: Makes this team the default for new Buildkite organization members. Set to `true` to automatically add new users to this team, or `false` otherwise.
- `default_member_role`: The role assigned to new team members, either `MEMBER` or `MAINTAINER`.

You can also optionally set the following arguments to control what team members can do:

- `members_can_create_pipelines` with a value of `true` to allow team members to create pipelines.
- `members_can_create_suites` with a value of `true` to allow team members to create test suites.
- `members_can_create_registries` with a value of `true` to allow team members to create package registries.
- `members_can_destroy_registries` with a value of `true` to allow team members to destroy package registries.
- `members_can_destroy_packages` with a value of `true` to allow team members to destroy packages.

If you don't have any pre-existing teams in your Buildkite organization but want to associate a pipeline in your [pipeline resources (`pipelines.tf` file)](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#define-your-initial-pipeline-resources) with new teams managed by the Terraform provider, you can define the new teams in your team resources (`teams.tf`) file and reference them from the pipeline resource's `default_team_id` argument, along with [any other teams to add to your pipelines](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#add-any-other-teams-to-your-pipelines).

Following on from the [pipeline resources example](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#define-your-initial-pipeline-resources), if you wanted to make the **Frontend** team the initial owner of **Frontend pipeline** instead of **Engineering**, you would change this pipeline resource's `default_team_id` argument's value to `buildkite_team.frontend.id`.
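The ownership change described above can be sketched as follows in `pipelines.tf` (this assumes the teams and pipelines are managed in the same Terraform configuration, with other pipeline arguments elided):

```hcl
#### Make the Terraform-managed 'frontend' team the pipeline's initial owner
resource "buildkite_pipeline" "frontend" {
  name            = "Frontend pipeline"
  repository      = "git@github.com:my-org/frontend.git"
  # Reference the Terraform-managed team resource directly
  # (no data source is required for a resource in the same configuration)
  default_team_id = buildkite_team.frontend.id
  ...
}
```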
If no pipelines under Terraform management are accessed by the **Engineering** team, you could remove its data source from your pipeline resources `pipelines.tf` file. You can also add more teams to pipelines—see [Add any other teams to your pipelines](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#add-any-other-teams-to-your-pipelines) for details.

Learn more about this resource in the [`buildkite_team` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/team) documentation.

##### Add members to a team

Use the `buildkite_team_member` resource to add existing organization users to a team. Each team member requires a `team_id`, `user_id`, and `role`.

The `user_id` is the GraphQL ID of the user, which you can obtain using the following GraphQL query:

```graphql
query {
  organization(slug: "your-buildkite-org-slug") {
    members(first: 100) {
      edges {
        node {
          user {
            id
            name
            email
          }
        }
      }
    }
  }
}
```

In the following example, two users are added to the **Frontend** team defined above:

```hcl
#### Add a user as a team member
resource "buildkite_team_member" "alice" {
  team_id = buildkite_team.frontend.id
  user_id = "user-graphql-id-for-alice"
  role    = "MEMBER"
}

#### Add a user as a team maintainer
resource "buildkite_team_member" "bob" {
  team_id = buildkite_team.frontend.id
  user_id = "user-graphql-id-for-bob"
  role    = "MAINTAINER"
}
```

The `role` argument can be either `MEMBER` (standard team member) or `MAINTAINER` (can manage team settings and membership).

Learn more about this resource in the [`buildkite_team_member` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/team_member) documentation.
##### Verify your completed configuration

The following example shows a complete team configuration with two teams, member permissions, and team membership:

```hcl
# Define the platform team
resource "buildkite_team" "platform" {
  name                         = "Platform"
  description                  = "Platform team responsible for infrastructure."
  privacy                      = "VISIBLE"
  default_team                 = false
  default_member_role          = "MEMBER"
  members_can_create_pipelines = true
}

# Define the frontend team
resource "buildkite_team" "frontend" {
  name                = "Frontend"
  description         = "Frontend team responsible for frontend development projects."
  privacy             = "VISIBLE"
  default_team        = false
  default_member_role = "MEMBER"
}

# Add members to the frontend team
resource "buildkite_team_member" "alice" {
  team_id = buildkite_team.frontend.id
  user_id = "user-graphql-id-for-alice"
  role    = "MEMBER"
}

resource "buildkite_team_member" "bob" {
  team_id = buildkite_team.frontend.id
  user_id = "user-graphql-id-for-bob"
  role    = "MAINTAINER"
}

# Add a member to the platform team
resource "buildkite_team_member" "charlie" {
  team_id = buildkite_team.platform.id
  user_id = "user-graphql-id-for-charlie"
  role    = "MEMBER"
}
```

##### Applying the configuration

Once your `teams.tf` file is complete, it is ready to be [applied to your Buildkite organization](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#applying-the-configuration).

##### Further reference

For the full list of team resources, data sources, and their configuration options, see the [Buildkite provider documentation](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs) on the Terraform Registry.
---

### Manage Buildkite organizations

URL: https://buildkite.com/docs/platform/terraform-provider/manage-buildkite-organizations

#### Manage Buildkite organizations

The [Buildkite Terraform provider](/docs/platform/terraform-provider) supports managing Buildkite organization-level settings and [system banners](/docs/platform/team-management/system-banners) as Terraform resources. This page covers how to define and configure these resources in your Terraform configuration files.

> 📘
> The user of your [API access token](/docs/platform/terraform-provider#before-you-start) must be a Buildkite organization administrator to manage organization settings.

##### Configure Buildkite organization settings

Define a resource for the Buildkite organization settings you want to manage in Terraform, in HCL (for example, `organization.tf`). These settings include [enforcing two-factor authentication (2FA)](/docs/platform/team-management/enforce-2fa), [restricting API access to specific IP addresses](/docs/apis/managing-api-tokens#restricting-api-access-by-ip-address), or both.

The `buildkite_organization` resource is used to manage these organization-level settings for 2FA (referenced by the `enforce_2fa` argument) and restricting API access to specific IP addresses (referenced by `allowed_api_ip_addresses`). In the following example, applied with `terraform plan` and `terraform apply`, 2FA is enforced for all organization members and API access is restricted to the range of IP addresses from `192.0.2.0` to `192.0.2.255`.

```hcl
resource "buildkite_organization" "settings" {
  enforce_2fa              = true
  allowed_api_ip_addresses = ["192.0.2.0/24"]
}
```

The optional arguments for this resource are:

- `enforce_2fa` with a value of `true` to require [two-factor authentication](/docs/platform/team-management/enforce-2fa) for all organization members.
- `allowed_api_ip_addresses` with a list of individual IP addresses, ranges of IP addresses in [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), or a combination of both, to restrict which IP addresses can access the Buildkite API for your organization.

**Note:** [Restricting API access by IP address](/docs/apis/managing-api-tokens#restricting-api-access-by-ip-address) is only available to Buildkite customers on the [Enterprise](https://buildkite.com/pricing) plan.

Learn more about this resource in the [`buildkite_organization` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/organization) documentation.

##### Define a system banner

A system banner (also known as an organization banner) is not typically managed in Terraform, and is usually [configured through the Buildkite interface](/docs/platform/team-management/system-banners). The system banner is displayed to all members of your organization, at the top of each page in the Buildkite interface.

> 📘 Enterprise plan feature
> System banners are only available to Buildkite customers on the [Enterprise](https://buildkite.com/pricing) plan.

If you do want to manage a system banner in Terraform, define a resource for the system banner within your organization resources HCL file (for example, `organization.tf`). The `buildkite_organization_banner` resource is used to create and manage a system banner, whose `message` argument contains the Markdown content for this banner. In the following example, a maintenance notification banner will be created with `terraform plan` and `terraform apply`.

```hcl
resource "buildkite_organization_banner" "maintenance" {
  message = "Scheduled maintenance this Saturday 02:00–04:00 UTC."
}
```

Learn more about this resource in the [`buildkite_organization_banner` resource](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/organization_banner) documentation.
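Because the `message` argument takes Markdown, a longer banner can be written with an HCL heredoc string. The following is a sketch only; the resource name and banner wording are illustrative, not taken from this guide:

```hcl
resource "buildkite_organization_banner" "maintenance_window" {
  # Multi-line Markdown content via an HCL heredoc (illustrative wording).
  message = <<-EOT
    **Scheduled maintenance** this Saturday 02:00-04:00 UTC.
    Builds may queue during this window.
  EOT
}
```

As with the single-line example, the change is applied with `terraform plan` and `terraform apply`.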
##### Applying the configuration

Once your `organization.tf` file is complete, it is ready to be [applied to your Buildkite organization](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#applying-the-configuration).

##### Further reference

For the full list of organization resources, data sources, and their configuration options, see the [Buildkite provider documentation](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs) on the Terraform Registry.

---

### Import existing Buildkite resources to Terraform

URL: https://buildkite.com/docs/platform/terraform-provider/import-existing-resources

#### Import existing Buildkite resources to Terraform

##### Import existing pipeline resources

You can bring the resources for your existing Buildkite pipelines under Terraform management by defining a series of [import blocks](https://developer.hashicorp.com/terraform/language/import) for these resources in a single file (for example, `pipeline-imports.tf`), and then using `terraform plan` on this file to generate a single `pipelines.tf` file containing the configurations for all of these pipelines. This is the same as _exporting_ your pipeline resources from Buildkite Pipelines to Terraform.

All Buildkite Pipelines resources in these import blocks are defined using their GraphQL IDs in your Buildkite organization.

To import existing pipelines to Terraform:

1. Get the GraphQL IDs for all the pipelines you want to import to Terraform, for example:

   ```graphql
   query {
     organization(slug: "your-buildkite-org-slug") {
       pipelines(first: 100) {
         edges {
           node {
             id
             name
             slug
           }
         }
       }
     }
   }
   ```

1. Create a `pipeline-imports.tf` file with a set of `import` blocks, one for each pipeline you want to manage in Terraform.
Within each `import` block, define a `to` argument, whose value after `buildkite_pipeline.` is the Terraform identifier for the pipeline, and an `id` argument, whose value is the pipeline's GraphQL ID obtained from the query above.

   ```hcl
   import {
     to = buildkite_pipeline.frontend
     id = "graphql-id-for-this-pipeline"
   }

   import {
     to = buildkite_pipeline.backend
     id = "graphql-id-for-this-pipeline"
   }

   import {
     to = buildkite_pipeline.another_pipeline
     id = "graphql-id-for-this-pipeline"
   }
   ```

1. Next, generate the Terraform configuration file (`pipelines.tf`):

   ```bash
   terraform plan -generate-config-out=pipelines.tf
   ```

   **Note:** The generated `pipelines.tf` file will have many of the arguments and values for each pipeline resource (`resource "buildkite_pipeline"`) that you would have set if you'd [defined this file manually](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#define-your-initial-pipeline-resources). However, some of these arguments' values are not imported to the generated file and others may need modification. See [Finalizing your `pipelines.tf` configurations](#finalize-your-pipelines-dot-tf-configurations) for more information.

1. Delete the `pipeline-imports.tf` file you created earlier. If you are [finalizing your `pipelines.tf` file](#finalize-your-pipelines-dot-tf-configurations), deleting this import file is recommended to avoid accidentally running `terraform plan ...` again, which could overwrite your updates to this file.

1. Once you are satisfied with your `pipelines.tf` file, commit it to source control.

##### Finalize your pipelines.tf configurations

If you [imported existing pipeline resources to Terraform](#import-existing-pipeline-resources), there are some differences in the resulting `pipelines.tf` file, compared to ones you would [prepare manually](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#define-your-initial-pipeline-resources).
###### Add missing arguments

The `pipelines.tf` file generated using `terraform plan ...` does not include the following arguments:

- **The repository's `provider_settings`**: To include these settings, for each pipeline resource, replace its `provider_settings` argument's `null` value with a map of keys, similar to those in the [manually defined examples](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#add-your-repository-provider-settings). See the Buildkite Terraform provider's [Nested Schema for `provider_settings`](https://registry.terraform.io/providers/buildkite/buildkite/latest/docs/resources/pipeline#nested-schema-for-provider_settings) documentation for more information about these keys.
- **The initial pipeline owner (`default_team_id`)**: To include this setting, for each pipeline resource, replace its `default_team_id` argument's `null` value with the team ID of the data source, similar to those in the [manually defined examples](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#define-your-initial-pipeline-resources).

###### Amend arguments with GraphQL ID values if required

The values of the following arguments in the generated `pipelines.tf` file reference actual GraphQL IDs, as opposed to other Terraform identifiers, which is typically the case for those [defined manually](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#define-your-initial-pipeline-resources):

- `cluster_id`
- `pipeline_template_id`

If you [imported existing pipelines from a Buildkite organization to Terraform](#import-existing-pipeline-resources), and you intend to use `terraform apply` on the resulting `pipelines.tf` to import these back to:

- The _same_ Buildkite organization (for example, for disaster recovery purposes), then there is no need to update these arguments' values in `pipelines.tf`, on the assumption that you retain and intend to reuse the same Buildkite cluster/s and pipeline template/s.
- A _different_ Buildkite organization, or _different_ Buildkite cluster/s or pipeline template/s in the _same_ Buildkite organization, then you'll need to amend these arguments' values in `pipelines.tf` to those of the IDs for the new cluster/s or pipeline template/s associated with these pipelines. Otherwise, you can implement the alternative syntax used when [defining the `pipelines.tf` file manually](/docs/platform/terraform-provider/getting-started-with-managing-pipelines#define-your-initial-pipeline-resources).

---

### Overview

URL: https://buildkite.com/docs/platform/sso

#### Single sign-on support

You can use a single sign-on (SSO) provider to protect access to your organization's data in Buildkite. Buildkite supports many different SSO providers, and you can configure multiple SSO providers for a single Buildkite organization.

> 📘 Pro and Enterprise plan feature
> SSO capabilities are only available to Buildkite customers on the [Pro or Enterprise](https://buildkite.com/pricing) plan.

You can enforce SSO authentication for your entire Buildkite organization by ensuring that [two-factor authentication (2FA)](/docs/platform/team-management/enforce-2fa) has been disabled for your Buildkite organization. Doing so ensures that all users must log in using SSO when accessing your Buildkite organization.
##### Supported providers

Buildkite supports the following SSO providers:

* [Okta](/docs/platform/sso/okta)
* [ADFS](/docs/platform/sso/adfs)
* [GitHub](/docs/platform/sso/github-sso)
* [Google Workspace](/docs/platform/sso/google-workspace)
* [Google Workspace (SAML)](/docs/platform/sso/google-workspace-saml)
* [Azure Active Directory](/docs/platform/sso/azure-ad)
* [OneLogin](/docs/platform/sso/onelogin)
* [Custom SAML](/docs/platform/sso/custom-saml)

##### Adding SSO

Many of the SSO providers can be configured by an organization admin using [Organization Settings → SSO Settings](https://buildkite.com/organizations/-/sso). You can also [configure SSO manually using the GraphQL API](/docs/platform/sso/sso-setup-with-graphql). Once configured, all access to organization data requires signing in to your SSO provider.

##### Disabling and removing SSO

If you need to edit your SSO settings, temporarily stop logins using SSO, or want to delete your SSO provider, you'll first need to disable it. There are two ways to disable a provider:

1. Using the **Disable** button in your SSO provider settings, or
1. Using the [GraphQL API](/docs/platform/sso/sso-setup-with-graphql#disabling-an-sso-provider)

If you have switched off all of your SSO providers, users will be required to log in using a username and password. If users don't have a password, and need access while SSO is switched off, they can perform a 'Forgotten Password' reset.

##### Migrating from one SSO provider to another SSO provider

If you are the administrator of an organization within Buildkite with an existing SSO provider set up, and you want to switch to a different SSO provider, these are the steps you need to take:

1. [Add](/docs/platform/sso#adding-sso) a new SSO provider, verify it, and allow login from both SSO providers. The users in your organization can continue to sign in and use the same user accounts within Buildkite as long as the emails stay the same.
1.
[Disable and remove](/docs/platform/sso#disabling-and-removing-sso) the SSO provider you no longer need.

If the user credentials (email) stay the same, this is all you need to migrate from one SSO provider to another.

> 📘
> If you are also changing the email provider, make sure that Buildkite users in your organization sign in to their existing accounts when performing single sign-on through the new provider, to prevent your organization from being billed twice for the same users.

If you'd like to have some help with the migration, contact support@buildkite.com.

##### SSO session duration

You can configure the SSO session duration to time out after a predetermined time. When the specified duration elapses, the user will be signed out of the session. To set the session duration, you can either use the [GraphQL API](/docs/apis/graphql/cookbooks/organizations#update-the-default-sso-provider-session-duration) or complete the following steps via the settings interface. First, select the SSO provider you would like to configure. Then click **Update Session Duration** from the **Session Duration** section of the SSO provider settings page. You can configure the session duration to any timeout between 6 hours and 8,760 hours (1 year).

##### SSO session IP address pinning

> 📘 Enterprise plan feature
> Pinning SSO sessions to IP addresses is only available to Buildkite customers on the [Enterprise](https://buildkite.com/pricing) plan.

Session IP address pinning prompts users to re-authenticate when their IP address changes. This prevents session hijacking by restricting authorized sessions to only originate from the IP address used to create the session. If any attempt is made to access Buildkite from a different IP address, the session is instantly revoked and the user must re-authenticate.

Users must be required to use SSO in the [organization's user settings](https://buildkite.com/organizations/~/users) for SSO session IP address pinning to work for them.
To set up SSO session IP address pinning, use the [GraphQL API](/docs/apis/graphql/cookbooks/organizations#pin-sso-sessions-to-ip-addresses) or complete the following steps in the Buildkite dashboard:

1. Navigate to the [organization's **Single Sign On** settings](https://buildkite.com/organizations/~/sso).
1. In the **Configured SSO Providers** section, select the provider.
1. In the **Session IP Address Pinning** section, select **Update Session IP Address Pinning**.
1. In the resulting dialog, select the **Session IP Address Pinning** checkbox.
1. Select **Save Session IP Address Pinning**.

##### Frequently asked questions

###### Can some people in the organization use SSO and others not?

Yes, organization admins can select whether a user is 'required' to use SSO or whether it is 'optional'. You can find this setting in the [organization's user settings](https://buildkite.com/organizations/~/users).

###### Do you support JIT provisioning?

Yes, we do. Just-in-time user provisioning (JIT provisioning) creates accounts only when needed. You can grant a user access to Buildkite through your SSO provider, but their account won't be created until it's required—typically upon their first login attempt. For billing purposes, the user doesn't exist until their account is created.

###### What happens if a person leaves our company?

You will need to manually remove them from your Buildkite organization. This will not affect access to the user's personal account or any other organizations they are a member of. For Buildkite customers on an Enterprise plan, you can use [SCIM deprovisioning](/docs/platform/sso/okta#user-deprovisioning) to automate this removal.

###### Can I use different SSO providers for my Buildkite organization at the same time?

Yes, as an admin, you need to [add and verify](/docs/platform/sso#adding-sso) a new SSO provider. Next, you need to allow login from both SSO providers in the [Organization settings](https://buildkite.com/organizations/-/sso).
As long as the sign-in emails stay the same, the users in your organization can continue to sign in and use the same user accounts within Buildkite.

###### Can we enable SSO on multiple domains for one organization?

Yes, by adding multiple SSO providers. You can enable as many different identity providers for your organization as you need.

###### Will enabling SSO disrupt my team?

No, SSO must be verified before being enabled, and can easily be switched off if required. Once enabled, users will see a new "SSO" badge on the organization and will be required to authorize with your SSO provider to access organization data.

###### Will enabling SSO affect builds, agents or pipelines?

No, all of your builds, agents, and pipelines will continue to run as normal.

###### Does enabling SSO affect billing?

No, enabling SSO will not affect how much you are billed. However, whenever a new user signs in to Buildkite using SSO, they will be added to your organization as if you had invited them.

###### Can I sync my identity provider's groups with my Buildkite teams?

Yes, if you are able to associate your provider's groups with your Buildkite team UUIDs, you can adjust the SAML assertion to send 'teams' as an additional [SAML User Attribute](/docs/platform/sso/custom-saml#saml-user-attributes).

###### I want to rename my Buildkite organization. Will it affect my SSO provider(s)?

No, SSO providers are set up using a unique identifier and are unaffected when a Buildkite organization is renamed.

###### Can I merge two organizations that use different SSO providers?

In short, yes, you can. However, merging Buildkite organizations that already have SSO providers might be a tricky scenario, and it's highly recommended that you contact support@buildkite.com for help or guidance before you attempt such a migration.

###### Why am I being asked for my password in the "Authorization Required" screen when signing in using SSO?
Signing in to your Buildkite organization requires authentication and authorization with both Buildkite and your SSO provider. Authentication determines if you are who you claim to be. Authorization determines if you have the correct permissions within the Buildkite organization you're trying to access. Both authentication and authorization are necessary because SSO using one Buildkite organization shouldn't provide access to your other Buildkite organizations.

Confirming your password is Buildkite's way to ensure that you are who you say you are. Once you've authenticated with Buildkite, it determines which organizations your account is authorized to access.

###### I'm already a member of a Buildkite organization. Should I create a new Buildkite user account if I want to work within a different organization (pet project, open source work, etc.)?

Some people choose to have multiple user accounts, one per Buildkite organization. It's fine to do this, but it can be slightly inconvenient, as such an approach does not provide easy tools for switching between accounts. You will need to use different browsers or log in and out quite often. It's recommended to have a single Buildkite user account and join multiple organizations when required.

###### Why do I get the error "this email is already being used by another user" when logging in?

There are two common reasons. The first is that you are using shared accounts, so the email is associated with another account. To resolve that, you need to remove the association from your Email Personal Settings. The second is that the account already exists in Buildkite. If you have access to the old account, delete it before continuing. You may also need to clean up any SSO authorization records on Buildkite for the old account. If that doesn't resolve the issue or you don't have access to the account, please reach out to support@buildkite.com for assistance.
###### Why do I get the error "we couldn't find an account with that email address" when logging in?

This is likely caused by trying to log in from the wrong place. You need to log in from https://buildkite.com/sso and follow the link from the email you receive. If the issue persists, please reach out to support@buildkite.com for assistance.

###### Will setting the session duration affect all current sessions or only the new sessions?

When you [update the session duration](/docs/apis/graphql/cookbooks/organizations#update-the-default-sso-provider-session-duration), it affects both new and old SSO sessions.

###### When is an SSO session considered to start?

An SSO session starts for a user from the moment they sign in using SSO.

---

### Okta

URL: https://buildkite.com/docs/platform/sso/okta

#### Single sign-on with Okta

To add Okta as an SSO provider for your Buildkite organization, you need admin privileges for both Okta and Buildkite.

##### Setting up SSO with SAML

To set up single sign-on, follow the [SAML configuration guide](https://saml-doc.okta.com/SAML_Docs/How-to-Configure-SAML-2.0-for-Buildkite.html).

##### Using SCIM to provision and manage users

Buildkite customers on the [Enterprise plan](https://buildkite.com/pricing/) can optionally enable automatic deprovisioning for their Buildkite users.

###### Supported SCIM features

* Create users
* Deactivate users (deprovisioning)

> 📘
> Buildkite does not bill you for users that you add to your Okta Buildkite app until they sign in to your Buildkite organization.

###### Configuration instructions

Using the SCIM provisioning settings in Okta, Buildkite customers on the [Enterprise](https://buildkite.com/pricing) plan can automatically remove user accounts from their Buildkite organization. In Okta, this feature is called 'Deactivating' a user. You need an enabled Okta SSO provider before you can set up SCIM.

> 📘 User deprovisioning
> User deprovisioning is an Enterprise plan-only feature and is automatically enabled.
As an Enterprise plan customer, if you are using a [custom provider](/docs/platform/sso/custom-saml), please contact support@buildkite.com to have the 'SCIM for Custom SAML' feature flag enabled.

After creating your SSO provider in Buildkite, you will need the **Base URL** and **API Token** from your Okta SSO provider settings.

Go to your Buildkite application in Okta to set up deprovisioning:

1. On the **Sign On** tab in the Okta Buildkite application, edit the **Credentials Details** settings, select **Email** for the **Application username format** and click **Save**.
1. On the **Provisioning** tab, select **Integration** from the left side menu.
1. Click **Configure API Integration**.
1. Select the **Enable API integration** option and enter the URL and API token copied from your Buildkite SSO Provider settings.
1. Click **Test API Credentials** and then **Save** once successfully verified.
1. Select **To App** from the left side menu.
1. Edit the **Provisioning to App** settings, and enable **Create Users** and **Deactivate Users**.
1. Save and test your settings.

###### Provisioning existing users

Buildkite creates accounts for existing Okta users with just-in-time user provisioning (JIT provisioning). To deprovision users, you need to sync them. This can be done in one of two ways:

1. Removing and re-assigning the users and groups to the Okta Buildkite app, or
1. If your Okta tenant has [Lifecycle Management] enabled, then you can use the **Provision User** function on the **Assignments** tab of the Okta Buildkite app.

[Lifecycle Management]: https://www.okta.com/products/lifecycle-management/

##### SAML user attributes

Buildkite accepts a subset of the SAML attributes from identity providers.
The accepted attributes are:

| Attribute | Description |
| --- | --- |
| `admin` | A boolean value that describes whether the user should be provisioned with admin permissions. _Example:_ `true` |
| `email` | A string of the user's email address. _Example:_ "person@company.com" |
| `name` | A string of the user's full name. _Example:_ "Han Solo" |
| `teams` | A comma-separated list of team UUIDs. A team's UUID can be found on the _Team Settings_ page in Buildkite. _Example:_ `a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa,b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee` |

When using the `teams` attribute, you can also specify roles. The `maintainer` or `member` role can be appended to the team UUID. For example, the following code will specify the member role for the first team and the maintainer role for the second team:

```
teams="b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee/member, a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa/maintainer"
```

##### Troubleshooting

Resolve common issues with using Okta and Buildkite.

###### Unexplained permission changes for users

If you notice a user's permissions changing unexpectedly and have SSO set up with Okta, it's likely because permissions are overwritten at login. When a user logs in to their Buildkite account through Okta, Okta sends the user attributes, and Buildkite updates the user's permissions to match.

For example, consider a situation where you grant a user admin permission in Buildkite (for example, Buildkite organization administrator permissions) but not in Okta. When the user next logs in, they lose this admin permission because Buildkite updates the user's permissions to match the attributes sent from Okta.

---

### ADFS

URL: https://buildkite.com/docs/platform/sso/adfs

#### Single sign-on with ADFS

You can use Active Directory Federation Services (ADFS) for your Buildkite organization. To complete this tutorial, you need admin privileges for both your ADFS server and Buildkite.
> 📘 Enterprise plan feature and setting up with GraphQL
> ADFS capabilities for SSO are only available to Buildkite customers on the [Enterprise](https://buildkite.com/pricing) plan.
> You can also set up SSO providers manually with GraphQL. See the [SSO setup with GraphQL guide](/docs/platform/sso/sso-setup-with-graphql) for detailed instructions and code samples.

##### Step 1. Create a Buildkite SSO provider

Click the [Buildkite organization **Settings**](https://buildkite.com/organizations/~/settings)' **Single Sign On** menu item, then choose the ADFS provider from the available options. On the following page, copy the ACS URL for use in Step 2.

##### Step 2. Set up Buildkite in the ADFS management console

The instructions below guide you through using a series of wizards to:

+ Add a Relying Party Trust
+ Add an Issuance Transform Rule, a type of Claim Rule
+ Export the Token-signing Certificate
+ Update the Authentication Policy

With these wizards, you'll set up your domain for SSO and retrieve the information the Buildkite team requires to complete the setup process.

> 📘 This guide was written for, and tested using, Windows Server 2016
> Some of the wizard pages and dialog tab names have changed across versions of Windows Server.
> For a guide written for Windows Server 2012, the [PagerDuty SSO integration guide](https://www.pagerduty.com/docs/guides/adfs-sso-guide/) is very similar to Buildkite's setup process. Follow the PagerDuty instructions, and substitute in the Buildkite values from the instructions below.

###### Step 2.1 Add a relying party trust

From the **Actions** sidebar, click **Add relying party trust...** to start the wizard:

1. **Welcome**: Select **Claims aware**.
1. **Select data source**: Select **Enter data about the relying party manually**.
1. **Specify display name**: Call your relying party `Buildkite`.
1. **Choose profile**: Select **ADFS profile**.
1. **Configure certificate**: Skip this step, as you don't need a token encryption certificate.
1.
**Configure URL**: Select **Enable support for the SAML 2.0 WebSSO protocol**. Enter the ACS URL from Buildkite as your **Relying party SAML 2.0 SSO service URL**.
1. **Configure identifiers**: Enter `https:///adfs/services/trust` into the **Relying party trust identifier** field. Click **Add** to add it to the **Relying party trust identifiers** list.
1. **Choose Access Control Policy**: Choose **Permit everyone**. You can choose to select specific users, but that involves further steps that aren't covered by this guide.
1. **Ready to add trust**: Review your settings to make sure all the URLs are correct.
1. **Finish**: Leave the **Configure claims issuance policy for this application** box checked. Click **Close** to close the wizard and save your setup.

In the **Actions** sidebar, you should now have a subheading **Buildkite**.

###### Step 2.2 Add an issuance transform rule

From the **Buildkite** section of the **Actions** sidebar, click **Edit claim issuance policy...**. From this point, add three rules, where each one begins with using the **Add Rule** button on the **Issuance transform rules** tab:

Rule 1

1. **Choose rule type**: **Send LDAP Attributes as claims**
1. **Configure claim rule**:
    * **Claim Rule Name**: Get Attributes
    * **Attribute Store**: Active Directory
    * **Mapping of LDAP Attributes to outgoing claim types**:
        - **LDAP Attribute**: Email Addresses, Outgoing claim type: Email address
        - **LDAP Attribute**: Display-Name, Outgoing claim type: Name
1. Click **Finish** to add the rule.

Rule 2

1. **Choose rule type**: **Transform an incoming claim**
1. **Configure claim rule**:
    * **Claim Rule Name**: Name ID Transform
    * **Incoming Claim Type**: Email address
    * **Outgoing Claim Type**: Name ID
    * **Outgoing Name ID Format**: Email
    * Select **Pass through all claim values**
1. Click **Finish** to add the rule.

Rule 3

1. **Choose rule type**: **Send claims using a custom rule**
1.
**Configure claim rule**:
    * **Claim Rule Name**: Attribute Name Transform
    * **Custom Rule**: `c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"] => issue(Type = "Name", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType);`
1. Click **Finish** to add the rule.
1. Click **OK** to save and exit the **Claim Issuance Policy** dialog.

For more information on what other attributes Buildkite accepts, see the [SAML user attributes](#saml-user-attributes) table.

###### Step 2.3 Export the token signing certificate

From the **Service** section of the **ADFS** console tree, select the **Certificates** subsection.

1. Click on the certificate listed under the heading **Token-signing**.
1. In the **CN=ADFS Signing** section of the **Actions** sidebar, click **View Certificate...**.
1. In the **Certificate** dialog, select the **Details** tab.
1. Click the **Copy to File...** button.
1. Start the **Certificate Export Wizard**.
1. **Export File Format**: select **Base-64 encoded X.509 (.CER)**.
1. **File to Export**: name your file, and choose where you'd like to export the file.
1. Check the settings are correct, and click **Finish**.

###### Step 2.4 Update the authentication policy

From the **Service** section of the **ADFS** console tree, select the **Authentication Methods** subsection.

1. Under the **Primary Authentication Methods** header, click the **Edit** link.
1. In the **Intranet** section, ensure that the **Forms Authentication** box is checked.
1. Click **OK** to exit the dialog.

##### Step 3. Update your Buildkite SSO provider

On your Buildkite organization settings' **Single Sign On** page, select your ADFS provider from the list of **Configured SSO Providers**. Click the **Edit Settings** button, choose the **Manual data** option, and enter the IdP data you saved during the previous step:

| Setting | Description |
| --- | --- |
| Login URL | The URL where you can log in to your ADFS service. Usually your domain name or IP, with `/adfs/ls` appended. |
| Federation Service Identifier | The URL that identifies your ADFS service. Usually your domain name or IP, with `/adfs/services/trust` appended. |
| X.509 certificate | Attach the X.509 certificate that you downloaded during setup. |

##### Step 4. Perform a test login

Follow the instructions on the provider page to perform a test login. Performing a test login verifies that SSO is working correctly before you activate it for your organization members.

##### Step 5. Enable the new SSO provider

Once you've performed a test login, you can enable your provider using the **Enable** button. Activating SSO will not force a log out of existing users, but will cause all new or expired sessions to authorize through ADFS before organization data can be accessed.

If you need to edit or update your ADFS provider settings at any time, you will need to disable the provider first. For more information on disabling a provider, see the [disabling SSO](/docs/platform/sso#disabling-and-removing-sso) section of the SSO overview.

##### SAML user attributes

Buildkite accepts a subset of the SAML attributes from identity providers. The accepted attributes are:

| Attribute | Description |
| --- | --- |
| `admin` | A boolean value that describes whether the user should be provisioned with admin permissions. _Example:_ `true` |
| `email` | A string of the user's email address. _Example:_ "person@company.com" |
| `name` | A string of the user's full name. _Example:_ "Han Solo" |
| `teams` | A comma-separated list of team UUIDs. A team's UUID can be found on the _Team Settings_ page in Buildkite. _Example:_ `a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa,b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee` |

When using the `teams` attribute, you can also specify roles. The `maintainer` or `member` role can be appended to the team UUID.
For example, the following code will specify the member role for the first team and the maintainer role for the second team:

```
teams="b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee/member, a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa/maintainer"
```

---

### Google Workspace

URL: https://buildkite.com/docs/platform/sso/google-workspace

#### Single sign-on with Google Workspace

Google Workspace (previously G Suite and Google Apps) can be used as an SSO provider for your Buildkite organization. To complete this tutorial, you will need admin privileges for Buildkite.

##### Step 1. Create an SSO provider

In your [Buildkite organization **Settings**](https://buildkite.com/organizations/~/settings)' **Single Sign On** menu item, choose the Google G Suite provider:

> 📘 You can also set up SSO providers manually with GraphQL.
> See the [SSO Setup with GraphQL Guide](/docs/platform/sso/sso-setup-with-graphql) for detailed instructions and code samples.

##### Step 2. Perform a test login

Follow the instructions to perform a test login. Performing a test login will verify that SSO is working correctly before you activate it for your organization members.

##### Step 3. Enable the new SSO provider

Once you've performed a test login you can enable your provider. Activating SSO will not force a log out of existing users, but will cause all new or expired sessions to authorize through G Suite before organization data can be accessed.

If you need to edit or update your G Suite provider settings at any time, you will need to disable the provider first. For more information on disabling a provider, see the [disabling SSO](/docs/platform/sso#disabling-and-removing-sso) section of the SSO overview.

##### SAML user attributes

Buildkite accepts a subset of the SAML attributes from identity providers. The accepted attributes are:

| Attribute | Description |
| --- | --- |
| `admin` | A boolean value that describes whether the user should be provisioned with admin permissions. _Example:_ `true` |
| `email` | A string of the user's email address. _Example:_ "person@company.com" |
| `name` | A string of the user's full name. _Example:_ "Han Solo" |
| `teams` | A comma-separated list of team UUIDs. A team's UUID can be found on the _Team Settings_ page in Buildkite. _Example:_ `a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa,b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee` |

When using the `teams` attribute, you can also specify roles. The `maintainer` or `member` role can be appended to the team UUID. For example, the following code will specify the member role for the first team and the maintainer role for the second team:

```
teams="b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee/member, a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa/maintainer"
```

---

### Google Workspace (SAML)

URL: https://buildkite.com/docs/platform/sso/google-workspace-saml

#### Single sign-on with Google Workspace (SAML)

As an alternative to [Google Workspace SSO using OpenID](/docs/platform/sso/google-workspace), you can use Google Workspace as an SSO provider for your Buildkite organization using SAML. To complete this tutorial, you need admin privileges for both Google Workspace and Buildkite.

> 📘 You can also set up SSO providers manually with GraphQL.
> See the [SSO setup with GraphQL guide](/docs/platform/sso/sso-setup-with-graphql) for detailed instructions and code samples.

After following these steps, your Google Workspace users can sign in to Buildkite using their Google account.

##### Step 1. Create a Buildkite SSO provider

Click the [Buildkite organization **Settings**](https://buildkite.com/organizations/~/settings)' **Single Sign On** menu item, then choose the custom SAML provider from the available options:

Choose the **Provide IdP Metadata Later** option when configuring your Custom SAML provider.
On the following page, copy the ACS URL for use in Step 2.

##### Step 2. Add Buildkite in Google Workspace

Log into your [Google Admin Console](https://admin.google.com), and follow these instructions:

1. In the **Apps** area of the console, select the **Web and mobile apps** submenu.
2. Click the **Add App** menu at the top of the table and choose **Search for apps**.
3. Search for **Buildkite**, and select **Buildkite Web (SAML)**.
4. Copy the SSO URL and Entity ID, and download the Certificate. You'll need these in Step 3.
5. Enter the following service provider details:
    * ACS URL: the URL you copied in Step 1. Replace any existing value suggested by Google.
    * Entity ID: `https://buildkite.com`
6. You can add attribute mapping after the initial setup and testing. Click **Finish** to complete the setup.

##### Step 3. Update your Buildkite SSO provider

On your [Buildkite organization **Settings**](https://buildkite.com/organizations/~/settings)' **Single Sign On** page, select your Custom SAML provider from the list of **Configured SSO Providers**. Click the **Edit Settings** button, choose the **Manual data** option, and enter the IdP data you saved in Step 2:

| Field | Description |
| --- | --- |
| SAML 2.0 Endpoint (HTTP) | The SSO URL you copied during the previous step. |
| Issuer URL | The Entity ID you copied during the previous step. |
| X.509 certificate | The public key certificate generated for you by Google Workspace that you downloaded during the previous step. You need the whole file, not just a link to the file. |

Save your settings. Your provider page opens.

##### Step 4. Perform a test login

Follow the instructions on the provider page to perform a test login. Performing a test login verifies that SSO is working correctly before you activate it for your organization members.

> 🚧
> According to Google, "Changes may take up to 24 hours to propagate to all users." Some changes may take at least several hours, so if the test login fails, it is worth waiting and trying again.
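If you later configure attribute mapping (mentioned in Step 2 above), the mapped values arrive inside the SAML assertion that Google Workspace sends to Buildkite. The fragment below is an illustrative sketch only, using the attribute names from the [SAML user attributes](#saml-user-attributes) table and placeholder values; it is not literal output from Google:

```xml
<!-- Illustrative SAML attribute statement; all values are placeholders. -->
<saml:AttributeStatement xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Attribute Name="email">
    <saml:AttributeValue>person@company.com</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="name">
    <saml:AttributeValue>Han Solo</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="teams">
    <saml:AttributeValue>a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa/member</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
```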
##### Step 5. Enable the new SSO provider

Once you've performed a test login, you can enable your provider using the **Enable** button. Activating SSO will not force a log out of existing users, but will cause all new or expired sessions to authorize through Google Workspace before organization data can be accessed. Users will need to sign in to Buildkite by clicking the Buildkite icon in the Google Apps menu, which you can find by clicking the 9-dot "waffle" icon.

If you need to edit or update your Google Workspace (SAML) provider settings at any time, you will need to disable the provider first. For more information on disabling a provider, see the [disabling SSO](/docs/platform/sso#disabling-and-removing-sso) section of the SSO overview.

##### SAML user attributes

Buildkite accepts a subset of the SAML attributes from identity providers. The accepted attributes are:

| Attribute | Description |
| --- | --- |
| `admin` | A boolean value that describes whether the user should be provisioned with admin permissions. _Example:_ `true` |
| `email` | A string of the user's email address. _Example:_ "person@company.com" |
| `name` | A string of the user's full name. _Example:_ "Han Solo" |
| `teams` | A comma-separated list of team UUIDs. A team's UUID can be found on the _Team Settings_ page in Buildkite. _Example:_ `a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa,b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee` |

When using the `teams` attribute, you can also specify roles. The `maintainer` or `member` role can be appended to the team UUID. For example, the following code will specify the member role for the first team and the maintainer role for the second team:

```
teams="b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee/member, a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa/maintainer"
```

---

### GitHub

URL: https://buildkite.com/docs/platform/sso/github-sso

#### Single sign-on with GitHub

You can use GitHub as an SSO provider for your Buildkite organization.
To complete this tutorial, you need admin privileges for both the Buildkite organization and your GitHub organization.

##### Step 1. Link your Buildkite organization to your GitHub organization

Set up the [Buildkite GitHub Application](https://github.com/apps/buildkite) for your GitHub organization. You need to install Buildkite for the GitHub organization that you want to connect to Buildkite as an SSO provider.

In your [Buildkite organization **Settings**](https://buildkite.com/organizations/~/settings)' **Repository Providers** menu item, connect your GitHub user account to Buildkite. Grant Buildkite the permission to verify your GitHub identity.

##### Step 2. Create an SSO provider

1. In your [Buildkite organization **Settings**](https://buildkite.com/organizations/~/settings)' **Single Sign On** menu item, choose the GitHub provider:
1. Enter the name of your GitHub organization.
1. Click **Create Provider**.

##### Step 3. Perform a test login

Follow the instructions on the provider page to perform a test login. Performing a test login verifies that SSO is working correctly before you activate it for your organization members.

##### Step 4. Enable the new SSO provider

Once you've performed a test login you can enable your provider. Activating SSO will not force a log out of existing users, but will cause all new or expired sessions to authorize through GitHub before organization data can be accessed.

If you need to edit or update your GitHub provider settings at any time, you will need to [disable the SSO provider](/docs/platform/sso#disabling-and-removing-sso) first.

After you've enabled GitHub as the SSO provider for your Buildkite organization, new and expired users will need to log in through GitHub by visiting `buildkite.com/sso/your-organization-name`. They will be asked to provide their email address, and a sign-in link will be emailed to them.
Sending the sign-in link by email is an additional security and privacy measure, as a user can be a member of several Buildkite organizations. If the names of those organizations are themselves revealing, for example `buildkite.com/sso/flyingcar` or `buildkite.com/sso/aliens`, disclosing a list of such organizations to somebody who only knows an email address could leak sensitive information.

##### SAML user attributes

Buildkite accepts a subset of the SAML attributes from identity providers. The accepted attributes are:

| Attribute | Description |
| --- | --- |
| `admin` | A boolean value that describes whether the user should be provisioned with admin permissions. _Example:_ `true` |
| `email` | A string of the user's email address. _Example:_ "person@company.com" |
| `name` | A string of the user's full name. _Example:_ "Han Solo" |
| `teams` | A comma-separated list of team UUIDs. A team's UUID can be found on the _Team Settings_ page in Buildkite. _Example:_ `a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa,b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee` |

When using the `teams` attribute, you can also specify roles. The `maintainer` or `member` role can be appended to the team UUID. For example, the following code will specify the member role for the first team and the maintainer role for the second team:

```
teams="b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee/member, a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa/maintainer"
```

---

### OneLogin

URL: https://buildkite.com/docs/platform/sso/onelogin

#### Single sign-on with OneLogin

You can use OneLogin as an SSO provider for your Buildkite organization. To complete this tutorial, you need admin privileges for both OneLogin and Buildkite.

##### Step 1. Add the Buildkite app to your OneLogin account

Log into your OneLogin account, and follow these steps:

1. In the **Apps** tab of your OneLogin organization's **Admin** area, select the **Add App** button to search the OneLogin directory.
1. Search for 'Buildkite'.
1. Add the Buildkite app to your OneLogin account.
1. Click on the **Configuration** tab of your new Buildkite application.
1. Enter your Buildkite organization slug.
1. Click the **Save** button in the top right to save your configuration.

##### Step 2. Create an SSO provider

In your [Buildkite organization **Settings**](https://buildkite.com/organizations/~/settings)' **Single Sign On** menu item, choose the OneLogin provider:

On the following screen in the setup form, enter your IdP data. The following three required fields can be found in the **SSO** tab on the Buildkite app page in OneLogin:

| Field | Description |
| --- | --- |
| SAML 2.0 Endpoint (HTTP) | The URL where you can log in to OneLogin's SAML service. |
| Issuer URL | The URL that identifies your OneLogin service. |
| X.509 certificate | The public key certificate generated for you by OneLogin. You need the whole file, not just a link to the file. |

> 📘 You can also set up SSO providers manually with GraphQL.
> See the [SSO setup with GraphQL guide](/docs/platform/sso/sso-setup-with-graphql) for detailed instructions and code samples.

##### Step 3. Perform a test login

Follow the instructions on the provider page to perform a test login. Performing a test login verifies that SSO is working correctly before you activate it for your organization members.

##### Step 4. Enable the new SSO provider

Once you've performed a test login you can enable your provider. Activating SSO will not force a log out of existing users, but will cause all new or expired sessions to authorize through OneLogin before organization data can be accessed.

If you need to edit or update your OneLogin provider settings at any time, you will need to disable the provider first. For more information on disabling a provider, see the [disabling SSO](/docs/platform/sso#disabling-and-removing-sso) section of the SSO overview.

##### SAML user attributes

Buildkite accepts a subset of the SAML attributes from identity providers. The accepted attributes are:

| Attribute | Description |
| --- | --- |
| `admin` | A boolean value that describes whether the user should be provisioned with admin permissions. _Example:_ `true` |
| `email` | A string of the user's email address. _Example:_ "person@company.com" |
| `name` | A string of the user's full name. _Example:_ "Han Solo" |
| `teams` | A comma-separated list of team UUIDs. A team's UUID can be found on the _Team Settings_ page in Buildkite. _Example:_ `a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa,b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee` |

When using the `teams` attribute, you can also specify roles. The `maintainer` or `member` role can be appended to the team UUID. For example, the following code will specify the member role for the first team and the maintainer role for the second team:

```
teams="b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee/member, a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa/maintainer"
```

---

### Azure AD

URL: https://buildkite.com/docs/platform/sso/azure-ad

#### Single sign-on with Microsoft Entra ID (Azure AD)

You can use [Microsoft Entra ID](https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id#Overview) (formerly known as Azure Active Directory) as an SSO provider for your Buildkite organization. To complete this tutorial, you need admin privileges for both Azure and Buildkite.

> 📘 You can also set up SSO providers manually with GraphQL.
> See the [SSO setup with GraphQL guide](/docs/platform/sso/sso-setup-with-graphql) for detailed instructions and code samples.

##### Step 1. Create a Buildkite SSO provider

In your [Buildkite organization **Settings**](https://buildkite.com/organizations/~/settings), click **Single Sign On**, then choose the custom SAML provider from the available options:

1. Choose the **Provide IdP Metadata Later** option when configuring your custom SAML provider.
2. Copy the Assertion Consumer Service (ACS) URL for use in [Step 2](#step-2-add-buildkite-in-azure-ad).

##### Step 2. Add Buildkite in Azure AD

In your [Azure Admin Console](https://portal.azure.com/), follow these instructions:

1. Choose **Azure Active Directory**, and under **Quick actions**, choose **Add enterprise application**.
1. Click **+ Create your own application**.
1. Give your application a name, for example 'Buildkite'.
1. Choose **Integrate any other application you don't find in the gallery (Non-gallery)** and click **Create**.
1. Choose **Set up single sign on**, then **SAML**, then in the **Basic SAML Configuration** box, choose **Edit**.
1. Enter the following configuration and save:
    * **Identifier (Entity ID)**: `https://buildkite.com`
    * **Reply URL (Assertion Consumer Service URL)**: the ACS URL you copied in [Step 1](#step-1-create-a-buildkite-sso-provider)
1. Copy the **App Federation Metadata Url** value from the **SAML Signing Certificate** box for use in [Step 3](#step-3-update-your-buildkite-sso-provider).

##### Step 3. Update your Buildkite SSO provider

On your [Buildkite organization **Settings**](https://buildkite.com/organizations/~/settings)' **Single Sign On** page, select the custom SAML provider from the list of **Configured SSO Providers**.

1. Click the **Edit Provider** button, choose the **Configure Using IdP Meta Data URL** option, and enter the **App Federation Metadata Url** you copied in Step 2.
1. Save your new settings. Buildkite returns you to your custom SAML provider page.

##### Step 4. Perform a test login

On your custom SAML provider page, click **Perform Test Login** to verify that SSO is working correctly before you activate it for your organization members.

If you receive an error from Microsoft about the user not being assigned to the application, you can assign an initial user:

1. In your Azure Admin Console, select the new **Buildkite** enterprise app.
1. Choose **Users and groups** from the navigation sidebar.
1. Click **Add user/group**.
1. Select the user and click **Assign**.
Then, on your [Buildkite organization **Settings**](https://buildkite.com/organizations/~/settings)' **Single Sign On** page, select the custom SAML provider from the list of **Configured SSO Providers**, and retry the test login.

##### Step 5. Enable the new SSO provider

Once you've [performed a test login](#step-4-perform-a-test-login) you can enable your SSO provider using the **Enable** button. Enabling the SSO provider will not force a log out of any signed in users, but will cause all new or expired sessions to authorize through Azure AD before accessing any organization data.

> 🚧
> If you need to edit or update your Azure Active Directory provider settings, you will need to [disable the SSO provider](/docs/platform/sso#disabling-and-removing-sso) first.

##### Using SCIM to provision and manage users

Buildkite customers on the [Enterprise plan](https://buildkite.com/pricing/) can automatically add and remove user accounts from their Buildkite organization using the SCIM provisioning settings in Azure AD.

###### Supported SCIM features

* Create users
* Deactivate users (deprovisioning)

> 📘
> Buildkite does not bill you for users that you add to Azure AD until they sign in to your Buildkite organization.

###### Configuration instructions

Adding and removing user accounts in Azure AD is called provisioning. You need an enabled Azure AD SSO provider for your Buildkite organization before you can set up SCIM provisioning.

> 📘
> User deprovisioning is an Enterprise plan-only feature and is automatically enabled. As an Enterprise plan customer, if you are using a [custom provider](/docs/platform/sso/custom-saml), please contact support@buildkite.com to have this feature enabled.

After enabling your Azure AD SSO provider in Buildkite, get the **Base URL** and **API Token** from your Azure AD SSO provider settings. Then go to your [Azure Admin Console](https://portal.azure.com/) and select the new Buildkite enterprise app to set up provisioning:

1. Choose **Provisioning** from the navigation sidebar, then click **Get started**.
1. Select **Automatic** provisioning mode and enter the following details:
    * **Tenant URL**: the Base URL from your Buildkite SSO provider settings
    * **Secret Token**: the API Token from your Buildkite SSO provider settings
1. Click **Test Connection**, and when you receive confirmation the settings are valid, save.
1. Disable group synchronization:
    1. Expand **Mappings**, then click **Provision Azure Active Directory Groups**.
    1. Toggle **Enabled** to **No** and click **Save**.
1. Customize the user mappings:
    1. Expand **Mappings**, then click **Provision Azure Active Directory Users**.
    1. Keep the following four mappings, and delete any others:
        - `userPrincipalName` to `userName`
        - `Switch([IsSoftDeleted], , "False", "True", "True", "False")` to `active`
        - `givenName` to `name.givenName`
        - `surname` to `name.familyName`
1. Toggle **Provisioning Status** to **On** and save.
1. Return to the **Provisioning** menu of your Azure AD enterprise app and view the **Current cycle status** section:
    * If provisioning is working, this will say **Initial cycle completed**.
    * If errors are displayed, click **View provisioning logs** for more details on what went wrong.

##### SAML user attributes

Buildkite accepts a subset of the SAML attributes from identity providers. The accepted attributes are:

| Attribute | Description |
| --- | --- |
| `admin` | A boolean value that describes whether the user should be provisioned with admin permissions. _Example:_ `true` |
| `email` | A string of the user's email address. _Example:_ "person@company.com" |
| `name` | A string of the user's full name. _Example:_ "Han Solo" |
| `teams` | A comma-separated list of team UUIDs. A team's UUID can be found on the _Team Settings_ page in Buildkite. _Example:_ `a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa,b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee` |

When using the `teams` attribute, you can also specify roles. The `maintainer` or `member` role can be appended to the team UUID. For example, the following code will specify the member role for the first team and the maintainer role for the second team:

```
teams="b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee/member, a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa/maintainer"
```

---

### Custom SAML

URL: https://buildkite.com/docs/platform/sso/custom-saml

#### Single sign-on with a SAML provider

You can use any identity provider that supports SAML 2.0 to authorize access to your Buildkite organization. If there isn't a Buildkite guide for your chosen provider, you can set up SAML using the instructions below.

> 📘 Enterprise plan feature
> Custom SAML capabilities for SSO are only available to Buildkite customers on [Enterprise](https://buildkite.com/pricing) plans.

There are two workflows for setting up a new SAML provider, depending on your IdP's setup process: one for when you require an ACS URL from Buildkite to complete your IdP's setup, and one for when you can complete the setup without anything from Buildkite.

> 📘 You can also set up SSO providers manually with GraphQL
> See the [SSO setup with GraphQL guide](/docs/platform/sso/sso-setup-with-graphql) for detailed instructions and code samples.

##### Set up with an ACS URL

If your IdP requires information from Buildkite as part of the setup process, generate your unique Buildkite URLs first and enter the rest of your IdP information later.

###### Get your ACS URL and configure your IdP

Click the [Buildkite organization **Settings**](https://buildkite.com/organizations/~/settings)' **Single Sign On** menu item, then choose the Custom SAML provider from the available options:

Choose the **Provide IdP Metadata Later** option when configuring your Custom SAML provider. On the following screen you'll find the ACS URL in the **Service Provider** section:

If your IdP supports metadata URL setup, you can find your unique Buildkite organization metadata URL below the ACS URL.
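To make the metadata option concrete, here is a minimal sketch of the kind of document an IdP metadata URL typically serves; every name, URL, and certificate value below is a placeholder, not something generated by Buildkite or a real IdP. The `entityID` corresponds to the Issuer URL, the `SingleSignOnService` `Location` to the SAML 2.0 Endpoint, and the `X509Certificate` to the X.509 certificate that Buildkite asks for when you enter the data manually:

```xml
<!-- Illustrative IdP metadata only; every value here is a placeholder. -->
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
                     entityID="https://idp.example.com/saml">
  <md:IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <md:KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate>MIIC...base64-encoded-certificate...</ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
                            Location="https://idp.example.com/saml/sso"/>
  </md:IDPSSODescriptor>
</md:EntityDescriptor>
```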
###### Update your Buildkite SAML provider

In your Buildkite **Single Sign On** menu, select your custom SAML provider from the list of **Configured SSO Providers**. Click the **Edit Settings** button, and choose an option for entering your IdP's information: a metadata URL from your IdP, an XML file from your IdP, or entering the data manually. Manual data entry requires the following three fields:

| Field | Description |
| --- | --- |
| SAML 2.0 Endpoint (HTTP) | The SAML endpoint for your chosen provider. |
| Issuer URL | The identifying URL of your chosen provider. |
| X.509 certificate | The public key certificate for your chosen provider. |

After completing your chosen option, [perform a test login](#perform-a-test-login), then [enable the new SSO provider](#enable-the-new-sso-provider).

##### Set up manually

There are two ways to set up your custom provider: using your Buildkite metadata XML URL, or manually adding your Buildkite data into your identity provider.

###### Set up your IdP

Manual setup is different for each provider; however, it usually requires the following fields:

| Field | Description |
| --- | --- |
| Single sign-on URL | Your unique SSO service URL from Buildkite that will be sending requests to your identity provider. |
| Entity Identifier | `https://buildkite.com` |
| Name ID | The field used to identify users: their email address. |

If your IdP requires an ACS URL before it will provide the above information, follow the instructions in the [Set up with an ACS URL](#set-up-with-an-acs-url) section to generate one. If your custom provider needs further information, please email [support@buildkite.com](mailto:support@buildkite.com).

###### Create a Buildkite SAML provider

Click the [Buildkite organization **Settings**](https://buildkite.com/organizations/~/settings)' **Single Sign On** menu item, then choose the custom SAML provider from the available options:

Choose an option for entering your IdP's information: a metadata URL from your IdP, an XML file from your IdP, or entering the data manually.
Manual data entry requires the following three fields:

| Field | Description |
| --- | --- |
| SAML 2.0 Endpoint (HTTP) | The SAML endpoint for your chosen provider. |
| Issuer URL | The identifying URL of your chosen provider. |
| X.509 certificate | The public key certificate for your chosen provider. |

After completing your chosen option, [perform a test login](#perform-a-test-login), then [enable the new SSO provider](#enable-the-new-sso-provider).

##### Perform a test login

Follow the instructions on the provider page to perform a test login. Performing a test login will verify that SSO is working correctly before you activate it for your organization members.

##### Enable the new SSO provider

Once you've performed a test login you can enable your provider. Activating SSO will not force a log out of existing users, but will cause all new or expired sessions to authorize through your provider before organization data can be accessed.

If you need to edit or update your provider settings at any time, you will need to disable the provider first. For more information on disabling a provider, see the [disabling SSO](/docs/platform/sso#disabling-and-removing-sso) section of the SSO overview.

##### SAML user attributes

Buildkite accepts a subset of the SAML attributes from identity providers. The accepted attributes are:

| Attribute | Description |
| --- | --- |
| `admin` | A boolean value that describes whether the user should be provisioned with admin permissions. _Example:_ `true` |
| `email` | A string of the user's email address. _Example:_ "person@company.com" |
| `name` | A string of the user's full name. _Example:_ "Han Solo" |
| `teams` | A comma-separated list of team UUIDs. A team's UUID can be found on the _Team Settings_ page in Buildkite. _Example:_ `a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa,b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee` |

When using the `teams` attribute, you can also specify roles. The `maintainer` or `member` role can be appended to the team UUID. For example, the following code will specify the member role for the first team and the maintainer role for the second team:

```
teams="b5bbbbbb-3aaa-dd1d-aaa1-eee4eee6eeee/member, a1aaaa1a-b2bb-cccc-d4dd-aa2aaa6aaaaa/maintainer"
```

---

### Set up with GraphQL

URL: https://buildkite.com/docs/platform/sso/sso-setup-with-graphql

#### Setting up single sign-on with GraphQL

Buildkite's single sign-on (SSO) can be set up by emailing support, or you can set it up manually using our [GraphQL APIs](/docs/apis/graphql-api). This tutorial covers how to set up SSO manually using GraphQL.

> 📘
> For details on every option available in the GraphQL APIs, please use the documentation sidebar built into the [GraphQL explorer](/docs/graphql-api#getting-started).

##### Finding your organization's GraphQL ID

For every type of SSO provider, you need your Buildkite organization's [GraphQL ID](/docs/apis/graphql-api#graphql-ids). You can find your organization's GraphQL ID using the following GraphQL query:

```graphql
query OrganizationId {
  organization(slug: "myorg") {
    id
  }
}
```

##### Setting up G Suite

###### Step 1

The first step in setting up a G Suite SSO provider is to use the `ssoProviderCreate` mutation to create a new provider in your Buildkite organization:

```graphql
mutation CreateProvider {
  ssoProviderCreate(input: {
    organizationId: "",
    type: GOOGLE_GSUITE,
    googleHostedDomain: "myorg.com",
    discloseGoogleHostedDomain: true,
    emailDomain: "myorg.com",
    emailDomainVerificationAddress: "admin@myorg.com"
  }) {
    ssoProvider {
      id
      state
      url
    }
  }
}
```

You need the provider's `id` from the output of this mutation in step three.

###### Step 2

The second step is to use the `url` that was returned above to perform a test login. Open the `url` in a browser, and perform a test login using G Suite.

###### Step 3

Once you complete the test login, you can do the final step: enabling the provider using the `ssoProviderEnable` mutation.
Running this mutation will require all your users to authorize using your G Suite provider before they can access your organization on Buildkite.

```graphql
mutation EnableProvider {
  ssoProviderEnable(input: {
    id: ""
  }) {
    ssoProvider {
      state
      url
    }
  }
}
```

You should now see that the provider's state is enabled.

> 🚧
> See the `SSOProviderUpdatePayload` documentation for other properties that can be configured on your SSO provider, such as `sessionDurationInHours` and `note`.

##### Setting up SAML (Google Cloud Identity, Okta, OneLogin, ADFS and others)

###### Step 1

The first step in setting up a SAML-based provider is to use the `ssoProviderCreate` mutation to create a new provider in Buildkite. This mutation returns the details you'll need for your SSO provider's system. The `emailDomainVerificationAddress` requires the same domain as `emailDomain`, and must be one of `admin@`, `administrator@`, `postmaster@`, or `webmaster@`.

```graphql
mutation CreateProvider {
  ssoProviderCreate(input: {
    organizationId: "",
    type: SAML,
    emailDomain: "myorg.com",
    emailDomainVerificationAddress: "admin@myorg.com"
  }) {
    ssoProvider {
      id
      state
      ... on SSOProviderSAML {
        serviceProvider {
          metadata {
            url
          }
          ssoURL
          issuer
        }
      }
    }
  }
}
```

You need the provider's `id` from the output of this mutation in steps three and four.

###### Step 2

The next step is to log into your SSO provider's system and set up Buildkite using the details from the GraphQL response above. If the SSO provider supports a metadata URL, you can use the `url` property of the `metadata` object. If your SSO provider does not support metadata URLs, use the `ssoURL` property for the ACS URL or SSO URL, and the `issuer` property for the Issuer or Entity ID.

###### Step 3

If your provider shows a metadata URL to complete the setup, you can use that with the `ssoProviderUpdate` mutation to have Buildkite automatically complete the setup. This will not yet affect any of your Buildkite users.
```graphql mutation UpdateProviderMetaData { ssoProviderUpdate(input: { id: "", identityProvider: { metadata: { url: "https://myssoprovider.com/metadata/..." } } }) { ssoProvider { state url ... on SSOProviderSAML { serviceProvider { ssoURL issuer } } } } } ``` If your SSO provider didn't provide a metadata URL, then copy SSO URL, Issuer (also known as Entity ID), and Certificate into the `ssoProviderUpdate` mutation: ```graphql mutation UpdateProviderMetaData { ssoProviderUpdate(input: { id: "", identityProvider: { ssoURL: "https://myssoprovider.com/...", issuer: "https://myssoprovider.com/...", certificate: "---BEGIN CERT---..." } }) { ssoProvider { state url ... on SSOProviderSAML { serviceProvider { ssoURL issuer } } } } } ``` If your SSO provider requests additional info, use the `ssoURL` property for the ACS URL or SSO URL, and the `issuer` property for the Issuer or Entity ID. ###### Step 4 You can now perform a test login from your SSO provider's web interface, or using the `url` returned from the update mutation. Once you complete the test login, you can do the final step: enabling the provider using the `ssoProviderEnable` mutation. Running this mutation will require all your users to authorize using your SSO provider before they can access your organization on Buildkite. ```graphql mutation EnableProvider { ssoProviderEnable( input: { id: "" } ) { ssoProvider { state } } } ``` You should now see that the provider's state is enabled. >🚧 > See the `SSOProviderUpdatePayload` documentation for other properties that can be configured on your SSO provider, such as `sessionDurationInHours` and `note`. 
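The mutations above can also be sent outside the GraphQL explorer. The following is a minimal Python sketch of wrapping one of them in the JSON body and headers the GraphQL endpoint expects; the token and provider ID values are placeholders you must replace with your own, and the actual network call is left commented out:

```python
import json
import urllib.request

# Buildkite's GraphQL endpoint.
GRAPHQL_URL = "https://graphql.buildkite.com/v1"

def build_request(query: str, token: str) -> urllib.request.Request:
    """Wrap a GraphQL document in the JSON body and headers Buildkite expects."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        GRAPHQL_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The enable mutation from the page above; PROVIDER_ID is a placeholder.
enable_provider = """
mutation EnableProvider {
  ssoProviderEnable(input: { id: "PROVIDER_ID" }) {
    ssoProvider { state }
  }
}
"""

req = build_request(enable_provider, token="YOUR_API_ACCESS_TOKEN")
# urllib.request.urlopen(req) would send the mutation; it is left commented
# out here so the sketch can be run without credentials.
print(req.get_full_url())
```

The same `build_request` helper (a name introduced for this sketch) works for any of the queries and mutations on this page; remember the token needs the **Enable GraphQL API Access** permission.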
##### Finding an SSO provider's details If you need to find the ID of a particular SSO provider, you can query the `ssoProviders` field of your organization: ```graphql query FindProviders { organization(slug: "") { ssoProviders(first: 100) { edges { node { id type state createdAt enabledAt url emailDomain emailDomainVerificationAddress emailDomainVerifiedAt } } } } } ``` Some provider types have additional fields which you can query using GraphQL's [Inline Fragment](https://graphql.org/learn/queries/#inline-fragments) syntax, for example: ```graphql query FindProviders { organization(slug: "") { ssoProviders(first: 100) { edges { node { id url ... on SSOProviderSAML { identityProvider { ssoURL issuer certificate metadata { xml url } } } ... on SSOProviderGoogleGSuite { googleHostedDomain discloseGoogleHostedDomain } } } } } } ``` ##### Disabling an SSO provider If you need to disable an SSO provider, you can do so using the `ssoProviderDisable` mutation. ```graphql mutation DisableProvider { ssoProviderDisable(input:{ id: "", disabledReason: "Disabled because..." }) { ssoProvider { state url } } } ``` --- ### Token security URL: https://buildkite.com/docs/platform/security/tokens #### Token security Buildkite is a member of the [GitHub secret scanning program](https://docs.github.com/en/code-security/secret-scanning/secret-scanning-partnership-program/secret-scanning-partner-program). If you have enabled [GitHub Secret Protection](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security#github-secret-protection) for [repositories](https://docs.github.com/en/code-security/secret-scanning/enabling-secret-scanning-features/enabling-secret-scanning-for-your-repository) in your GitHub organization, GitHub will automatically scan these _private_ or _public_ repositories for Buildkite tokens and notify you if any are found.
In the case of [Buildkite API access tokens](#supported-buildkite-tokens-api-access-tokens) (`bkua_`) leaked on _public_ repositories, GitHub will notify Buildkite directly, and any valid tokens will be automatically revoked and their owners and associated organizations notified. If you are notified of any other tokens, please contact Buildkite support. ##### Supported Buildkite tokens The following Buildkite tokens are supported by this program. ###### API access tokens Buildkite [API access tokens](/docs/apis/managing-api-tokens) are also known as _Buildkite user access_ tokens, whose acronym forms the prefix for these types of tokens. - Prefix: `bkua_` - Example: `bkua_*****************************************************` _Applies to API access tokens created after: March, 2023_ ###### Agent session tokens Buildkite agent [session tokens](/docs/agent/self-hosted/tokens#additional-agent-tokens-session-tokens) are also known as _Buildkite agent access_ tokens, whose acronym forms the prefix for these types of tokens. - Prefix: `bkaa_` - Example: `bkaa_***************************************************************************` _Applies to agent session tokens created after: January, 2025_ ###### Agent job tokens The acronym of Buildkite agent [job tokens](/docs/agent/self-hosted/tokens#additional-agent-tokens-job-tokens) forms the prefix for these types of tokens.
- Prefix: `bkaj_` - Example: `bkaj_*********************************************************************************************************************************************************************************************************************************************************************************************************************************************` ###### Unclustered agent tokens Buildkite [unclustered agent tokens](/docs/agent/self-hosted/tokens#working-with-unclustered-agent-tokens) are also known as _Buildkite agent registration_ tokens, whose acronym forms the prefix for these types of tokens. - Prefix: `bkar_` - Example: `bkar_*************************************************************************` _Applies to unclustered agent tokens created after: April, 2025_ ###### Agent tokens Buildkite [agent tokens](/docs/agent/self-hosted/tokens) are also known as _Buildkite cluster tokens_, whose acronym forms the prefix for these types of tokens. - Prefix: `bkct_` - Example: `bkct_*************************************************************************` _Applies to agent tokens created after: April, 2025_ ###### Registry tokens Buildkite [registry tokens](/docs/package-registries/registries/manage#configure-registry-tokens) are a type of Buildkite Package (Registries) token, whose acronym forms the prefix for these tokens. - Prefix: `bkpt_` - Example: `bkpt_*******************************************************************************************************************************************************************************************************` ###### Package Registries temporary tokens Buildkite Package Registries temporary tokens are presented on a registry's pages for either publishing packages to the registry or installing specific packages from it.
See the relevant [Package ecosystem](/docs/package-registries/ecosystems) pages to learn more about these types of tokens, which are a type of Buildkite Package (Registries) token, whose acronym forms the prefix for these tokens. - Prefix: `bkpt_` - Example: `bkpt_*******************************************************************************************************************************************************************************************************` ###### Portal tokens Buildkite portal tokens cover the following types of tokens: - _Long-lived service tokens_, generated when a [new portal is created](/docs/apis/graphql/portals#creating-a-portal), as well as [through the portal's **Security** page](/docs/apis/graphql/portals#authentication). - [Ephemeral portal tokens](/docs/apis/graphql/portals/ephemeral-portal-tokens), which require a [portal secret](#supported-buildkite-tokens-portal-secrets) to be [generated](/docs/apis/graphql/portals/ephemeral-portal-tokens#requesting-an-ephemeral-portal-token). - [Portal tokens](/docs/apis/graphql/portals/user-invoked-portals#short-lived-portal-token-generating-a-portal-token) that are [user-invoked and scoped](/docs/apis/graphql/portals/user-invoked-portals). These types of tokens are also known as _Buildkite portal access tokens_, whose acronym forms the prefix for these types of tokens. - Prefix: `bkpat_` - Example: `bkpat_******************************************************` ###### Portal secrets Buildkite [portal secrets](/docs/apis/graphql/portals/ephemeral-portal-tokens#generating-a-secret), whose acronym forms the prefix for their values, are used to generate [ephemeral portal tokens](/docs/apis/graphql/portals/ephemeral-portal-tokens#requesting-an-ephemeral-portal-token), which are a type of [portal token](#supported-buildkite-tokens-portal-tokens).
- Prefix: `bkps_` - Example: `bkps_****************************************************************` --- ### Slack Workspace URL: https://buildkite.com/docs/platform/integrations/slack-workspace #### Slack Workspace The Slack Workspace integration lets you receive notifications in your [Slack](https://slack.com/) workspace. This integration supports: - [Pipelines build notifications](/docs/pipelines/integrations/notifications/slack-workspace) - [Test Engine workflow Slack notification](/docs/test-engine/workflows/actions#send-slack-notification) [Adding a **Slack Workspace** notification service](https://buildkite.com/organizations/-/services/slack_workspace/new) will authorize access for your entire Slack app for a given Slack workspace. You only need to set up this integration once per Slack workspace, after which, you can then configure notifications to be sent to any Slack channels or users. > 📘 > Setting up a Workspace requires Buildkite organization admin access. ##### Connect Slack workspace 1. Select **Settings** in the global navigation and select **Notification Services** in the left sidebar. 1. Select the **Add** button on **Slack Workspace**. 1. Select the **Add to Slack** button: This action redirects you to Slack. 1. Log in to Slack and grant Buildkite permission to post across your workspace. 1. After granting access, you can then configure [Pipeline build notifications](/docs/pipelines/integrations/notifications/slack-workspace) and [Test Engine workflow Slack notifications](/docs/test-engine/workflows/actions#send-slack-notification). ##### Privacy policy For details on how Buildkite handles your information, please see Buildkite's [Privacy Policy](https://buildkite.com/about/legal/privacy-policy/). 
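The token prefixes documented in the Token security section above lend themselves to a simple local pre-commit check, complementing GitHub's server-side scanning. A minimal Python sketch (the character class after each prefix is an assumption of this sketch, since the exact token alphabet isn't documented here, and `find_buildkite_tokens` is a name introduced for illustration):

```python
import re

# Prefixes documented in the "Token security" section above.
BUILDKITE_PREFIXES = [
    "bkua_", "bkaa_", "bkaj_", "bkar_", "bkct_", "bkpt_", "bkpat_", "bkps_",
]

# Assumed token alphabet: a run of token-like characters after the prefix.
TOKEN_RE = re.compile(
    r"\b(" + "|".join(re.escape(p) for p in BUILDKITE_PREFIXES) + r")[A-Za-z0-9_-]+"
)

def find_buildkite_tokens(text: str) -> list[str]:
    """Return candidate Buildkite tokens found in text, in order of appearance."""
    return [m.group(0) for m in TOKEN_RE.finditer(text)]

sample = "config = { token: 'bkua_abc123XYZ' }  # oops, a hardcoded credential"
print(find_buildkite_tokens(sample))  # prints ['bkua_abc123XYZ']
```

A check like this catches accidental commits before they ever reach a remote; treat any match as a token to revoke, not just to delete.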
--- ### Accessibility URL: https://buildkite.com/docs/platform/accessibility #### Accessibility Buildkite is committed to making its web application usable for everyone, including people who rely on assistive technologies, keyboard navigation, or adjusted visual settings. This page documents the accessibility features currently available across the Buildkite platform. ##### Theme and display options Buildkite offers three display theme options, accessible from the top global navigation bar. While these are primarily comfort and usability features rather than accessibility-specific accommodations, they can help users adjust the interface to suit their visual preferences: - **Light**: the default Buildkite theme - **Dark**: an experimental dark mode that inverts the interface colors - **System**: automatically matches your operating system's light or dark preference The theme selection persists across sessions. When set to **System**, Buildkite responds to changes in your operating system's display settings in real time. > 📘 Experimental dark mode > Dark mode is currently experimental and uses a CSS color inversion technique. Some visual elements may not be displayed perfectly in dark mode. Buildkite is working toward native dark mode support in newer interface components. ###### Job log themes The job log viewer offers an additional theme toggle, allowing you to switch between a default dark theme and a light theme for improved readability based on your preference. ##### Keyboard navigation The Buildkite web application supports keyboard navigation, largely through standard browser behavior. Some areas of the application have more intentional keyboard support than others. ###### Skip navigation Some layouts include a **Skip to main content** link that becomes visible when focused. This allows keyboard and screen reader users to bypass the navigation and jump directly to the page content. 
###### Focus indicators Interactive elements display visible focus rings when navigated using the keyboard. Buildkite uses the `:focus-visible` CSS pseudo-class, so that focus indicators appear during keyboard navigation and don't interfere with mouse interactions. Coverage may vary across some custom components. ###### Keyboard shortcuts Several areas of the application support keyboard shortcuts: - **Build page**: has dedicated keyboard shortcuts for navigating builds, jumping to failures, and searching steps. See [build page keyboard shortcuts](/docs/pipelines/build-page#keyboard-shortcuts) for the full list. - **Job log search**: press `s` to focus the search input, and `Escape` to close it. - **Dialogs**: press `Escape` to close any open dialog, and focus is trapped within the dialog while it is open. - **Dropdowns and autocomplete**: arrow keys navigate options, `Enter` selects, and `Escape` closes. ###### Interactive components Custom interactive components such as dropdowns, combo boxes, tree views, and toggle switches all support keyboard operation, including arrow key navigation and enter/escape key handling. ##### Screen reader support Buildkite uses semantic HTML and ARIA attributes to support screen readers. The depth of support varies across the application: key components have intentional ARIA labeling, while others rely on browser and platform defaults. ###### Semantic structure - Pages use the `<main>` landmark element with an `id` anchor for skip navigation. - The `lang="en"` attribute is set on the root `<html>` element. - Navigation, headers, and content areas use semantic HTML elements in many areas. ###### ARIA attributes Key interface components include ARIA attributes to convey their purpose and state to assistive technologies: - **Build status icons**: include `aria-label` attributes describing the current state, for example, **Build state: PASSED**. - **Dialogs**: use `role="dialog"` with appropriate labeling.
- **Tree views**: the build sidebar uses `role="tree"` and `role="treeitem"` with `aria-expanded` state. - **Combo boxes**: use `role="listbox"` and `role="option"` with `aria-selected` state. - **Toggle switches**: use `role="switch"` with `aria-checked` and `aria-labelledby`. - **Tab interfaces**: use `role="tablist"` for tabbed navigation. - **Status updates**: use `role="status"` and `role="alert"` to announce changes to screen readers. - **Decorative elements**: marked with `aria-hidden="true"` to prevent screen reader noise. ###### Visually hidden content Buildkite uses visually hidden text (hidden from the screen but available to screen readers) to provide additional context where the visual interface relies on icons or layout for meaning. ##### Color and contrast ###### Status indicators Build and job status indicators use both color and distinct icon shapes to convey state. For example, a passed build uses a green checkmark, while a failed build uses a red cross. This means status information doesn't rely on color alone and remains accessible to color-blind users. ###### Color token system Buildkite uses a semantic color token system that maps status concepts (success, warning, error, neutral) to specific color palettes with defined foreground, background, and stroke values. This system enables consistent color usage across the application. ###### Focus ring colors Keyboard focus indicators use high-visibility colors (lime green and purple) that contrast with the surrounding interface elements. ##### Typography and text scaling - The base font size is set to 16px, matching the browser default. - The viewport is configured with `width=device-width, initial-scale=1.0` without restricting user zoom, allowing browser-level text scaling and zoom to work as expected. - Buildkite uses a defined typography hierarchy for headings and body text. ##### Form accessibility - Form inputs include associated `<label>` elements.
- Required fields are indicated with a **Required** text suffix. - Related form controls are grouped using `<fieldset>` and `<legend>` elements where appropriate. ##### Voice input and text-to-speech The Buildkite web application doesn't include custom voice input or text-to-speech features. The application relies on operating system and browser-level assistive technologies for these capabilities. The ARIA attributes and semantic HTML described above support the correct functioning of these platform-level tools. ##### Mobile accessibility The Buildkite web application uses responsive design, adapting to different screen sizes and orientations. Standard browser accessibility features, including text scaling and screen reader support, function on mobile devices. ##### Known limitations While Buildkite continues to improve accessibility, there are some known limitations: - Dark mode uses a CSS color inversion technique, which can occasionally affect color contrast ratios or cause visual artifacts in some components. - High contrast mode (`prefers-contrast`) is not currently detected or supported with custom styles. - Form error messages are not consistently associated with their inputs using `aria-describedby`, which may affect screen reader users in some forms. - Not all dynamic content updates use `aria-live` regions, so some real-time updates may not be immediately announced by screen readers. - There is no built-in font size adjustment feature in the Buildkite interface (browser zoom can be used instead). - No keyboard shortcut reference or help panel is currently available. ##### Feedback If you encounter accessibility issues or have suggestions for improvement, contact the Buildkite support team at [support@buildkite.com](mailto:support@buildkite.com).
--- ### Pricing and plans URL: https://buildkite.com/docs/platform/pricing-and-plans #### Pricing and plans Buildkite offers a range of plans designed to suit teams of all sizes, from individual developers to large enterprises. ##### Available plans Buildkite offers the following plans: - **Personal**: For individual developers exploring the Buildkite platform. - **Pro**: For growing teams that need more capacity and features. - **Enterprise**: For organizations that require advanced security, compliance, and dedicated support. Plans differ in the features available, usage limits, and level of support provided. For a detailed comparison of what each plan includes, visit the [Buildkite pricing page](https://buildkite.com/pricing/). To find out which plan your organization is on or to manage your subscription, select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. ##### Usage and limits Each plan has default [service quotas](/docs/platform/limits) that define usage limits across the Buildkite platform. Buildkite organization administrators can view the quotas that apply to their organization in the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page in the **Quotas** tab > **Service Quotas** section. --- ### Limits URL: https://buildkite.com/docs/platform/limits #### Limits This page outlines usage limits designed to protect your builds from unintentional resource issues and ensure reliable service for all customers. Limits vary by subscription tier: - Personal plan - Trial plan - Pro plan - Enterprise plan You can find out more about the available plans and what is included in them in [Pricing](https://buildkite.com/pricing/). The [**Usage** page](https://buildkite.com/organizations/~/usage) is available on every Buildkite plan and shows a breakdown of usage metrics across the Buildkite platform and all products for your Buildkite organization. 
##### Viewing your organization's service quotas Buildkite organization administrators can view the service quotas that apply to their organization on the **Service Quotas** page in **Organization Settings**. To access your organization's service quotas: 1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. Select **Quotas** to open the **Service Quotas** page. The **Service Quotas** page displays your organization's current limits grouped by product area. Each quota shows the limit that applies to your organization, which may differ from the defaults listed on this page. A **Custom** badge next to a quota indicates that your organization has a limit that differs from the default for your plan. An **Exceeded in last 24h** badge indicates that your organization reached this limit within the past 24 hours. > 📘 Adjusting limits > Some organization-level limits can be increased on request depending on your plan. Contact Buildkite support at support@buildkite.com with details about your use case, or contact your Technical Account Manager if you have one. ##### Platform and organization-level limits Platform and organization-level limits apply to all Buildkite products. These limits protect your organization from unintentional resource exhaustion and ensure reliable service for all customers. These limits are scoped to your organization. | Service limit type | Description and default limit | | --- | --- | ##### Pipelines limits The following table lists the default service limits for [Pipelines](/docs/pipelines). | Service limit type | Description and default limit | | --- | --- | ###### Hosted agents limits The following limits apply to the [Buildkite hosted agents](/docs/agent/buildkite-hosted) used in Buildkite Pipelines.
| Limit type | Trial | Personal | Pro | Enterprise | | --- | --- | --- | --- | --- | | **Linux concurrency** | 10 | 3 | 20 | Custom | | **macOS concurrency** | 3 | - | 5 | Custom | | **Linux minutes, per month** | 2,000 | 550 | usage-based | usage-based | | **macOS minutes, per month** | 3,000 | not available | usage-based | usage-based | | **Container cache volume** | 50 GB | 50 GB | 50 GB | 50 GB | | **Git mirror volume** | 5 GB | 5 GB | 5 GB | 5 GB | ##### Test Engine limits The following table lists the default service limits for [Test Engine](/docs/test-engine). | Service limit type | Description and default limit | | --- | --- | ##### Package Registries limits The following table lists the default service limits for [Package Registries](/docs/package-registries). The limits in Package Registries are based on the [subscription tier](https://buildkite.com/pricing/): | Service limit type | Description and default limit | | --- | --- | --- ### Service level agreement URL: https://buildkite.com/docs/platform/service-level-agreement #### Service level agreement Buildkite provides a service level agreement (SLA) that defines the availability and uptime commitments for the Buildkite platform. ##### Uptime and availability Buildkite is committed to maintaining high availability for the platform. Real-time and historical uptime data is available on the [Buildkite status page](https://www.buildkitestatus.com/). The status page provides: - Current operational status of Buildkite services - Historical uptime data - Incident reports and updates - Scheduled maintenance notifications ##### SLA details For full details on the availability commitments provided by Buildkite, response times, and support levels, refer to the [Buildkite Service Level Agreement](https://buildkite.com/about/legal/service-level-agreement/). SLA terms may vary depending on your [plan](/docs/platform/pricing-and-plans).
--- ### Legal and policies URL: https://buildkite.com/docs/platform/legal-and-policies #### Legal and policies Buildkite's legal documents outline the terms and conditions that govern the use of the Buildkite platform. ##### Terms of service The terms of service define the rights and responsibilities of Buildkite and its customers when using the platform. For full details, refer to the [Buildkite Terms of Service](https://buildkite.com/about/legal/terms-of-service/). ##### All legal documents For a complete list of Buildkite's legal documents, including privacy policies, data processing agreements, and acceptable use policies, visit the [Buildkite legal page](https://buildkite.com/about/legal/). --- ## APIs ### APIs URL: https://buildkite.com/docs/apis #### Buildkite APIs The Buildkite APIs documentation covers all API-related features of Buildkite available across Buildkite [Pipelines](/docs/pipelines), [Test Engine](/docs/test-engine), and [Package Registries](/docs/package-registries). ##### Authentication The Buildkite [REST](#rest-api) and [GraphQL](#graphql) APIs expect an access token to be provided using the `Authorization` HTTP header: ```bash curl -H "Authorization: Bearer $TOKEN" https://api.buildkite.com/v2/user ``` Generate an [access token](https://buildkite.com/user/api-access-tokens). ###### Managing API access tokens Learn more about Buildkite's API access tokens and how to manage them in [Managing API access tokens](/docs/apis/managing-api-tokens), which covers the following topics: - The [scopes](/docs/apis/managing-api-tokens#token-scopes) which can be assigned to API access tokens. - [Auditing](/docs/apis/managing-api-tokens#auditing-tokens) token usage. - [Removing](/docs/apis/managing-api-tokens#auditing-tokens-removing-an-organization-from-a-token) Buildkite organization access to tokens. - [Limiting](/docs/apis/managing-api-tokens#restricting-api-access-by-ip-address) a token's access by IP address.
- A token's [lifecycle](/docs/apis/managing-api-tokens#api-token-lifecycle) characteristics. - Managing a token's [security](/docs/apis/managing-api-tokens#api-token-security), including [token rotation](/docs/apis/managing-api-tokens#api-token-security-rotation) and [GitHub's secret scanning program](/docs/apis/managing-api-tokens#api-token-security-github-secret-scanning-program). ###### Webhook authentication If you are implementing [Buildkite webhooks](#webhooks), all webhooks for [Pipelines](/docs/apis/webhooks/pipelines#http-headers) and [Package Registries](/docs/apis/webhooks/package-registries#http-headers) contain an `X-Buildkite-Token` header that allows you to verify the authenticity of the request. ##### REST API The Buildkite REST API aims to give you complete programmatic access and control of Buildkite to extend, integrate, and automate anything to suit your particular needs. Using the Buildkite REST API is as easy as: 1. Ensuring you have generated an [API access token](/docs/apis/managing-api-tokens) with as many [scopes](/docs/apis/managing-api-tokens#token-scopes) as you require. 2. Making requests to https://api.buildkite.com, passing the token you generated in the `Authorization` header, for example: ```bash curl -H "Authorization: Bearer $TOKEN" https://api.buildkite.com/v2/user ``` Learn more about Buildkite's REST API in the [REST API overview](/docs/apis/rest-api). ##### GraphQL The Buildkite GraphQL API provides an alternative to the REST API. The GraphQL API allows for more efficient retrieval of data by enabling you to fetch multiple, nested resources in a single request. You can access the GraphQL API through the _GraphQL console_ (see the [GraphQL overview](/docs/apis/graphql-api) page > [Getting started](/docs/apis/graphql-api#getting-started) section for more information), as well as at the command line (see the [Console and CLI tutorial](/docs/apis/graphql/graphql-tutorial) page for more information).
For command line access, you'll need a Buildkite [API access token](/docs/apis/managing-api-tokens) with the **Enable GraphQL API Access** permission selected. Learn more about: - Buildkite's GraphQL API in the [GraphQL API overview](/docs/apis/graphql-api) and [Console and CLI tutorial](/docs/apis/graphql/graphql-tutorial) pages. - The differences between Buildkite's REST and GraphQL APIs in [API differences](/docs/apis/api-differences). ###### Portals In the absence of configurable [scope](/docs/apis/managing-api-tokens#token-scopes) restrictions on API access tokens for the GraphQL API, the _portals_ feature provides a mechanism to restrict access to the Buildkite platform through the GraphQL API. Portals are GraphQL-based operations, which are stored by Buildkite, and are made accessible through authenticated URL endpoints. Learn more about the portals feature in [Portals](/docs/apis/graphql/portals). ##### MCP server Buildkite provides both remote and local [MCP servers](https://modelcontextprotocol.io/docs/learn/server-concepts), which provide your AI tools with access to Buildkite's REST API features. Learn more about the Buildkite MCP server from the [MCP server overview](/docs/apis/mcp-server) page, along with its configurable [tools](/docs/apis/mcp-server/tools#available-mcp-tools) and [toolsets](/docs/apis/mcp-server/tools/toolsets). ##### Webhooks Buildkite's webhooks allow your third-party applications and systems to monitor and respond to events within your Buildkite organization, providing a real-time view of activity and allowing you to extend and integrate Buildkite into these systems. For Pipelines, webhooks can be [added and configured](/docs/apis/webhooks/pipelines#add-a-webhook) on your Buildkite organization's [**Notification Services** settings](https://buildkite.com/organizations/-/services) page.
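As noted under Webhook authentication earlier on this page, Pipelines and Package Registries webhooks carry an `X-Buildkite-Token` header you can check in your receiving service. A minimal Python sketch (the `is_authentic` helper name and the hardcoded expected token are assumptions for illustration; in practice the value comes from your webhook's settings in Buildkite, and a constant-time comparison avoids timing leaks):

```python
import hmac

# The token value shown in your webhook's configuration in Buildkite.
# Hardcoding it here is for illustration only; load it from a secret store.
EXPECTED_TOKEN = "your-webhook-token"

def is_authentic(headers: dict[str, str]) -> bool:
    """Verify the X-Buildkite-Token header with a constant-time comparison."""
    received = headers.get("X-Buildkite-Token", "")
    return hmac.compare_digest(received, EXPECTED_TOKEN)

print(is_authentic({"X-Buildkite-Token": "your-webhook-token"}))  # True
print(is_authentic({"X-Buildkite-Token": "forged"}))              # False
```

Reject any request that fails this check before parsing its payload.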
For Test Engine and Package Registries, webhooks can be configured through their specific [test suites](/docs/apis/webhooks/test-engine) and [registries](/docs/apis/webhooks/package-registries#add-a-webhook), respectively. This section also covers documentation on how to configure incoming webhooks for the Buildkite platform, available through [pipeline triggers](/docs/apis/webhooks/incoming/pipeline-triggers). Learn more about Buildkite's webhooks from the [Webhooks overview](/docs/apis/webhooks) page. --- ### Overview URL: https://buildkite.com/docs/apis #### Buildkite APIs The Buildkite APIs documentation contains docs for all API-related features of Buildkite available across Buildkite [Pipelines](/docs/pipelines), [Test Engine](/docs/test-engine), and [Package Registries](/docs/package-registries). ##### Authentication The Buildkite [REST](#rest-api) and [GraphQL](#graphql) APIs expect an access token to be provided using the `Authorization` HTTP header: ```bash curl -H "Authorization: Bearer $TOKEN" https://api.buildkite.com/v2/user ``` Generate an [access token](https://buildkite.com/user/api-access-tokens). ###### Managing API access tokens Learn more about Buildkite's API access tokens and how to manage them in [Managing API access tokens](/docs/apis/managing-api-tokens), which covers the following topics: - The [scopes](/docs/apis/managing-api-tokens#token-scopes) which can be assigned to API access tokens. - [Auditing](/docs/apis/managing-api-tokens#auditing-tokens) token usage. - [Removing](/docs/apis/managing-api-tokens#auditing-tokens-removing-an-organization-from-a-token) Buildkite organization access to tokens. - [Limiting](/docs/apis/managing-api-tokens#restricting-api-access-by-ip-address) a token's access by IP address. - A token's [lifecycle](/docs/apis/managing-api-tokens#api-token-lifecycle) characteristics. 
- Managing a token's [security](/docs/apis/managing-api-tokens#api-token-security), including [token rotation](/docs/apis/managing-api-tokens#api-token-security-rotation) and [GitHub's secret scanning program](/docs/apis/managing-api-tokens#api-token-security-github-secret-scanning-program). ###### Webhook authentication If you are implementing [Buildkite webhooks](#webhooks), all webhooks for [Pipelines](/docs/apis/webhooks/pipelines#http-headers) and [Package Registries](/docs/apis/webhooks/package-registries#http-headers) contain an `X-Buildkite-Token` header which allows you to verify the authenticity of the request. ##### REST API The Buildkite REST API aims to give you complete programmatic access and control of Buildkite to extend, integrate and automate anything to suit your particular needs. Using the Buildkite REST API is as easy as: 1. Ensuring you have generated an [API access token](/docs/apis/managing-api-tokens) with as many [scopes](/docs/apis/managing-api-tokens#token-scopes) as you require. 2. Making requests to https://api.buildkite.com using the token you generated in the `Authorization` header, for example: ```bash curl -H "Authorization: Bearer $TOKEN" https://api.buildkite.com/v2/user ``` Learn more about Buildkite's REST API in the [REST API overview](/docs/apis/rest-api). ##### GraphQL The Buildkite GraphQL API provides an alternative to the REST API. The GraphQL API allows for more efficient retrieval of data by enabling you to fetch multiple, nested resources in a single request. You can access the GraphQL API through the _GraphQL console_ (see the [GraphQL overview](/docs/apis/graphql-api) page > [Getting started](/docs/apis/graphql-api#getting-started) section for more information), as well as at the command line (see the [Console and CLI tutorial](/docs/apis/graphql/graphql-tutorial) page for more information). 
For command line access, you'll need a Buildkite [API access token](/docs/apis/managing-api-tokens) with the **Enable GraphQL API Access** permission selected. Learn more about: - Buildkite's GraphQL API in the [GraphQL API overview](/docs/apis/graphql-api) and [Console and CLI tutorial](/docs/apis/graphql/graphql-tutorial) pages. - The differences between Buildkite's REST and GraphQL APIs in [API differences](/docs/apis/api-differences). ###### Portals In the absence of configurable [scope](/docs/apis/managing-api-tokens#token-scopes) restrictions on API access tokens for the GraphQL API, the _portals_ feature provides a mechanism to restrict access to the Buildkite platform through the GraphQL API. Portals are GraphQL-based operations, which are stored by Buildkite, and are made accessible through authenticated URL endpoints. Learn more about the portals feature in [Portals](/docs/apis/graphql/portals). ##### MCP server Buildkite provides both remote and local [MCP servers](https://modelcontextprotocol.io/docs/learn/server-concepts), which provide your AI tools with access to Buildkite's REST API features. Learn more about the Buildkite MCP server from the [MCP server overview](/docs/apis/mcp-server) page, along with its configurable [tools](/docs/apis/mcp-server/tools#available-mcp-tools) and [toolsets](/docs/apis/mcp-server/tools/toolsets). ##### Webhooks Buildkite's webhooks allow your third-party applications and systems to monitor and respond to events within your Buildkite organization, providing a real time view of activity and allowing you to extend and integrate Buildkite into these systems. For Pipelines, webhooks can be [added and configured](/docs/apis/webhooks/pipelines#add-a-webhook) on your Buildkite organization's [**Notification Services** settings](https://buildkite.com/organizations/-/services) page. 
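Whichever product a webhook is configured for, each delivery for Pipelines and Package Registries carries the `X-Buildkite-Token` header described earlier, which your receiving endpoint can check before trusting the payload. A minimal verification sketch — the handler shape and token values are illustrative, with `secret` standing in for the token shown on the webhook's settings page:

```python
import hmac

def verify_webhook(headers: dict, secret: str) -> bool:
    """Check the X-Buildkite-Token header against the expected value.

    `headers` is whatever mapping your web framework exposes for
    request headers; `secret` comes from the webhook's settings page.
    """
    received = headers.get("X-Buildkite-Token", "")
    # A constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(received, secret)

# Illustrative values only:
print(verify_webhook({"X-Buildkite-Token": "s3cr3t"}, "s3cr3t"))  # True
print(verify_webhook({}, "s3cr3t"))                               # False
```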
For Test Engine and Package Registries, webhooks can be configured through their specific [test suites](/docs/apis/webhooks/test-engine) and [registries](/docs/apis/webhooks/package-registries#add-a-webhook), respectively. This section also documents how to configure incoming webhooks for the Buildkite platform, available through [pipeline triggers](/docs/apis/webhooks/incoming/pipeline-triggers). Learn more about Buildkite's webhooks from the [Webhooks overview](/docs/apis/webhooks) page. --- ### Managing API access tokens URL: https://buildkite.com/docs/apis/managing-api-tokens #### Managing API access tokens Buildkite API access tokens are issued to individual Buildkite user accounts, not Buildkite organizations. ##### Creating and editing API access tokens You can [create](#creating-and-editing-api-access-tokens-create-an-api-access-token) and [edit](#creating-and-editing-api-access-tokens-edit-an-existing-api-access-token) API access tokens through your **Personal Settings**. > 📘 > You'll need to be a member of a Buildkite organization to generate and use an API access token within that organization. This is especially important for contributors to public and open-source projects. > Once API access tokens have been created within a Buildkite organization, its administrators can use the [API Access Audit](#auditing-tokens) page to view and manage them. ###### Create an API access token To create a new API access token: 1. Select **Personal Settings** in the global navigation > [**API Access Tokens**](https://buildkite.com/user/api-access-tokens) to open its page. 1. Select **New API Access Token**. If prompted, enter your password in the **Confirm Password** field. 1. Enter an appropriate **Description** for your new API access token, and ensure **Token** is selected in **Credential Type**. 1. Ensure the appropriate Buildkite organization is selected in **Organization Access**.
This organization is the one that your API access token will have access to and operate within. **Note:** Your most recently used Buildkite organization is automatically selected from this list. 1. Select an appropriate **Token Expiry** duration. 1. Select from the appropriate **REST API Scopes** or **GraphQL API** permission, or both. Learn more about these in [Token scopes](#token-scopes). 1. To restrict which network addresses your new API access token can operate from, specify these addresses in the **Allowed IP Addresses** field, using [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). 1. Select **Create New API Access Token** to create the token, and enter your password again if prompted. **Note:** On the resulting page, don't forget to copy your new API access token's value now, as this is the last time you'll see this value. ###### Edit an existing API access token To edit an existing API access token: 1. Select **Personal Settings** in the global navigation > [**API Access Tokens**](https://buildkite.com/user/api-access-tokens) to open its page. 1. Select the API access token to edit from the list of existing ones on this page. 1. Edit the required fields as you would when you [created the API access token](#creating-and-editing-api-access-tokens-create-an-api-access-token), as well as its [token scopes](#token-scopes). 1. Select **Update API Access Token** to save your changes. ##### Token scopes When an [API access token is being created or edited](#creating-and-editing-api-access-tokens), define the required **REST API Scopes**, for which you select permissions (**READ**, **WRITE**, **DELETE**) for different Buildkite platform features that this token is granted access to. Each individual combination of these permissions and features is known as a _scope_. 
You can also select **Enable GraphQL API access** as an additional scope, noting that this is a full-access option that does not provide any further granular scopes/permission restrictions to Buildkite platform features. To restrict an API access token's scope to individual GraphQL API features, implement [GraphQL API portals](/docs/apis/graphql/portals). A token's **REST API Scopes** are organized by Buildkite platform feature categories and their individual features. See the relevant tables within this section for details of these features, along with the permission types (**READ**, **WRITE**, **DELETE**) that each of these features provides. For REST API scopes, you can use the following: - The **Search** feature allows you to filter the available Buildkite platform features. - The **Presets** feature allows you to select between all **Read only**, all **Read + Write**, or all **Full Access** (which includes **DELETE**) permissions across all of these Buildkite platform features, regardless of whether or not these features have been filtered using **Search**. Token scopes are also available to OAuth access tokens, which are issued by the Buildkite platform on behalf of your Buildkite user account for certain processes. However, when these processes occur, you can select a Buildkite organization you're a member of (which the OAuth token is granted access to), but the Buildkite platform itself defines the scopes for these tokens. ###### CLI OAuth token scopes When you authenticate with `bk auth login`, the [Buildkite CLI](/docs/platform/cli) requests all available REST API scopes by default. The Buildkite platform enforces server-side restrictions. The issued token only grants permissions that your Buildkite user account actually has. The `graphql` scope is excluded from this process due to its unscoped nature.
To restrict the scopes requested during OAuth login, use the `--scopes` flag: - `--scopes "read_only"` requests only `read_*` scopes (read-only access). - `--scopes "read_only write_builds"` combines the `read_only` group with an individual scope. - `--scopes "read_user read_organizations"` requests specific individual scopes. For organizations that enforce the principle of least privilege, use `--scopes` to issue tokens with only the minimum scopes required. Learn more about the `--scopes` flag in the [`bk auth login` reference](/docs/platform/cli/reference/auth#login-auth). A token's REST API scopes are granular, and you can select some or all of the following Buildkite platform features and their scopes. ###### CI/CD Feature and scopes | Description | Read | Write | Delete ------ | ----------- | ---- | ----- | ------ ###### Organization and users Feature and scopes | Description | Read | Write | Delete ------ | ----------- | ---- | ----- | ------ ###### Security Feature and scopes | Description | Read | Write | Delete ------ | ----------- | ---- | ----- | ------ ###### Test Engine Feature and scopes | Description | Read | Write | Delete ------ | ----------- | ---- | ----- | ------ ###### Packages Feature and scopes | Description | Read | Write | Delete ------ | ----------- | ---- | ----- | ------ ###### Portals Feature and scopes | Description | Read | Write | Delete ------ | ----------- | ---- | ----- | ------ ##### Auditing tokens Viewing the [**API Access Audit** page](https://buildkite.com/organizations/~/api-access-audit) requires Buildkite organization administrator privileges. You can access this page by selecting **Settings** in the global navigation > **API Access Audit** within the **Audit** section. All API access tokens that users within your Buildkite organization have created, and which currently have access to your organization's data, are listed. The table includes the scopes of each token, how long ago they were created, and how long since they've been used.
From the **API Access Audit** page, navigate through to any token to see more detailed information about its scopes and the most recent request, where you can also [remove a token's access to your Buildkite organization's data](#auditing-tokens-removing-an-organization-from-a-token) if required. The list of tokens can be filtered by username, scopes, IP address, or whether the user has admin privileges. ###### Removing an organization from a token If you have old API access tokens that should no longer be used, or need to prevent such a token from performing further actions, Buildkite organization administrators can remove the token's access to organization data. From the [**API Access Audit** page](#auditing-tokens), find the API token whose access you want to remove and select it. You can search for tokens using usernames, token scopes, full IP addresses, admin privileges, or the value of the token itself. Scroll to the end of the specific token's page, then select **Remove Organization from Token**. Removing access from a token sends a notification email to the token's owner, who cannot re-add your organization to the token's scope. ##### Restricting API access by IP address > 📘 Enterprise plan feature > Restricting API access by IP address is only available to Buildkite customers on the [Enterprise](https://buildkite.com/pricing) plan. If you'd like to limit an API token's access to your organization by IP address, you can create an allowlist of IP addresses in the [organization's API security settings](https://buildkite.com/organizations/~/security/api). You can specify multiple IP addresses, separated by individual spaces, as well as [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) to specify a range of IP addresses. You can also manage the allowlist with the [`organizationApiIpAllowlistUpdate`](/docs/apis/graphql/schemas/mutation/organizationapiipallowlistupdate) mutation in the GraphQL API. 
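The allowlist semantics — a space-separated set of addresses and CIDR ranges, where a request is authorized only if its source IP falls within at least one entry — can be sketched with Python's standard `ipaddress` module. The entries below are example values, not Buildkite defaults:

```python
import ipaddress

def ip_allowed(client_ip: str, allowlist: str) -> bool:
    """Return True if client_ip falls within any allowlist entry.

    `allowlist` mirrors the settings field: addresses and CIDR ranges
    separated by individual spaces, e.g. "202.144.0.0/24 10.0.0.5".
    """
    addr = ipaddress.ip_address(client_ip)
    # A bare address like "10.0.0.5" parses as a /32 network.
    return any(
        addr in ipaddress.ip_network(entry, strict=False)
        for entry in allowlist.split()
    )

# Example values only:
print(ip_allowed("202.144.0.7", "202.144.0.0/24 10.0.0.5"))  # True
print(ip_allowed("8.8.8.8", "202.144.0.0/24 10.0.0.5"))      # False
```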
##### Inactive API tokens revocation > 📘 Enterprise plan feature > Revoking inactive API tokens automatically is only available to Buildkite customers on the [Enterprise](https://buildkite.com/pricing) plan. To enable the inactive API access tokens revocation feature, navigate to your [organization's API security settings](https://buildkite.com/organizations/~/security/api) and specify the maximum timeframe for inactive tokens to remain valid. An _inactive API access token_ refers to one that has not been used within the specified duration. When an API access token's inactivity exceeds this timeframe, Buildkite automatically revokes the token's access to your organization. Upon token revocation, Buildkite will notify the owner of their change in access. ##### Programmatically managing tokens The `access-token` REST API endpoint can be used to retrieve or revoke an API access token. See the [REST API access token](/docs/apis/rest-api/access-token) page for further information. ##### API token lifecycle Buildkite's API access tokens have the following lifecycle characteristics: - API access tokens are issued for users within a Buildkite organization. The tokens are stored in the Buildkite database (linked to the user ID) and by the user for which they're issued. - The tokens are associated with a specific user and can only be revoked by that user. Buildkite organization administrators can remove a user from an organization, which prevents the user from accessing any organization resources and pipelines, and prevents access using any API access token associated with that user. ##### API token security This section explains risk mitigation strategies which you can implement, and others which are in place, to prevent your Buildkite API access tokens from being compromised. ###### Rotation Buildkite's API access tokens have no built-in expiration date.
[OWASP's](https://cheatsheetseries.owasp.org/cheatsheets/Cryptographic_Storage_Cheat_Sheet.html#key-lifetimes-and-rotation) best practices for regular credential rotation suggest rotating tokens at least once a year. In case of a security compromise or breach, it is strongly recommended that the old tokens are [invalidated](/docs/apis/managing-api-tokens#auditing-tokens-removing-an-organization-from-a-token) or inactive ones [revoked](#inactive-api-tokens-revocation), and new tokens are issued. The [API Access Tokens page](https://buildkite.com/user/api-access-tokens) has a _Duplicate_ button that can be used to create a new token with the same permissions as the existing token. ###### GitHub secret scanning program Learn more about this program in [Token security](/docs/platform/security/tokens). ##### FAQs ###### Can I view an existing token? No, you can change the scope and description of a token, or revoke it, but you can't view the actual token after creating it. ###### Can I re-add my organization to a token? No. If an organization has revoked a token, it cannot be re-added to the token. The token owner would have to create a new token with access to your organization. ###### Can I delete a token? Yes. If you need to delete a token entirely, you can use the [REST API `access-token` endpoint](/docs/apis/rest-api/access-token#revoke-the-current-token). You will need to know the full token value. If you own the token, you can revoke your token from the [API access token page](https://buildkite.com/user/api-access-tokens) in your Personal Settings. ###### What happens if I remove the access for a token that's currently in use? The token will lose access to the organization data. Any future API requests will no longer successfully authorize. ###### Does restricting API access by IP address apply to the remote MCP server? Yes.
Although the [Buildkite remote MCP server](/docs/apis/mcp-server/remote/configuring-ai-tools) makes API calls from Buildkite's infrastructure, these requests are still subject to your organization's IP allowlist for API token access. [Agent token]: /docs/agent/self-hosted/tokens --- ### API differences URL: https://buildkite.com/docs/apis/api-differences #### API differences between REST and GraphQL Buildkite provides both a [REST API](/docs/apis/rest-api) and [GraphQL API](/docs/apis/graphql-api), but there are some differences between the two. Some tasks can only be achieved using the GraphQL API or the REST API. For example, the REST API is a good choice for organization-level tasks and supports granular access permissions, while the GraphQL API is more comprehensive and often better suited to tasks a single user would perform in the Buildkite UI. We recommend using a mixture of both when required. The strengths of the GraphQL API are in complex data queries, and the strengths of the REST API are in creating and modifying records. On this page, we've collected the known limitations where some API features are only available with either REST or GraphQL. ##### Features only available in the REST API - [Granular access permissions](/docs/apis/managing-api-tokens#token-scopes). - [Display information about the access token currently in use](/docs/apis/rest-api/access-token#get-the-current-token). - [Revoke the current access token](/docs/apis/rest-api/access-token#revoke-the-current-token). - [Delete annotations on a build](/docs/apis/rest-api/annotations#delete-an-annotation-on-a-build). - [Get the `group_key` field for jobs that belong to group steps](/docs/apis/rest-api/builds#get-a-build) (only available through builds endpoints, not individual job endpoints). - [Get the output of job logs](/docs/apis/rest-api/jobs#get-a-jobs-log-output). - [Retry data for jobs](/docs/apis/rest-api/jobs#retry-a-job).
- [Get a list of IPs from which Buildkite sends webhooks](/docs/apis/rest-api/meta#get-meta-information). - [Set provider properties](/docs/apis/rest-api/pipelines#provider-settings-properties). The `provider_settings` properties configure how a pipeline is triggered based on the source code provider's events, and can be set alongside the other pipeline inputs when creating a pipeline. ##### Features only available in the GraphQL API - [Get a list of agent token IDs (agent tokens are currently only available via GraphQL)](/docs/apis/graphql/cookbooks/agents#get-a-list-of-unclustered-agent-token-ids). - [Get all environment variables set on a build](/docs/apis/graphql/cookbooks/builds#get-environment-variables-set-on-a-build). - [Increase the next build number](/docs/apis/graphql/cookbooks/builds#increase-the-next-build-number). - [Get build info by ID](/docs/apis/graphql/cookbooks/builds#get-build-info-by-id). - [Get all jobs in a given queue for a given timeframe](/docs/apis/graphql/cookbooks/jobs#get-all-jobs-in-a-given-queue-for-a-given-timeframe). - [Get all jobs in a particular concurrency group](/docs/apis/graphql/cookbooks/jobs#get-all-jobs-in-a-particular-concurrency-group). - List job events. - [Cancel a job](/docs/apis/graphql/schemas/mutation/jobtypecommandcancel). - [Remove users from an organization](/docs/apis/graphql/cookbooks/organizations#delete-an-organization-member). - [Set up SSO](/docs/platform/sso/sso-setup-with-graphql). - [Get all the pipeline metrics shown on the dashboard](/docs/apis/graphql/cookbooks/pipelines#get-pipeline-metrics). - [Get the creation date of the most recent build in every pipeline](/docs/apis/graphql/cookbooks/builds#get-the-creation-date-of-the-most-recent-build-in-every-pipeline). - [Count the number of builds on a branch](/docs/apis/graphql/cookbooks/builds#count-the-number-of-builds-on-a-branch). - Filter results from pipeline listings. - Create and manage pipeline schedules.
- [Invite a user into a specific team with a specific role and permissions set](/docs/apis/graphql/cookbooks/organizations#create-a-user-add-them-to-a-team-and-set-user-permissions). --- ### Overview URL: https://buildkite.com/docs/apis/rest-api #### REST API overview The Buildkite REST API aims to give you complete programmatic access and control of Buildkite to extend, integrate and automate anything to suit your particular needs. The current version of the Buildkite API is v2. For the list of existing disparities between the REST API and the GraphQL API, see [API differences](/docs/apis/api-differences). ##### Schema All API access is over HTTPS, and accessed from the `api.buildkite.com` domain. All data is sent as JSON. The following `curl` command: ```bash curl https://api.buildkite.com ``` Generates a response like: ```json {"message":"🛠","timestamp":1719276157} ``` where the `timestamp` value is the current [Unix time](https://en.wikipedia.org/wiki/Unix_time) value. ##### Endpoints This section lists all the available endpoints organized by resource type. Each endpoint includes its HTTP method, path structure, and links to detailed documentation with request and response examples and additional relevant information. 
###### Organizations Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations` | [List organizations](/docs/apis/rest-api/organizations#list-organizations) GET | `/v2/organizations/{org.slug}` | [Get an organization](/docs/apis/rest-api/organizations#get-an-organization) ###### Organization members Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/members` | [List organization members](/docs/apis/rest-api/organizations/members#list-organization-members) GET | `/v2/organizations/{org.slug}/members/{user.uuid}` | [Get an organization member](/docs/apis/rest-api/organizations/members#get-an-organization-member) ###### Pipelines Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/pipelines` | [List pipelines](/docs/apis/rest-api/pipelines#list-pipelines) GET | `/v2/organizations/{org.slug}/pipelines/{slug}` | [Get a pipeline](/docs/apis/rest-api/pipelines#get-a-pipeline) POST | `/v2/organizations/{org.slug}/pipelines` | [Create a pipeline](/docs/apis/rest-api/pipelines#create-a-yaml-pipeline) PATCH | `/v2/organizations/{org.slug}/pipelines/{slug}` | [Update a pipeline](/docs/apis/rest-api/pipelines#update-a-pipeline) DELETE | `/v2/organizations/{org.slug}/pipelines/{slug}` | [Delete a pipeline](/docs/apis/rest-api/pipelines#delete-a-pipeline) POST | `/v2/organizations/{org.slug}/pipelines/{slug}/archive` | [Archive a pipeline](/docs/apis/rest-api/pipelines#archive-a-pipeline) POST | `/v2/organizations/{org.slug}/pipelines/{slug}/unarchive` | [Unarchive a pipeline](/docs/apis/rest-api/pipelines#unarchive-a-pipeline) POST | `/v2/organizations/{org.slug}/pipelines/{slug}/webhook` | [Add a webhook](/docs/apis/rest-api/pipelines#add-a-webhook) ###### Builds Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/builds` | [List all builds](/docs/apis/rest-api/builds#list-all-builds) GET | `/v2/organizations/{org.slug}/builds` | 
[List builds for an organization](/docs/apis/rest-api/builds#list-builds-for-an-organization) GET | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds` | [List builds for a pipeline](/docs/apis/rest-api/builds#list-builds-for-a-pipeline) GET | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}` | [Get a build](/docs/apis/rest-api/builds#get-a-build) POST | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds` | [Create a build](/docs/apis/rest-api/builds#create-a-build) PUT | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/cancel` | [Cancel a build](/docs/apis/rest-api/builds#cancel-a-build) PUT | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/rebuild` | [Rebuild a build](/docs/apis/rest-api/builds#rebuild-a-build) ###### Jobs Method | Endpoint | Description ------ | -------- | ----------- PUT | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/jobs/{job.id}/retry` | [Retry a job](/docs/apis/rest-api/jobs#retry-a-job) PUT | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/jobs/{job.id}/reprioritize` | [Reprioritize a job](/docs/apis/rest-api/jobs#reprioritize-a-job) PUT | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/jobs/{job.id}/unblock` | [Unblock a job](/docs/apis/rest-api/jobs#unblock-a-job) GET | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/jobs/{job.id}/log` | [Get a job's log](/docs/apis/rest-api/jobs#get-a-jobs-log-output) DELETE | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/jobs/{job.id}/log` | [Delete a job's log](/docs/apis/rest-api/jobs#delete-a-jobs-log-output) GET | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/jobs/{job.id}/env` | [Get a job's environment](/docs/apis/rest-api/jobs#get-a-jobs-environment-variables) ###### Annotations Method | Endpoint | Description ------ | -------- | ----------- GET | 
`/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/annotations` | [List annotations](/docs/apis/rest-api/annotations#list-annotations-for-a-build) POST | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/annotations` | [Create an annotation](/docs/apis/rest-api/annotations#create-an-annotation-on-a-build) DELETE | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/annotations/{uuid}` | [Delete an annotation](/docs/apis/rest-api/annotations#delete-an-annotation-on-a-build) ###### Artifacts Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/artifacts` | [List artifacts for a build](/docs/apis/rest-api/artifacts#list-artifacts-for-a-build) GET | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/jobs/{job.id}/artifacts` | [List artifacts for a job](/docs/apis/rest-api/artifacts#list-artifacts-for-a-job) GET | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/jobs/{job.id}/artifacts/{id}` | [Get an artifact](/docs/apis/rest-api/artifacts#get-an-artifact) GET | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/jobs/{job.id}/artifacts/{id}/download` | [Download an artifact](/docs/apis/rest-api/artifacts#download-an-artifact) DELETE | `/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/jobs/{job.id}/artifacts/{id}` | [Delete an artifact](/docs/apis/rest-api/artifacts#delete-an-artifact) ###### Agents Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/agents` | [List agents](/docs/apis/rest-api/agents#list-agents) GET | `/v2/organizations/{org.slug}/agents/{id}` | [Get an agent](/docs/apis/rest-api/agents#get-an-agent) PUT | `/v2/organizations/{org.slug}/agents/{id}/stop` | [Stop an agent](/docs/apis/rest-api/agents#stop-an-agent) PUT | `/v2/organizations/{org.slug}/agents/{id}/pause` | 
[Pause an agent](/docs/apis/rest-api/agents#pause-an-agent) PUT | `/v2/organizations/{org.slug}/agents/{id}/resume` | [Resume an agent](/docs/apis/rest-api/agents#resume-an-agent) ###### Teams Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/teams` | [List teams](/docs/apis/rest-api/teams#list-teams) GET | `/v2/organizations/{org.slug}/teams/{team.uuid}` | [Get a team](/docs/apis/rest-api/teams#get-a-team) POST | `/v2/organizations/{org.slug}/teams` | [Create a team](/docs/apis/rest-api/teams#create-a-team) PATCH | `/v2/organizations/{org.slug}/teams/{team.uuid}` | [Update a team](/docs/apis/rest-api/teams#update-a-team) DELETE | `/v2/organizations/{org.slug}/teams/{team.uuid}` | [Delete a team](/docs/apis/rest-api/teams#delete-a-team) ###### Team members Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/teams/{team.uuid}/members` | [List team members](/docs/apis/rest-api/teams/members#list-team-members) GET | `/v2/organizations/{org.slug}/teams/{team.uuid}/members/{user.uuid}` | [Get a team member](/docs/apis/rest-api/teams/members#get-a-team-member) POST | `/v2/organizations/{org.slug}/teams/{team.uuid}/members` | [Create a team member](/docs/apis/rest-api/teams/members#create-a-team-member) PATCH | `/v2/organizations/{org.slug}/teams/{team.uuid}/members/{user.uuid}` | [Update a team member](/docs/apis/rest-api/teams/members#update-a-team-member) DELETE | `/v2/organizations/{org.slug}/teams/{team.uuid}/members/{user.uuid}` | [Delete a team member](/docs/apis/rest-api/teams/members#delete-a-team-member) ###### Team pipelines Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/teams/{team.uuid}/pipelines` | [List team pipelines](/docs/apis/rest-api/teams/pipelines#list-team-pipelines) GET | `/v2/organizations/{org.slug}/teams/{team.uuid}/pipelines/{uuid}` | [Get a team 
pipeline](/docs/apis/rest-api/teams/pipelines#get-a-team-pipeline) POST | `/v2/organizations/{org.slug}/teams/{team.uuid}/pipelines` | [Create a team pipeline](/docs/apis/rest-api/teams/pipelines#create-a-team-pipeline) PATCH | `/v2/organizations/{org.slug}/teams/{team.uuid}/pipelines/{uuid}` | [Update a team pipeline](/docs/apis/rest-api/teams/pipelines#update-a-team-pipeline) DELETE | `/v2/organizations/{org.slug}/teams/{team.uuid}/pipelines/{uuid}` | [Delete a team pipeline](/docs/apis/rest-api/teams/pipelines#delete-a-team-pipeline) ###### Team suites Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/teams/{team.uuid}/suites` | [List team suites](/docs/apis/rest-api/teams/suites#list-team-suites) GET | `/v2/organizations/{org.slug}/teams/{team.uuid}/suites/{uuid}` | [Get a team suite](/docs/apis/rest-api/teams/suites#get-a-team-suite) POST | `/v2/organizations/{org.slug}/teams/{team.uuid}/suites` | [Create a team suite](/docs/apis/rest-api/teams/suites#create-a-team-suite) PATCH | `/v2/organizations/{org.slug}/teams/{team.uuid}/suites/{uuid}` | [Update a team suite](/docs/apis/rest-api/teams/suites#update-a-team-suite) DELETE | `/v2/organizations/{org.slug}/teams/{team.uuid}/suites/{uuid}` | [Delete a team suite](/docs/apis/rest-api/teams/suites#delete-a-team-suite) ###### Clusters Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/clusters` | [List clusters](/docs/apis/rest-api/clusters#clusters-list-clusters) GET | `/v2/organizations/{org.slug}/clusters/{id}` | [Get a cluster](/docs/apis/rest-api/clusters#clusters-get-a-cluster) POST | `/v2/organizations/{org.slug}/clusters` | [Create a cluster](/docs/apis/rest-api/clusters#clusters-create-a-cluster) PUT | `/v2/organizations/{org.slug}/clusters/{id}` | [Update a cluster](/docs/apis/rest-api/clusters#clusters-update-a-cluster) DELETE | `/v2/organizations/{org.slug}/clusters/{id}` | [Delete a 
cluster](/docs/apis/rest-api/clusters#clusters-delete-a-cluster) ###### Queues Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/clusters/{cluster.id}/queues` | [List queues](/docs/apis/rest-api/clusters/queues#list-queues) GET | `/v2/organizations/{org.slug}/clusters/{cluster.id}/queues/{id}` | [Get a queue](/docs/apis/rest-api/clusters/queues#get-a-queue) POST | `/v2/organizations/{org.slug}/clusters/{cluster.id}/queues` | [Create a self-hosted queue](/docs/apis/rest-api/clusters/queues#create-a-self-hosted-queue) POST | `/v2/organizations/{org.slug}/clusters/{cluster.id}/queues` | [Create a Buildkite hosted queue](/docs/apis/rest-api/clusters/queues#create-a-buildkite-hosted-queue) PUT | `/v2/organizations/{org.slug}/clusters/{cluster.id}/queues/{id}` | [Update a queue](/docs/apis/rest-api/clusters/queues#update-a-queue) DELETE | `/v2/organizations/{org.slug}/clusters/{cluster.id}/queues/{id}` | [Delete a queue](/docs/apis/rest-api/clusters/queues#delete-a-queue) POST | `/v2/organizations/{org.slug}/clusters/{cluster.id}/queues/{id}/pause_dispatch` | [Pause a queue](/docs/apis/rest-api/clusters/queues#pause-a-queue) POST | `/v2/organizations/{org.slug}/clusters/{cluster.id}/queues/{id}/resume_dispatch` | [Resume a paused queue](/docs/apis/rest-api/clusters/queues#resume-a-paused-queue) ###### Agent tokens Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens` | [List agent tokens](/docs/apis/rest-api/clusters/agent-tokens#list-tokens) GET | `/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens/{id}` | [Get an agent token](/docs/apis/rest-api/clusters/agent-tokens#get-a-token) POST | `/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens` | [Create an agent token](/docs/apis/rest-api/clusters/agent-tokens#create-a-token) PUT | `/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens/{id}` | [Update an agent 
token](/docs/apis/rest-api/clusters/agent-tokens#update-a-token) DELETE | `/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens/{id}` | [Revoke an agent token](/docs/apis/rest-api/clusters/agent-tokens#revoke-a-token) ###### Pipeline templates Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/pipeline-templates` | [List pipeline templates](/docs/apis/rest-api/pipeline-templates#list-pipeline-templates) GET | `/v2/organizations/{org.slug}/pipeline-templates/{uuid}` | [Get a pipeline template](/docs/apis/rest-api/pipeline-templates#get-a-pipeline-template) POST | `/v2/organizations/{org.slug}/pipeline-templates` | [Create a pipeline template](/docs/apis/rest-api/pipeline-templates#create-a-pipeline-template) PATCH | `/v2/organizations/{org.slug}/pipeline-templates/{uuid}` | [Update a pipeline template](/docs/apis/rest-api/pipeline-templates#update-a-pipeline-template) DELETE | `/v2/organizations/{org.slug}/pipeline-templates/{uuid}` | [Delete a pipeline template](/docs/apis/rest-api/pipeline-templates#delete-a-pipeline-template) ###### Rules Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/rules` | [List rules](/docs/apis/rest-api/rules#rules-list-rules) GET | `/v2/organizations/{org.slug}/rules/{uuid}` | [Get a rule](/docs/apis/rest-api/rules#rules-get-a-rule) POST | `/v2/organizations/{org.slug}/rules` | [Create a rule](/docs/apis/rest-api/rules#rules-create-a-rule) DELETE | `/v2/organizations/{org.slug}/rules/{uuid}` | [Delete a rule](/docs/apis/rest-api/rules#rules-delete-a-rule) ###### Emojis Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/organizations/{org.slug}/emojis` | [List emojis](/docs/apis/rest-api/emojis#list-emojis) ###### User Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/user` | [Get current user](/docs/apis/rest-api/user#get-the-current-user) ###### Access token Method | Endpoint | 
Description ------ | -------- | ----------- GET | `/v2/access-token` | [Get current token](/docs/apis/rest-api/access-token#get-the-current-token) DELETE | `/v2/access-token` | [Revoke current token](/docs/apis/rest-api/access-token#revoke-the-current-token) ###### Meta Method | Endpoint | Description ------ | -------- | ----------- GET | `/v2/meta` | [Get meta information](/docs/apis/rest-api/meta#get-meta-information) ##### Query string parameters Some API endpoints accept query string parameters, which are appended to the end of the URL. For example, the [builds listing APIs](/docs/apis/rest-api/builds#list-all-builds) can be filtered by `state` using the following `curl` command: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/my-org/pipelines/my-pipeline/builds?state=passed" ``` ##### Request body properties Some API requests accept JSON request bodies for specifying data. For example, the [build create API](/docs/apis/rest-api/builds#create-a-build) can be passed the required properties using the following `curl` command: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/my-org/pipelines/my-pipeline/builds" \ -H "Content-Type: application/json" \ -d '{ "key": "value" }' ``` The data encoding is assumed to be `application/json`. Unless explicitly stated, you cannot encode properties as `application/x-www-form-urlencoded` or `multipart/form-data`. ##### Authentication You can authenticate with the Buildkite API using access tokens, represented by the value `$TOKEN` throughout this documentation. API access tokens authenticate calls to the API and can be created from the [API access tokens](https://buildkite.com/user/api-access-tokens) page. 
When configuring API access tokens, you can limit their access to individual organizations and permissions, and these tokens can be revoked at any time from the web interface [or the REST API](/docs/apis/rest-api/access-token#revoke-the-current-token). To authenticate an API call using an access token, set the `Authorization` HTTP header to the word `Bearer`, followed by a space, followed by the access token. For example: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/user" ``` API access using basic HTTP authentication is not supported. ###### Public key > 📘 This feature is currently available in preview. API access tokens can be created with a public key pair instead of a static token. The private key can be used to sign [JWTs](https://datatracker.ietf.org/doc/html/rfc7519) to authenticate API calls. The JWT must use the API access token's UUID as its `iss` claim, have an `iat` within 10 seconds of the current time, and have an `exp` within 5 minutes of the `iat`. For example, in Ruby, where `private_key.pem` contains the private key corresponding to an access token's public key and `$UUID` is the UUID of the access token: ```ruby require "net/http" require "openssl" require "jwt" # https://rubygems.org/gems/jwt claims = { "iss" => "$UUID", "iat" => Time.now.to_i - 5, "exp" => Time.now.to_i + 60, } private_key = OpenSSL::PKey::RSA.new(File.read("private_key.pem")) jwt = JWT.encode(claims, private_key, "RS256") Net::HTTP.get(URI("https://api.buildkite.com/v2/access-token"), "Authorization" => "Bearer #{jwt}") ``` ##### Pagination For endpoints that support pagination, pagination information can be found in the `Link` HTTP response header, which contains zero or more of the `next`, `prev`, `first`, and `last` relations. ```bash curl -i -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds" ``` ``` HTTP/1.1 200 OK ... 
Link: <https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds?page=2>; rel="next", <https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds?page=10>; rel="last" ``` You can set the page using the following query string parameters: | `page` | The page of results to return. _Default:_ `1` | `per_page` | How many results to return per page. _Default:_ `30` _Maximum:_ `100` ##### CORS headers API responses include the following [CORS headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS) allowing you to use the API directly from the browser: * `Access-Control-Allow-Origin: *` * `Access-Control-Expose-Headers: Link` For an example of this in use, see the [Emojis API example on CodePen](https://codepen.io/dannymidnight/pen/jOpJpmY) for adding emoji support to your own browser-based dashboards and build screens. ##### Migrating from v1 to v2 The following changes were made in v2 of our API: * `POST /v1/organizations/{org.slug}/agents` has been removed * `DELETE /v1/organizations/{org.slug}/agents/{id}` has been removed * All project-related properties in JSON responses and requests have been renamed to `pipeline` * The `featured_build` pipeline property has been removed * The deprecated `/accounts` URL has been removed * URLs containing `/projects` have been renamed to `/pipelines` ##### Clients To make getting started easier, check out these clients available from our contributors: * [Buildkit](https://github.com/Shopify/buildkit) for [Ruby](https://www.ruby-lang.org) * [go-buildkite](https://github.com/buildkite/go-buildkite) for [Go](https://golang.org) * [PSBuildkite](https://github.com/felixfbecker/PSBuildkite) for [PowerShell](https://microsoft.com/powershell) * [pybuildkite](https://github.com/pyasi/pybuildkite) for [Python](https://www.python.org/) * [buildkite-php](https://github.com/bbaga/buildkite-php) for [PHP](https://www.php.net/) * [buildkite-swift](https://github.com/aaronsky/buildkite-swift) for [Swift](https://swift.org) * [buildkite-api-client](https://github.com/SourceLabOrg/Buildkite-Api-Client) for [Java](https://www.java.com) --- ### Rate limits URL: 
https://buildkite.com/docs/apis/rest-api/rate-limits #### REST API rate limits To ensure stability and prevent excessive or abusive calls to the server, Buildkite imposes limits on the number of REST API requests that can be made within a minute. These limits apply to the Pipelines REST API as well as the Analytics REST API. The REST API enforces two rate limits, and a request is rejected if either is exceeded: - An [organization-level limit](#organization-rate-limit) shared across all users in the organization. - A [per-user limit](#per-user-rate-limits). The default per-user limit is 50 requests per minute. ##### Organization rate limit Buildkite imposes a rate limit of 200 requests per minute for each organization. This is the cumulative limit of all API requests made by users in an organization. > 📘 Buildkite MCP server requests > Requests to the Buildkite REST API made through the [Buildkite MCP server](/docs/apis/mcp-server) are handled differently based on whether you're using the [remote](/docs/apis/mcp-server#types-of-mcp-servers-remote-mcp-server) or [local](/docs/apis/mcp-server#types-of-mcp-servers-local-mcp-server) MCP server. > The remote MCP server's requests are tracked through a separate per-user rate limit, which _does not_ count towards your Buildkite organization's REST API limit. See [Remote MCP server rate limits](/docs/apis/mcp-server/remote/rate-limits) for details. > The local MCP server's requests, however, _do_ count towards your Buildkite organization's REST API limit. ##### Per-user rate limits In addition to the organization-level limit, the REST API enforces a per-user rate limit on requests. This limit prevents a single user from consuming the entire organization's API quota. The per-user limit is evaluated for the authenticated user associated with the API access token. The default per-user limit is 50 requests per minute. A request counts towards both the per-user limit and the [organization-level limit](#organization-rate-limit). 
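Both limits can be honored client-side by reading the rate limit response headers described on this page and backing off when a request is rejected. The following is a minimal sketch in Python, not an official client: the header names and the `429` status code come from this documentation, while the `do_request` callable and the retry policy are illustrative assumptions:

```python
import time

def request_with_backoff(do_request, max_attempts=3):
    """Call do_request() until it succeeds or attempts run out.

    do_request is assumed to return (status, headers, body).
    On a 429, sleep until the relevant window resets, using the
    reset values the server reports in its response headers.
    """
    for attempt in range(max_attempts):
        status, headers, body = do_request()
        if status != 429:
            return status, headers, body
        # Either limit may have been exceeded; wait for whichever
        # reset is longer (per-user or organization-wide).
        wait = max(
            int(headers.get("RateLimit-User-Reset", 0)),
            int(headers.get("RateLimit-Reset", 0)),
        )
        time.sleep(wait)
    return status, headers, body
```

A production client would also watch `RateLimit-User-Remaining` and `RateLimit-Remaining` proactively and slow down before hitting a `429`, rather than only reacting to rejections.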
The request is rejected with a `429` status code if either limit is exceeded. Check the `RateLimit-User-Remaining` response header to monitor your per-user quota. See [Exceeding the rate limit](#exceeding-the-rate-limit) for details. Organization administrators can view the per-user limits that apply to their organization on the [**Service Quotas**](https://buildkite.com/organizations/~/quotas) page, accessible from **Settings** > **Quotas** in the Buildkite interface. ##### Checking rate limit details Every API response includes two independent sets of rate limit headers: one for the [organization-level limit](#organization-rate-limit) and one for the [per-user limit](#per-user-rate-limits). You can monitor both limits independently and determine which one your application is closer to reaching. The `RateLimit-*` headers track the organization's shared quota, while the `RateLimit-User-*` headers track the quota for the authenticated user making the request. A `429` response is returned if either limit is exceeded. Organization-level headers: - `RateLimit-Scope`: The scope of the organization-level rate limit. Set to `rest`. - `RateLimit-Remaining`: The remaining requests within the current organization time window. - `RateLimit-Limit`: The organization rate limit. - `RateLimit-Reset`: The number of seconds remaining until the organization time window resets. Per-user headers: - `RateLimit-User-Scope`: The scope of the per-user rate limit. Set to `rest_user`. - `RateLimit-User-Remaining`: The remaining requests for the authenticated user within the current time window. - `RateLimit-User-Limit`: The per-user rate limit. - `RateLimit-User-Reset`: The number of seconds remaining until the per-user time window resets. For example, the following response headers show an authenticated user with 35 requests remaining against their per-user limit of 50. 
The organization has 80 requests remaining against its limit of 200, reflecting usage from multiple users across the organization: ``` RateLimit-User-Scope: rest_user RateLimit-User-Remaining: 35 RateLimit-User-Limit: 50 RateLimit-User-Reset: 42 RateLimit-Scope: rest RateLimit-Remaining: 80 RateLimit-Limit: 200 RateLimit-Reset: 42 ``` ###### Using the rate limit API You can also programmatically query your organization's rate limit status using the dedicated rate limit endpoint. See the [rate limit endpoint documentation](/docs/apis/rest-api/organizations/rate-limits) for details on retrieving comprehensive rate limit information for both REST API and GraphQL API usage. ##### Exceeding the rate limit Once the rate limit is exceeded, subsequent API requests return a `429` HTTP status code. You should not make any further requests until the relevant `RateLimit-Reset` or `RateLimit-User-Reset` header specifies a new availability window. The `429` response body includes additional context about which limit was exceeded: ```json { "message": "You have exceeded your API rate limit. Please wait 42 seconds before making more requests.", "scope": "rest_user", "limit": 50, "current": 55, "reset": 42 } ``` The `scope` field indicates which limit was exceeded. For example, `rest` for the organization limit or `rest_user` for the per-user limit. ##### Best practices to avoid rate limits To ensure the smooth functioning and efficient use of the API, design your client application with the following best practices in mind: - Implement appropriate pagination techniques when querying data. - Use caching strategies to avoid excessive calls to the Buildkite API. - Regulate the rate of your requests to ensure smoother distribution by using strategies such as queues or scheduling API calls at appropriate intervals. - Use metadata about your API usage, including rate limit status, to manage behavior dynamically. 
- Consider all users making requests across your organization when designing your rate-limiting solution. - Be aware of retries, errors, and loops when designing your application, as they can easily accumulate and use up allocated quotas. --- ### Access token URL: https://buildkite.com/docs/apis/rest-api/access-token #### Access token API The access token API endpoint allows you to inspect and revoke an API access token. This can be useful if you find a token, can't identify its owner, and want to revoke it. > 📘 > All the endpoints expect the token to be provided using the [Authorization HTTP header](/docs/apis/rest-api#authentication). ##### Get the current token Returns details about the API access token that was used to authenticate the request. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/access-token" ``` ```json { "uuid": "b63254c0-3271-4a98-8270-7cfbd6c2f14e", "scopes": ["read_build"], "description": "Development Token", "created_at": "2025-07-16 06:07:42 UTC", "user": { "email": "algernon.m@buildkite.com", "name": "Algernon Moncrieff" } } ``` Required scope: none Success response: `200 OK` ##### Revoke the current token Revokes the API access token that was used to authenticate the request. Once revoked, the token can no longer be used for further requests. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/access-token" ``` Required scope: none Success response: `204 No Content` --- ### Emojis URL: https://buildkite.com/docs/apis/rest-api/emojis #### Emojis API Buildkite supports emojis (using the `\:emoji\:` syntax) in build step names and build log header groups. The Emojis API allows you to fetch the list of emojis for an organization so you can display emojis correctly in your own integrations. Emojis can be found in text using the pattern `/:([\w+-]+):/`. ##### List emojis Returns a list of all the emojis for a given organization, including custom emojis and aliases. 
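The shortcode pattern above can be applied directly to step names or log text. A short Python sketch (the sample step name below is made up for illustration):

```python
import re

# Pattern documented for Buildkite emoji shortcodes: /:([\w+-]+):/
EMOJI_PATTERN = re.compile(r":([\w+-]+):")

def find_emojis(text):
    """Return the emoji names referenced in a piece of text."""
    return EMOJI_PATTERN.findall(text)

# A hypothetical build step name using shortcode syntax.
print(find_emojis(":hammer: Build and :rocket: deploy"))  # ['hammer', 'rocket']
```

The names this yields can then be matched against the list returned by the endpoint below to resolve each shortcode (or one of its aliases) to an image URL.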
This list is not paginated. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/emojis" ``` ```json [ { "name": "rocket", "url": "https://buildkiteassets.com/emoji/unicode/1f680.png?v1" }, { "name": "shipit", "url": "https://buildkiteassets.com/emoji/shipit.png?v1", "aliases": [ "squirrel" ] }, { "name": "trollface", "url": "https://buildkiteassets.com/emoji/trollface.png?v1" }, { "name": "debian", "url": "https://buildkiteassets.com/emoji/debian.png?v1" }, { "name": "bundler", "url": "https://buildkiteassets.com/emoji/bundler.png?v1" }, { "name": "bugsnag", "url": "https://buildkiteassets.com/emoji/bugsnag.png?v1" } ] ``` Required scope: none Success response: `200 OK` --- ### Meta URL: https://buildkite.com/docs/apis/rest-api/meta #### Meta API The meta API endpoint provides information about Buildkite, including a list of Buildkite's IP addresses. It does not require authentication. ##### Get meta information Returns an object with properties describing Buildkite. `webhook_ips` is a list of IP addresses in CIDR notation that Buildkite uses to send outbound traffic such as webhooks and commit statuses. These are subject to change from time to time. We recommend checking for new addresses daily, and will try to advertise new addresses for at least 7 days before they are used. Note: The IP addresses shown here are examples. You must query the API to get the current set of IP addresses. ```bash curl "https://api.buildkite.com/v2/meta" ``` ```json { "webhook_ips": ["192.0.2.0/24", "198.51.100.12"] } ``` Success response: `200 OK` --- ### User URL: https://buildkite.com/docs/apis/rest-api/user #### User API The user API endpoint allows you to inspect details about the user account that owns the API token that is currently being used. ##### Get the current user Returns basic details about the user account that sent the request. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/user" ``` ```json { "id": "abc123-4567-8910-...", "graphql_id": "VXNlci0tLWU1N2ZiYTBmLWFiMTQtNGNjMC1iYjViLTY5NTc3NGZmYmZiZQ==", "name": "John Smith", "email": "john.smith@example.com", "avatar_url": "https://www.gravatar.com/avatar/abc123...", "created_at": "2012-03-04T06:07:08.910Z" } ``` Required scope: `read_user` Success response: `200 OK` --- ### Overview URL: https://buildkite.com/docs/apis/rest-api/organizations #### Organizations API The organizations API endpoint: - Allows you to list organizations and retrieve information about a Buildkite organization. - Forms the basis of several more Buildkite REST API endpoints, such as those for [pipelines](/docs/apis/rest-api/pipelines) and [teams](/docs/apis/rest-api/teams). ##### List organizations Returns a [paginated list](/docs/apis/rest-api#pagination) of organizations accessible by the user's access token. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations" ``` ```json [ { "id": "bb3125de-4dc9-44cf-ad18-65d2b71a5a34", "graphql_id": "T3JnYW5pemF0aW9uLS0tOGEzMjAwOTMtMjE4OC00MmNiLWI5ZGQtNzE4NjZjZTYyYjA4", "url": "https://api.buildkite.com/v2/organizations/my-great-org", "web_url": "https://buildkite.com/my-great-org", "name": "My Great Org", "slug": "my-great-org", "pipelines_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines", "agents_url": "https://api.buildkite.com/v2/organizations/my-great-org/agents", "emojis_url": "https://api.buildkite.com/v2/organizations/my-great-org/emojis", "created_at": "2015-05-09T21:05:59.874Z" } ] ``` Required scope: `read_organizations` Success response: `200 OK` ##### Get an organization ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}" ``` ```json { "id": "bb3125de-4dc9-44cf-ad18-65d2b71a5a34", "graphql_id": 
"T3JnYW5pemF0aW9uLS0tOGEzMjAwOTMtMjE4OC00MmNiLWI5ZGQtNzE4NjZjZTYyYjA4", "url": "https://api.buildkite.com/v2/organizations/my-great-org", "web_url": "https://buildkite.com/my-great-org", "name": "My Great Org", "slug": "my-great-org", "pipelines_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines", "agents_url": "https://api.buildkite.com/v2/organizations/my-great-org/agents", "emojis_url": "https://api.buildkite.com/v2/organizations/my-great-org/emojis", "created_at": "2015-05-09T21:05:59.874Z" } ``` Required scope: `read_organizations` Success response: `200 OK` --- ### Members URL: https://buildkite.com/docs/apis/rest-api/organizations/members #### Organization members API The organization members API endpoint allows users to view all members of a Buildkite organization. ##### Organization member data model | `id` | UUID of the user | `name` | Name of the user | `email` | Email of the user ##### List organization members Returns a list of an organization's members. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/members" ``` ```json [ { "id": "0185c636-fcbf-4a6c-b49d-c4048e7b8aea", "name": "Scout Finch", "email": "scout@example.com" }, { "id": "0185dbbf-8447-4f72-ac7e-4ea3c2ec8381", "name": "Huck Finn", "email": "huck@example.com" } ] ``` Required scope: `read_organizations` Success response: `200 OK` ##### Get an organization member ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/members/{user.uuid}" ``` ```json { "id": "0185dbbf-8447-4f72-ac7e-4ea3c2ec8381", "name": "Victor Frankenstein", "email": "vic@example.com" } ``` Required scope: `read_organizations` Success response: `200 OK` --- ### Rate limits URL: https://buildkite.com/docs/apis/rest-api/organizations/rate-limits #### Organization rate limits API The organization rate limits API endpoint allows you to obtain a Buildkite organization's current [REST API](/docs/apis/rest-api) and [GraphQL API](/docs/apis/graphql-api) rate limit status. ##### Get rate limits Returns the current [REST API](/docs/apis/rest-api) and [GraphQL API](/docs/apis/graphql-api) rate limits for a Buildkite organization. ```bash curl -H "Authorization: Bearer $TOKEN" \ https://api.buildkite.com/v2/organizations/{organization.slug}/rate_limit ``` ```json { "scopes": { "rest": { "limit": 200, "current": 5, "reset": 35, "reset_at": "2025-12-02T05:30:00Z", "enforced": true }, "graphql": { "limit": 50000, "current": 1000, "reset": 263, "reset_at": "2025-12-02T05:34:52Z", "enforced": true } } } ``` Required scope: `read_accounts` Success response: `200 OK` ##### Response fields The response contains two JSON objects (or scopes)—`rest` for [REST API](#response-fields-rest-api) limits, and `graphql` for [GraphQL API](#response-fields-graphql-api) limits. ###### REST API The `rest` scope provides current REST API rate limits for the Buildkite organization. 
Field | Type | Description ----- | ---- | ----------- `limit` | integer | Maximum requests allowed in a time window. `current` | integer | Number of requests made in the current time window. `reset` | integer | Seconds remaining until the current time window resets to zero. `reset_at` | string | ISO 8601 timestamp when the current time window resets to zero. `enforced` | boolean | Indicates if rate limiting is currently enforced for this Buildkite organization. The REST API rate limit time window is 60 seconds. ###### GraphQL API The `graphql` scope provides current GraphQL API [complexity points](/docs/apis/graphql/graphql-resource-limits#rate-limits-organization-time-based-rate-limit) for the Buildkite organization. Field | Type | Description ----- | ---- | ----------- `limit` | integer | Maximum complexity points allowed in a time window. `current` | integer | Complexity points used in the current time window. `reset` | integer | Seconds remaining until the current time window resets to zero. `reset_at` | string | ISO 8601 timestamp when the current time window resets to zero. `enforced` | boolean | Indicates if rate limiting is currently enforced for this Buildkite organization. The GraphQL API rate limit time window is 300 seconds (five minutes). ##### Per-user rate limits In addition to the organization-level limits above, Buildkite enforces a [per-user rate limit](/docs/apis/rest-api/rate-limits#per-user-rate-limits) on API requests. This limit prevents a single user from consuming the entire organization's API quota. The default per-user limit is 50 requests per minute for the REST API and 5,000 complexity points per 5 minutes for the GraphQL API. Per-user rate limit information is not available through this endpoint. Instead, inspect the response headers on any REST or GraphQL API call: - `RateLimit-User-Scope`: The scope of the per-user rate limit (`rest_user` or `graphql_user`). 
- `RateLimit-User-Remaining`: The remaining requests or complexity points for the authenticated user. - `RateLimit-User-Limit`: The per-user rate limit. - `RateLimit-User-Reset`: The number of seconds remaining until the per-user time window resets. These headers are returned alongside the organization-level `RateLimit-*` headers on every API response. Learn more in [REST API rate limits](/docs/apis/rest-api/rate-limits) and [GraphQL resource limits](/docs/apis/graphql/graphql-resource-limits#rate-limits-per-user-rate-limit). --- ### Registries URL: https://buildkite.com/docs/apis/rest-api/package-registries/registries #### Registries API The registries API endpoint lets you [create and manage registries](/docs/package-registries/registries/manage) in your organization. ##### Create a registry ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries" \ -H "Content-Type: application/json" \ -d '{ "name": "my registry", "ecosystem": "ruby", "description": "registry containing ruby gems", "team_ids": [ "team-one-uuid", "team-two-uuid" ], "oidc_policy": [ { "iss": "https://agent.buildkite.com", "claims": { "organization_slug": "my-org", "pipeline_slug": { "in": ["my-pipeline", "my-other-pipeline"] } } } ] }' ``` ```json { "id": "0191df84-85e4-77aa-83ba-6579084728eb", "graphql_id": "UmVnaXN0cnktLS0wMTkxZGY4NC04NWU0LTc3YWEtODNiYS02NTc5MDg0NzI4ZWI=", "slug": "my-registry", "url": "https://api.buildkite.com/v2/packages/organizations/my-org/registries/my-registry", "web_url": "https://buildkite.com/organizations/my-org/registries/my-registry", "name": "my registry", "ecosystem": "ruby", "description": "registry containing ruby gems", "emoji": null, "color": null, "public": false, "oidc_policy": null } ``` Required [request body properties](/docs/api#request-body-properties): | `name` | Name of the new registry. _Example:_ `"my registry"`. 
| `ecosystem` | Registry ecosystem based on the [package ecosystem](/docs/package-registries#get-started) for the new registry. _Example:_ `"ruby"`. | `team_ids` | The IDs of one or more teams who will be granted access to this registry. Required only when the [teams feature](/docs/platform/team-management/permissions) has been enabled. _Example:_ `[ "2a1ac413-a56c-4aaa-aede-dc49bb46d00f", "5916fed1-1fbb-4a8d-9791-5b3fa0b5d269" ]`. Optional [request body properties](/docs/api#request-body-properties): | `description` | Description of the registry. _Default value:_ `null`. | `emoji` | Emoji for the registry using the [emoji syntax](/docs/apis/rest-api/emojis). _Example:_ `"\:sunflower\:"`. | `color` | Color hex code for the registry. _Example:_ `"#f0ccff"`. | `oidc_policy` | A policy matching a [basic](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry-basic-oidc-policy-format) or [more complex](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry-complex-oidc-policy-example) OIDC policy format. Can be either stringified YAML, or a JSON array of policy statements. _Default value:_ `null`. Required scope: `write_registries` Success response: `200 OK` ##### List all registries Returns a list of an organization's registries. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries" ``` ```json [ { "id": "0191df84-85e4-77aa-83ba-6579084728eb", "graphql_id": "UmVnaXN0cnktLS0wMTkxZGY4NC04NWU0LTc3YWEtODNiYS02NTc5MDg0NzI4ZWI=", "slug": "my-registry", "url": "https://api.buildkite.com/v2/packages/organizations/my-org/registries/my-registry", "web_url": "https://buildkite.com/organizations/my-org/packages/registries/my-registry", "name": "my registry", "ecosystem": "ruby", "description": "registry containing ruby gems", "emoji": null, "color": null, "public": false, "oidc_policy": null } ] ``` Required scope: `read_registries` Success response: `200 OK` ##### Get a registry Returns the details for a single registry, looked up by its slug. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}" ``` ```json { "id": "0191df84-85e4-77aa-83ba-6579084728eb", "graphql_id": "UmVnaXN0cnktLS0wMTkxZGY4NC04NWU0LTc3YWEtODNiYS02NTc5MDg0NzI4ZWI=", "slug": "my-registry", "url": "https://api.buildkite.com/v2/packages/organizations/my-org/registries/my-registry", "web_url": "https://buildkite.com/organizations/my-org/registries/my-registry", "name": "my registry", "ecosystem": "ruby", "description": "registry containing ruby gems", "emoji": null, "color": null, "public": false, "oidc_policy": null } ``` Required scope: `read_registries` Success response: `200 OK` ##### Update a registry ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}" \ -H "Content-Type: application/json" \ -d '{ "name": "my registry", "description": "registry containing ruby gems", "oidc_policy": [ { "iss": "https://agent.buildkite.com", "claims": { "organization_slug": "my-org", "pipeline_slug": { "in": ["my-pipeline"] } } } ] }' ``` ```json { "id": 
"0191df84-85e4-77aa-83ba-6579084728eb", "graphql_id": "UmVnaXN0cnktLS0wMTkxZGY4NC04NWU0LTc3YWEtODNiYS02NTc5MDg0NzI4ZWI=", "slug": "my-registry", "url": "https://api.buildkite.com/v2/packages/organizations/my-org/registries/my-registry", "web_url": "https://buildkite.com/organizations/my-org/registries/my-registry", "name": "my registry", "ecosystem": "ruby", "description": "registry containing ruby gems", "emoji": null, "color": null, "public": false, "oidc_policy": null } ``` Optional [request body properties](/docs/api#request-body-properties): | `name` | Name of the registry. _Example:_ `my registry`. | `description` | Description of the registry. _Example:_ `registry containing ruby gems`. | `oidc_policy` | A policy matching a [basic](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry-basic-oidc-policy-format) or [more complex](/docs/package-registries/security/oidc#define-an-oidc-policy-for-a-registry-complex-oidc-policy-example) OIDC policy format. Can be either stringified YAML, or a JSON array of policy statements. Be aware that if you are modifying an existing OIDC policy, the entire revised OIDC policy needs to be re-posted in this update request. _Default value:_ `null`. Required scope: `write_registries` Success response: `200 OK` ##### Delete a registry ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}" ``` Required scope: `delete_registries` Success response: `200 OK` --- ### Registry tokens URL: https://buildkite.com/docs/apis/rest-api/package-registries/registry-tokens #### Registry tokens API The registry tokens API endpoint lets you create and manage credentials needed to install and use packages in a registry. 
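The registry token endpoints below all share a single URL shape. As a sketch, here is a small Python helper that builds those URLs (the helper name is hypothetical; the path layout follows the `curl` examples below):

```python
BASE = "https://api.buildkite.com/v2/packages"

def registry_token_url(org_slug, registry_slug, token_id=None):
    """Build the registry tokens endpoint URL.

    Without token_id: the collection URL (create, list).
    With token_id: the member URL (get, update, revoke).
    """
    url = f"{BASE}/organizations/{org_slug}/registries/{registry_slug}/tokens"
    if token_id:
        url += f"/{token_id}"
    return url
```

For example, `registry_token_url("my_great_org", "my-registry")` yields the collection URL used to create or list tokens, while passing a token's `id` as the third argument yields the URL for the get, update, and revoke calls.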
##### Create a registry token ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/tokens" \ -H "Content-Type: application/json" \ -d '{ "description": "Usher" }' ``` ```json { "id": "0191b6a2-aa51-70d0-8a5f-aabce115b0fd", "graphql_id": "UmVnaXN0cnlUb2tlbi0tLTAxOTFiNmEyLWFhNTEtNzBkMC04YTVmLWFhYmNlMTE1YjBmZA==", "description": "Usher", "url": "http://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry/tokens/0191b6a2-aa51-70d0-8a5f-aabce115b0fd", "created_at": "2024-09-03T06:46:39.441Z", "created_by": { "id": "0191b13b-0eb6-470d-a4c0-2085974f3580", "graphql_id": "VXNlci0tLTAxOTFiMTNiLTBlYjYtNDcwZC1hNGMwLTIwODU5NzRmMzU4MA==", "name": "Eminem", "email": "eminem@buildkite.com", "avatar_url": null, "created_at": "2024-09-02T05:35:23.318Z" }, "organization": { "id": "018a456f-e581-44b6-c5a4-1d8a5f7094ee", "slug": "my_great_org", "url": "https://api.buildkite.com/v2/analytics/organizations/my_great_org", "web_url": "https://buildkite.com/organizations/my_great_org" }, "registry": { "id": "018f56ef-9ef4-70f0-aba2-0f4578e3d69d", "graphql_id": "UmVnaXN0cnktLS0wMThmNTZlZi05ZWY0LTcwZjAtYWJhMi0wZjQ1NzhlM2Q2OWQ=", "slug": "my-registry", "url": "http://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry", "web_url": "http://buildkite.com/organizations/buildkite/my_great_org/registries/my-registry" } } ``` Required [request body properties](/docs/api#request-body-properties): | `description` | Description of the token. _Example:_ `"Usher"`. Required scope: `write_registries` Success response: `200 OK` ##### List all registry tokens Returns a list of a registry's tokens. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/tokens" \ -H "Content-Type: application/json" ``` ```json [ { "id": "0191b6a2-aa51-70d0-8a5f-aabce115b0fd", "graphql_id": "UmVnaXN0cnlUb2tlbi0tLTAxOTFiNmEyLWFhNTEtNzBkMC04YTVmLWFhYmNlMTE1YjBmZA==", "description": "Usher", "url": "http://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry/tokens/0191b6a2-aa51-70d0-8a5f-aabce115b0fd", "created_at": "2024-09-03T06:46:39.441Z", "created_by": { "id": "0191b13b-0eb6-470d-a4c0-2085974f3580", "graphql_id": "VXNlci0tLTAxOTFiMTNiLTBlYjYtNDcwZC1hNGMwLTIwODU5NzRmMzU4MA==", "name": "Eminem", "email": "eminem@buildkite.com", "avatar_url": null, "created_at": "2024-09-02T05:35:23.318Z" }, "organization": { "id": "018a456f-e581-44b6-c5a4-1d8a5f7094ee", "slug": "my_great_org", "url": "https://api.buildkite.com/v2/analytics/organizations/my_great_org", "web_url": "https://buildkite.com/organizations/my_great_org" }, "registry": { "id": "018f56ef-9ef4-70f0-aba2-0f4578e3d69d", "graphql_id": "UmVnaXN0cnktLS0wMThmNTZlZi05ZWY0LTcwZjAtYWJhMi0wZjQ1NzhlM2Q2OWQ=", "slug": "my-registry", "url": "http://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry", "web_url": "http://buildkite.com/organizations/buildkite/my_great_org/registries/my-registry" } } ] ``` Required scope: `read_registries` Success response: `200 OK` ##### Get a registry token Returns the details for a single registry token. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/tokens/{id}" ``` ```json { "id": "0191b6a2-aa51-70d0-8a5f-aabce115b0fd", "graphql_id": "UmVnaXN0cnlUb2tlbi0tLTAxOTFiNmEyLWFhNTEtNzBkMC04YTVmLWFhYmNlMTE1YjBmZA==", "description": "Usher", "url": "http://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry/tokens/0191b6a2-aa51-70d0-8a5f-aabce115b0fd", "created_at": "2024-09-03T06:46:39.441Z", "created_by": { "id": "0191b13b-0eb6-470d-a4c0-2085974f3580", "graphql_id": "VXNlci0tLTAxOTFiMTNiLTBlYjYtNDcwZC1hNGMwLTIwODU5NzRmMzU4MA==", "name": "Eminem", "email": "eminem@buildkite.com", "avatar_url": null, "created_at": "2024-09-02T05:35:23.318Z" }, "organization": { "id": "018a456f-e581-44b6-c5a4-1d8a5f7094ee", "slug": "my_great_org", "url": "https://api.buildkite.com/v2/analytics/organizations/my_great_org", "web_url": "https://buildkite.com/organizations/my_great_org" }, "registry": { "id": "018f56ef-9ef4-70f0-aba2-0f4578e3d69d", "graphql_id": "UmVnaXN0cnktLS0wMThmNTZlZi05ZWY0LTcwZjAtYWJhMi0wZjQ1NzhlM2Q2OWQ=", "slug": "my-registry", "url": "http://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry", "web_url": "http://buildkite.com/organizations/buildkite/my_great_org/registries/my-registry" } } ``` Required scope: `read_registries` Success response: `200 OK` ##### Update a registry token ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/tokens/{id}" \ -H "Content-Type: application/json" \ -d '{ "description": "Usher" }' ``` ```json { "id": "0191b6a2-aa51-70d0-8a5f-aabce115b0fd", "graphql_id": "UmVnaXN0cnlUb2tlbi0tLTAxOTFiNmEyLWFhNTEtNzBkMC04YTVmLWFhYmNlMTE1YjBmZA==", "description": "Usher", "url": 
"http://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry/tokens/0191b6a2-aa51-70d0-8a5f-aabce115b0fd", "created_at": "2024-09-03T06:46:39.441Z", "created_by": { "id": "0191b13b-0eb6-470d-a4c0-2085974f3580", "graphql_id": "VXNlci0tLTAxOTFiMTNiLTBlYjYtNDcwZC1hNGMwLTIwODU5NzRmMzU4MA==", "name": "Eminem", "email": "eminem@buildkite.com", "avatar_url": null, "created_at": "2024-09-02T05:35:23.318Z" }, "organization": { "id": "018a456f-e581-44b6-c5a4-1d8a5f7094ee", "slug": "my_great_org", "url": "https://api.buildkite.com/v2/analytics/organizations/my_great_org", "web_url": "https://buildkite.com/organizations/my_great_org" }, "registry": { "id": "018f56ef-9ef4-70f0-aba2-0f4578e3d69d", "graphql_id": "UmVnaXN0cnktLS0wMThmNTZlZi05ZWY0LTcwZjAtYWJhMi0wZjQ1NzhlM2Q2OWQ=", "slug": "my-registry", "url": "http://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry", "web_url": "http://buildkite.com/organizations/buildkite/my_great_org/registries/my-registry" } } ``` Required [request body properties](/docs/api#request-body-properties): | `description` | Description of the token _Example:_ `"Usher"`. Required scope: `write_registries` Success response: `200 OK` ##### Delete a registry token ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/tokens/{id}" ``` Required scope: `write_registries` Success response: `200 OK` --- ### Packages URL: https://buildkite.com/docs/apis/rest-api/package-registries/packages #### Packages API The packages API endpoint lets you create and manage packages in a registry. ##### Publish a package The following type of `curl` syntax for publishing to registries will work across [all package ecosystems supported by Buildkite Package Registries](/docs/package-registries/ecosystems), with the `file` form-field modified accordingly. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/packages" \ -F 'file=@path/to/debian/package/banana_1.1-2_amd64.deb' ``` However, this type of REST API call is only recommended for: - [Alpine (apk)](/docs/package-registries/ecosystems/alpine#publish-a-package) packages - [Debian/Ubuntu (deb)](/docs/package-registries/ecosystems/debian#publish-a-package) packages - [Files (generic)](/docs/package-registries/ecosystems/files#publish-a-file) - [Helm (Standard)](/docs/package-registries/ecosystems/helm#publish-a-chart) charts - [Python (PyPI)](/docs/package-registries/ecosystems/python#publish-a-package) packages - [Red Hat (RPM)](/docs/package-registries/ecosystems/red-hat#publish-a-package) packages - [Terraform](/docs/package-registries/ecosystems/terraform#publish-a-module) modules For other supported package ecosystems, it is recommended that you use their native tools to publish to registries in your Buildkite Package Registries organization. These ecosystems are: - [OCI (Docker)](/docs/package-registries/ecosystems/oci#publish-an-image) images - [Helm (OCI)](/docs/package-registries/ecosystems/helm-oci#publish-a-chart) charts - Java ([Maven](/docs/package-registries/ecosystems/maven#publish-a-package) or [Gradle](/docs/package-registries/ecosystems/gradle-kotlin#publish-a-package)) packages - [JavaScript (npm)](/docs/package-registries/ecosystems/javascript#publish-a-package) packages - [Ruby (RubyGems)](/docs/package-registries/ecosystems/ruby#publish-a-package) packages The following type of response is returned by Buildkite upon a successful `curl` publishing event.
```json { "id": "0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "url": "https://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry/packages/0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "web_url": "https://buildkite.com/organizations/my_great_org/packages/registries/my-registry/packages/0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "name": "banana", "digests": { "sha256": "d3e1515a82ece5ad1f63273aa259d91c88967f65d9ae75f880b5c93926586fdf", "sha512": "5bd1481bfd924b1e272bcc7736c91e8490947cc4aa7d756daacfa1aa3705e7180ca2b7800af3ebd4f7ed4b27bcec2da580545cf351499d195e5d4e00e080c87e" }, "organization": { "id": "0190e784-eeb7-4ce4-9d2d-87f7aba85433", "slug": "my_great_org", "url": "https://api.buildkite.com/v2/organizations/my_great_org", "web_url": "https://buildkite.com/my_great_org" }, "registry": { "id": "0191e238-e0a3-7b0b-bb34-beea0035a39d", "graphql_id": "UmVnaXN0cnktLS0wMTkxZTIzOC1lMGEzLTdiMGItYmIzNC1iZWVhMDAzNWEzOWQ=", "slug": "my-registry", "url": "https://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry", "web_url": "https://buildkite.com/organizations/my_great_org/packages/registries/my-registry" } } ``` Required request form-field content: | `file` | Path to the package. _Example:_ `"file=@path/to/debian/package/banana_1.1-2_amd64.deb"`. Required scope: `write_packages` Success response: `200 OK` ##### List all packages Returns a [paginated list](/docs/apis/rest-api#pagination) of all packages in a registry. Packages are listed in the order they were created (newest first). 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/packages" ``` ```json { "items": [ { "id": "0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "url": "https://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry/packages/0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "web_url": "https://buildkite.com/organizations/my_great_org/packages/registries/my-registry/packages/0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "name": "banana", "created_at": "2024-08-22T06:24:53Z", "version": "1.0", "digests": { "sha256": "d3e1515a82ece5ad1f63273aa259d91c88967f65d9ae75f880b5c93926586fdf", "sha512": "5bd1481bfd924b1e272bcc7736c91e8490947cc4aa7d756daacfa1aa3705e7180ca2b7800af3ebd4f7ed4b27bcec2da580545cf351499d195e5d4e00e080c87e" } }, { "id": "019178c2-6b08-7d66-a1db-b79b8ba83151", "url": "https://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry/packages/019178c2-6b08-7d66-a1db-b79b8ba83151", "web_url": "https://buildkite.com/organizations/my_great_org/packages/registries/my-registry/packages/019178c2-6b08-7d66-a1db-b79b8ba83151", "name": "grapes", "created_at": "2024-08-21T06:24:53Z", "version": "2.8.3", "digests": { "sha256": "d3e1515a82ece5ad1f63273aa259d91c88967f65d9ae75f880b5c93926586fdf", "sha512": "5bd1481bfd924b1e272bcc7736c91e8490947cc4aa7d756daacfa1aa3705e7180ca2b7800af3ebd4f7ed4b27bcec2da580545cf351499d195e5d4e00e080c87e" } } ], "links": { "self": "https://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry/packages" } } ``` Optional [query string parameters](/docs/api#query-string-parameters): | `name` | Filters the results by the package name. _Example:_ `?name=banana`. Required scope: `read_packages` Success response: `200 OK` ##### Get a package Returns the details for a single package.
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/packages/{id}" ``` ```json { "id": "0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "url": "https://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry/packages/0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "web_url": "https://buildkite.com/organizations/my_great_org/packages/registries/my-registry/packages/0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "name": "banana", "digests": { "sha256": "d3e1515a82ece5ad1f63273aa259d91c88967f65d9ae75f880b5c93926586fdf", "sha512": "5bd1481bfd924b1e272bcc7736c91e8490947cc4aa7d756daacfa1aa3705e7180ca2b7800af3ebd4f7ed4b27bcec2da580545cf351499d195e5d4e00e080c87e" }, "organization": { "id": "0190e784-eeb7-4ce4-9d2d-87f7aba85433", "slug": "my_great_org", "url": "https://api.buildkite.com/v2/organizations/my_great_org", "web_url": "https://buildkite.com/my_great_org" }, "registry": { "id": "0191e238-e0a3-7b0b-bb34-beea0035a39d", "graphql_id": "UmVnaXN0cnktLS0wMTkxZTIzOC1lMGEzLTdiMGItYmIzNC1iZWVhMDAzNWEzOWQ=", "slug": "my-registry", "url": "https://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry", "web_url": "https://buildkite.com/organizations/my_great_org/packages/registries/my-registry" } } ``` Required scope: `read_packages` Success response: `200 OK` ##### Copy a package For some supported [package ecosystems](/docs/package-registries/ecosystems), copies a package from a source registry to a destination registry. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{source_registry.slug}/packages/{package.id}/copy?to={destination_registry.slug}" -H "Content-Type: application/json" ``` Currently, this REST API call only supports package types belonging to the following package ecosystems: - [Alpine (apk)](/docs/package-registries/ecosystems/alpine) - [Debian/Ubuntu (deb)](/docs/package-registries/ecosystems/debian) - [Files (generic)](/docs/package-registries/ecosystems/files) - [JavaScript (npm)](/docs/package-registries/ecosystems/javascript) - [Python (PyPI)](/docs/package-registries/ecosystems/python) - [Red Hat (RPM)](/docs/package-registries/ecosystems/red-hat) - [Ruby (RubyGems)](/docs/package-registries/ecosystems/ruby) If you wish this feature to be available for package types belonging to other package ecosystems, please contact [support](https://buildkite.com/about/contact/). The following type of response is returned by Buildkite upon a successful `curl` copying event. 
```json { "id": "0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "url": "https://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry/packages/0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "web_url": "https://buildkite.com/organizations/my_great_org/packages/registries/my-registry/packages/0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "name": "banana", "digests": { "sha256": "d3e1515a82ece5ad1f63273aa259d91c88967f65d9ae75f880b5c93926586fdf", "sha512": "5bd1481bfd924b1e272bcc7736c91e8490947cc4aa7d756daacfa1aa3705e7180ca2b7800af3ebd4f7ed4b27bcec2da580545cf351499d195e5d4e00e080c87e" }, "organization": { "id": "0190e784-eeb7-4ce4-9d2d-87f7aba85433", "slug": "my_great_org", "url": "https://api.buildkite.com/v2/organizations/my_great_org", "web_url": "https://buildkite.com/my_great_org" }, "registry": { "id": "0191e238-e0a3-7b0b-bb34-beea0035a39d", "graphql_id": "UmVnaXN0cnktLS0wMTkxZTIzOC1lMGEzLTdiMGItYmIzNC1iZWVhMDAzNWEzOWQ=", "slug": "my-registry", "url": "https://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry", "web_url": "https://buildkite.com/organizations/my_great_org/packages/registries/my-registry" } } ``` Required [query string parameters](/docs/api#query-string-parameters): | `to` | Destination registry slug. _Example:_ `"to=my-registry"`. Required scopes: `read_packages, write_packages` Success response: `200 OK` ##### Delete a package ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/packages/organizations/{org.slug}/registries/{registry.slug}/packages/{id}" ``` Required scope: `delete_packages` Success response: `200 OK` --- ### Overview URL: https://buildkite.com/docs/apis/rest-api/pipelines #### Pipelines API The pipelines API consists of several endpoints that allow you to manage: - Pipelines, along with their [builds](/docs/apis/rest-api/builds) and [schedules](/docs/apis/rest-api/pipeline-schedules).
- A build's [annotations](/docs/apis/rest-api/annotations), [artifacts](/docs/apis/rest-api/artifacts), and [jobs](/docs/apis/rest-api/jobs). This section of the REST API documentation also contains several other endpoints that allow you to manage other aspects of Buildkite functionality associated with your pipelines, such as: - [Clusters](/docs/apis/rest-api/clusters), including the management of [queues](/docs/apis/rest-api/clusters/queues), [agent tokens](/docs/apis/rest-api/clusters/agent-tokens), [cluster maintainers](/docs/apis/rest-api/clusters/maintainers), and [Buildkite secrets](/docs/apis/rest-api/clusters/secrets). - [Agents](/docs/apis/rest-api/agents) themselves. ##### Pipeline data model | `id` | UUID of the pipeline | `graphql_id` | [GraphQL ID](/docs/apis/graphql-api#graphql-ids) of the pipeline | `url` | Canonical API URL of the pipeline | `web_url` | URL of the pipeline on Buildkite | `name` | Name of the pipeline | `description` | Description of the pipeline | `slug` | URL slug of the pipeline | `repository` | URL of the source code repository | `cluster_id` | UUID of the cluster the pipeline belongs to (if using clusters) | `branch_configuration` | Branch filter pattern for limiting which branches trigger builds | `default_branch` | Default branch for the pipeline | `skip_queued_branch_builds` | Whether to skip intermediate builds on the same branch (`true`, `false`) | `skip_queued_branch_builds_filter` | Branch filter pattern for skip queued builds behavior | `cancel_running_branch_builds` | Whether to cancel running builds when a new build is created on the same branch (`true`, `false`) | `cancel_running_branch_builds_filter` | Branch filter pattern for cancel running builds behavior | `allow_rebuilds` | Whether rebuilds are allowed (`true`, `false`) | `provider` | Source control provider settings (includes `id`, `webhook_url`, and `settings`) | `builds_url` | API URL for the pipeline's builds | `badge_url` | URL for the pipeline's build 
status badge | `created_by` | User who created the pipeline | `created_at` | When the pipeline was created | `archived_at` | When the pipeline was archived (if archived) | `env` | Environment variables configured for the pipeline | `scheduled_builds_count` | Number of currently scheduled builds | `running_builds_count` | Number of currently running builds | `scheduled_jobs_count` | Number of currently scheduled jobs | `running_jobs_count` | Number of currently running jobs | `waiting_jobs_count` | Number of jobs waiting for agents | `visibility` | Visibility of the pipeline: `private` or `public` | `steps` | Array of step configurations (for non-YAML pipelines) | `configuration` | YAML pipeline configuration (for YAML pipelines) ##### List pipelines Returns a [paginated list](/docs/apis/rest-api#pagination) of an organization's pipelines. > 📘 > Filtering pipelines by creation date sorts results from newest to oldest. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines" ``` ```json [ { "id": "849411f9-9e6d-4739-a0d8-e247088e9b52", "graphql_id": "UGlwZWxpbmUtLS1lOTM4ZGQxYy03MDgwLTQ4ZmQtOGQyMC0yNmQ4M2E0ZjNkNDg=", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline", "web_url": "https://buildkite.com/acme-inc/my-pipeline", "name": "My Pipeline", "slug": "my-pipeline", "repository": "git@github.com:acme-inc/my-pipeline.git", "branch_configuration": null, "default_branch": "main", "provider": { "id": "github", "webhook_url": "https://webhook.buildkite.com/deliver/xxx", "settings": { "publish_commit_status": true, "build_pull_requests": true, "build_pull_request_forks": false, "build_tags": false, "publish_commit_status_per_step": false, "repository": "acme-inc/my-pipeline", "trigger_mode": "code" } }, "skip_queued_branch_builds": false, "skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "builds_url": 
"https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline/builds", "badge_url": "https://badge.buildkite.com/58b3da999635d0ad2daae5f784e56d264343eb02526f129bfb.svg", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2013-08-29T10:10:03.000Z" }, "created_at": "2013-09-03 13:24:38 UTC", "archived_at": null, "scheduled_builds_count": 0, "running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0, "visibility": "private", "steps": [ { "type": "script", "name": "Test :white_check_mark:", "command": "script/test.sh", "artifact_paths": "results/*", "branch_configuration": "main feature/*", "env": { }, "timeout_in_minutes": null, "agent_query_rules": [ ] } ], "env": { } } ] ``` Optional [query string parameters](/docs/api#query-string-parameters): | `name` | Filters the results by the pipeline name. Supports partial matches and is case insensitive._Example:_ `?name=agent` | `repository` | Filters the results by the repository URL of the source repository. Supports partial matches and is case insensitive._Example:_ `?repository=agent` > 📘 Webhook URL > The response only includes a webhook URL in `provider.webhook_url` if the user has edit permissions for the pipeline. Otherwise, the field returns with an empty string. 
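The `name` and `repository` filters above are ordinary query string parameters, so their values need URL encoding when they contain special characters. A small Python sketch (placeholder org slug and token; the request is built but not sent) showing how a filtered list-pipelines URL might be assembled:

```python
import urllib.request
from urllib.parse import urlencode

def list_pipelines_request(org, api_token, **filters):
    """Build (but don't send) the GET request for listing an org's pipelines.

    Keyword arguments such as name="agent" or repository="agent" become
    URL-encoded query string filters."""
    url = f"https://api.buildkite.com/v2/organizations/{org}/pipelines"
    if filters:
        url += "?" + urlencode(filters)
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_token}"}
    )

# Placeholder values; partial, case-insensitive matching happens server-side.
req = list_pipelines_request("acme-inc", "xxx", name="agent")
```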
Required scope: `read_pipelines` Success response: `200 OK` ##### Get a pipeline ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{slug}" ``` ```json { "id": "849411f9-9e6d-4739-a0d8-e247088e9b52", "graphql_id": "UGlwZWxpbmUtLS1lOTM4ZGQxYy03MDgwLTQ4ZmQtOGQyMC0yNmQ4M2E0ZjNkNDg=", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline", "web_url": "https://buildkite.com/acme-inc/my-pipeline", "name": "My Pipeline", "description": "This pipeline is amazing! :tada:", "slug": "my-pipeline", "repository": "git@github.com:acme-inc/my-pipeline.git", "branch_configuration": null, "default_branch": "main", "provider": { "id": "github", "webhook_url": "https://webhook.buildkite.com/deliver/xxx", "settings": { "publish_commit_status": true, "build_pull_requests": true, "build_pull_request_forks": false, "build_tags": false, "publish_commit_status_per_step": false, "repository": "acme-inc/my-pipeline", "trigger_mode": "code" } }, "skip_queued_branch_builds": false, "skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "builds_url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline/builds", "badge_url": "https://badge.buildkite.com/58b3da999635d0ad2daae5f784e56d264343eb02526f129bfb.svg", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2013-08-29T10:10:03.000Z" }, "created_at": "2013-09-03 13:24:38 UTC", "archived_at": null, "scheduled_builds_count": 0, "running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0, "visibility": "private", "steps": [ { "type": "script", "name": "Test :white_check_mark:",
"command": "script/test.sh", "artifact_paths": "results/*", "branch_configuration": "main feature/*", "env": { }, "timeout_in_minutes": null, "agent_query_rules": [ ] } ], "env": { } } ``` > 📘 Webhook URL > The response only includes a webhook URL in `pipeline.provider.webhook_url` if the user has edit permissions for the pipeline. Otherwise, the field returns with an empty string. Required scope: `read_pipelines` Success response: `200 OK` ##### Create a YAML pipeline YAML pipelines are the recommended way to [manage your pipelines](/docs/pipelines/tutorials/pipeline-upgrade). To create a YAML pipeline using this endpoint, set the `configuration` key in your JSON request body to the YAML you want in your pipeline. For example, to create a pipeline called `"My Pipeline"` containing the following command step ```yaml steps: - command: "script/release.sh" name: "Build \:package\:" ``` make the following POST request, substituting your organization slug for `{org.slug}`. Make sure to escape the quotes (`"`) in your YAML, and to replace line breaks with `\n`: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines" \ -H "Content-Type: application/json" \ -d '{ "name": "My Pipeline X", "cluster_id": "xxx", "repository": "git@github.com:acme-inc/my-pipeline.git", "configuration": "env:\n \"FOO\": \"bar\"\nsteps:\n - command: \"script/release.sh\"\n \"name\": \"Build :package:\"" }' ``` > 📘 > When setting pipeline configuration using the API, you must pass in a string that Buildkite parses as valid YAML, escaping quotes and line breaks. > To avoid writing an entire YAML file in a single string, you can place a `pipeline.yml` file in a `.buildkite` directory at the root of your repo, and use the `pipeline upload` command in your configuration to tell Buildkite where to find it.
This means you only need the following: > `"configuration": "steps:\n - command: \"buildkite-agent pipeline upload\""` The response contains information about your new pipeline: ```json { "id": "ad93b461-96ab-4a1e-9281-260ead506a0e", "graphql_id": "UGlwZWxpbmUtLS1hZDkzYjQ2MS05NmFiLTRhMWUtOTI4MS0yNjBlYWQ1MDZhMGU=", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline-x", "web_url": "https://buildkite.com/acme-inc/my-pipeline-x", "name": "My Pipeline X", "description": null, "slug": "my-pipeline-x", "repository": "git@github.com:acme-inc/my-pipeline.git", "cluster_id": null, "pipeline_template_uuid": null, "branch_configuration": null, "default_branch": "main", "skip_queued_branch_builds": false, "skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "allow_rebuilds": true, "provider": { "id": "github", "settings": { "trigger_mode": "code", "build_pull_requests": true, "pull_request_branch_filter_enabled": false, "skip_builds_for_existing_commits": false, "skip_pull_request_builds_for_existing_commits": true, "build_pull_request_ready_for_review": false, "build_pull_request_labels_changed": false, "build_pull_request_forks": false, "prefix_pull_request_fork_branch_names": true, "build_branches": true, "build_tags": false, "cancel_deleted_branch_builds": false, "publish_commit_status": true, "publish_commit_status_per_step": false, "separate_pull_request_statuses": false, "publish_blocked_as_pending": false, "use_step_key_as_commit_status": false, "filter_enabled": false, "repository": "acme-inc/my-pipeline" }, "webhook_url": "https://webhook.buildkite.com/deliver/fe08e0f823297a158fc4ca2bfddd6ea3ced92b5167a658a0bb" }, "builds_url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline-x/builds", "badge_url": "https://badge.buildkite.com/05bf6d997d16c993ae6180ed7d85d29c9be8f8d8f37ac96477.svg", "created_by": { "id": 
"3cc415b8-3d63-4b9a-acb0-c120dbcb231c", "graphql_id": "VXNlci0tLTNjYzQxNWI4LTNkNjMtNGI5YS1hY2IwLWMxMjBkYmNiMjMxYw==", "name": "Sam Wright", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/3536621b97b6d9d39488202709317051", "created_at": "2020-02-14T16:57:23.153Z" }, "created_at": "2021-05-06T14:54:21.088Z", "archived_at": null, "env": { "FOO": "bar" }, "scheduled_builds_count": 0, "running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0, "visibility": "private", "tags": null, "configuration": "env:\n \"FOO\": \"bar\"\n\"steps\":\n - command: \"script/release.sh\"\n \"name\": \"Build :package:\"", "steps": [{ "type": "script", "name": "Build :package:", "command": "script/release.sh", "artifact_paths": null, "branch_configuration": null, "env": {}, "timeout_in_minutes": null, "agent_query_rules": [], "concurrency": null, "parallelism": null }] } ``` Required [request body properties](/docs/api#request-body-properties): | `name` | The name of the pipeline._Example:_ `"New Pipeline"` | `cluster_id` | The ID value of the cluster the pipeline will be associated with._Example:_ `"018e5a22-d14c-7085-bb28-db0f83f43a1c"` | `repository` | The repository URL._Example:_ `"git@github.com:acme-inc/my-pipeline.git"` | `configuration` | The YAML pipeline that consists of the build pipeline steps._Example:_ `"steps:\n - command: \"script/release.sh\"\n"` Optional [request body properties](/docs/api#request-body-properties): | `allow_rebuilds` | Enables rebuilding of existing builds. _Example:_ `false` _Default:_ `true` | `branch_configuration` | A [branch filter pattern](/docs/pipelines/configure/workflows/branch-configuration#pipeline-level-branch-filtering) to limit which pushed branches trigger builds on this pipeline. _Example:_ `"main feature/*"` _Default:_ `null` | `cancel_running_branch_builds` | Cancel intermediate builds. 
When a new build is created on a branch, any previous builds that are running on the same branch will be automatically canceled. _Example:_ `true` _Default:_ `false` | `cancel_running_branch_builds_filter` | A [branch filter pattern](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples) to limit which branches intermediate build canceling applies to. _Example:_ `"develop prs/*"` _Default:_ `null` | `color` | A color hex code to represent this pipeline. _Example:_ `"#FF5733"` | `default_branch` | The name of the branch to prefill when new builds are created or triggered in Buildkite. It is also used to filter the builds and metrics shown on the Pipelines page. _Example:_ `"main"` | `default_command_step_timeout` | The default timeout in minutes for all command steps in this pipeline. This can still be overridden in any command step. _Example:_ `30` | `description` | The pipeline description. _Example:_ `"\:package\: A testing pipeline"` | `emoji` | An emoji to represent this pipeline. _Example:_ `"\:rocket\:"` (will be rendered as "🚀") | `maximum_command_step_timeout` | The maximum timeout in minutes for all command steps in this pipeline. Any command step without a timeout or with a timeout greater than this value will be capped at this limit. _Example:_ `120` | `pipeline_template_uuid` | The UUID of the [pipeline template](/docs/apis/rest-api/pipeline-templates) the pipeline should run with. Set to `null` to remove the pipeline template from the pipeline._Example:_ `"018e5a22-d14c-7085-bb28-db0f83f43a1c"` | `provider_settings` | The source provider settings. See the [Provider Settings](#provider-settings-properties) section for accepted properties. _Example:_ `{ "publish_commit_status": true, "build_pull_request_forks": true }` | `skip_queued_branch_builds` | Skip intermediate builds. When a new build is created on a branch, any previous builds that haven't yet started on the same branch will be automatically marked as skipped.
_Example:_ `true` _Default:_ `false` | `skip_queued_branch_builds_filter` | A [branch filter pattern](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples) to limit which branches intermediate build skipping applies to. _Example:_ `"!main"` _Default:_ `null` | `slug` | A custom identifier for the pipeline. If provided, this slug will be used as the pipeline's URL path instead of automatically converting the pipeline name. If the value is `null`, the pipeline name will be used to generate the slug. _Example:_ `"my-custom-pipeline-slug"` | `tags` | An array of strings representing [tags](/docs/pipelines/configure/tags) to add to this pipeline. Emojis, using the `:emoji:` string syntax, are also supported. _Example:_`["\:terraform\:", "testing"]` | `teams` | An array of team UUIDs to add this pipeline to. Allows you to specify the access level for the pipeline in a team. The available access level options are: `read_only` `build_and_read` `manage_build_and_read` You can find your team's UUID either using the [GraphQL API](/docs/apis/graphql-api), or on the Settings page for a team. This property is only available if your organization has enabled Teams. Once your organization enables Teams, only administrators can create pipelines without providing team UUIDs. Replaces deprecated `team_uuids` parameter. _Example:_ | `visibility` | Whether the pipeline is visible to everyone, including users outside this organization. _Example:_ `"public"` _Default:_ `"private"` Required scope: `write_pipelines` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation Failed", "errors": [ ... ] }` ###### Deriving a pipeline slug from the pipeline's name Pipeline slugs are derived from the pipeline name you provide when the pipeline is created (unless you use the optional `slug` parameter to specify a custom slug). 
This derivation process involves converting all space characters (including consecutive ones) in the pipeline's name to single hyphen `-` characters, and all uppercase characters to their lowercase counterparts. Therefore, pipeline names of either `Hello there friend` or `Hello    There Friend` are converted to the slug `hello-there-friend`. The maximum permitted length for a pipeline slug is 100 characters. > 📘 > The derived pipeline slug must match the following regular expression: > `/\A[a-zA-Z0-9]+[a-zA-Z0-9\-]*\z/` Any attempt to create a new pipeline with a name that matches an existing pipeline's name results in an error. ##### Create a visual step pipeline YAML pipelines are the recommended way to [manage your pipelines](/docs/pipelines/tutorials/pipeline-upgrade), but if you're still using visual steps, you can add them by setting the `steps` key in your JSON request body to an array of steps: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines" \ -H "Content-Type: application/json" \ -d '{ "name": "My Pipeline", "cluster_id": "xxx", "repository": "git@github.com:acme-inc/my-pipeline.git", "steps": [ { "type": "script", "name": "Build \:package\:", "command": "script/release.sh" }, { "type": "waiter" }, { "type": "script", "name": "Test \:wrench\:", "command": "script/release.sh", "artifact_paths": "log/*" }, { "type": "manual", "label": "Deploy" }, { "type": "script", "name": "Release \:rocket\:", "command": "script/release.sh", "branch_configuration": "main", "env": { "AMAZON_S3_BUCKET_NAME": "my-pipeline-releases" }, "timeout_in_minutes": 10, "agent_query_rules": ["aws=true"] }, { "type": "trigger", "label": "Deploy \:ship\:", "trigger_project_slug": "deploy", "trigger_commit": "HEAD", "trigger_branch": "main", "trigger_async": true } ] }' ``` The response contains information about your new pipeline: ```json { "id": "14e9501c-69fe-4cda-ae07-daea9ca3afd3", 
"graphql_id": "UGlwZWxpbmUtLS1lOTM4ZGQxYy03MDgwLTQ4ZmQtOGQyMC0yNmQ4M2E0ZjNkNDg=", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline", "web_url": "https://buildkite.com/acme-inc/my-pipeline", "name": "My Pipeline", "description": null, "slug": "my-pipeline", "repository": "git@github.com:acme-inc/my-pipeline.git", "branch_configuration": null, "default_branch": "main", "provider": { "id": "github", "webhook_url": "https://webhook.buildkite.com/deliver/xxx", "settings": { "publish_commit_status": true, "build_pull_requests": true, "build_pull_request_forks": false, "build_tags": false, "publish_commit_status_per_step": false, "repository": "acme-inc/my-pipeline", "trigger_mode": "code" } }, "skip_queued_branch_builds": false, "skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "builds_url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline/builds", "badge_url": "https://badge.buildkite.com/58b3da999635d0ad2daae5f784e56d264343eb02526f129bfb.svg", "created_at": "2015-03-01 06:44:40 UTC", "archived_at": null, "steps": [ { "type": "script", "name": "Build \:package\:", "command": "script/release.sh", "artifact_paths": null, "branch_configuration": null, "env": {}, "timeout_in_minutes": null, "agent_query_rules": [], "concurrency": null, "parallelism": null }, { "type": "waiter" }, { "type": "script", "name": "Test \:wrench\:", "command": "script/release.sh", "artifact_paths": "log/*", "branch_configuration": null, "env": {}, "timeout_in_minutes": null, "agent_query_rules": [ ], "concurrency": null, "parallelism": null }, { "type": "manual", "label": "Deploy" }, { "type": "script", "name": "Release \:rocket\:", "command": "script/release.sh", "artifact_paths": null, "branch_configuration": "main", "env": { "AMAZON_S3_BUCKET_NAME": "my-pipeline-releases" }, "timeout_in_minutes": 10, "agent_query_rules": [ "aws=true" ], "concurrency": null, "parallelism": null }, { "type": "trigger", "label": "Deploy \:ship\:", "pipeline": "deploy", "build": { "message": null, "branch": "main", "commit": "HEAD", "env": null }, "async": true, "branch_configuration": null, "concurrency": null, "parallelism": null } ], "env": { 
}, "scheduled_builds_count": 0, "running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0, "visibility": "private" } ``` The resulting pipeline: [Image: pipeline-example.png] Required [request body properties](/docs/api#request-body-properties): | `name` | The name of the pipeline._Example:_ `"New Pipeline"` | `cluster_id` | The ID value of the cluster the pipeline will be associated with._Example:_ `"018e5a22-d14c-7085-bb28-db0f83f43a1c"` | `repository` | The repository URL._Example:_ `"git@github.com:acme-inc/my-pipeline.git"` | `steps` | An array of the build pipeline steps._Script:_ `{ "type": "script", "name": "Script", "command": "command.sh" }` _Wait for all previous steps to finish:_ `{ "type": "waiter" }` _Block pipeline (see the [job unblock API](/docs/apis/rest-api/jobs#unblock-a-job)):_ `{ "type": "manual" }` Optional [request body properties](/docs/api#request-body-properties): | `branch_configuration` | A [branch filter pattern](/docs/pipelines/configure/workflows/branch-configuration#pipeline-level-branch-filtering) to limit which pushed branches trigger builds on this pipeline. _Example:_ `"main feature/*"` _Default:_ `null` | `cancel_running_branch_builds` | Cancel intermediate builds. When a new build is created on a branch, any previous builds that are running on the same branch will be automatically canceled. _Example:_ `true` _Default:_ `false` | `cancel_running_branch_builds_filter` | A [branch filter pattern](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples) to limit which branches intermediate build canceling applies to. _Example:_ `"develop prs/*"` _Default:_ `null` | `default_branch` | The name of the branch to prefill when new builds are created or triggered in Buildkite. It is also used to filter the builds and metrics shown on the Pipelines page. _Example:_ `"main"` | `description` | The pipeline description. 
_Example:_ `":package: A testing pipeline"` | `env` | The pipeline environment variables. _Example:_ `{"KEY":"value"}` | `pipeline_template_uuid` | The UUID of the [pipeline template](/docs/apis/rest-api/pipeline-templates) the pipeline should run with. Set to `null` to remove the pipeline template from the pipeline. _Example:_ `"018e5a22-d14c-7085-bb28-db0f83f43a1c"` `provider_settings` | The source provider settings. See the [Provider Settings](#provider-settings-properties) section for accepted properties. _Example:_ `{ "publish_commit_status": true, "build_pull_request_forks": true }` | `skip_queued_branch_builds` | Skip intermediate builds. When a new build is created on a branch, any previous builds that haven't yet started on the same branch will be automatically marked as skipped. _Example:_ `true` _Default:_ `false` | `skip_queued_branch_builds_filter` | A [branch filter pattern](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples) to limit which branches intermediate build skipping applies to. _Example:_ `"!main"` _Default:_ `null` | `slug` | A custom identifier for the pipeline. If provided, this slug will be used as the pipeline's URL path instead of automatically converting the pipeline name. If the value is `null`, the pipeline name will be used to generate the slug. _Example:_ `"my-custom-pipeline-slug"` | `tags` | An array of strings representing [tags](/docs/pipelines/configure/tags) to add to this pipeline. Emojis, using the `:emoji:` string syntax, are also supported. _Example:_`["\:terraform\:", "testing"]` | `teams` | An array of team UUIDs to add this pipeline to. Allows you to specify the access level for the pipeline in a team. The available access level options are: `read_only` `build_and_read` `manage_build_and_read` You can find your team's UUID either using the [GraphQL API](/docs/apis/graphql-api), or on the Settings page for a team. This property is only available if your organization has enabled Teams. 
Once your organization enables Teams, only administrators can create pipelines without providing team UUIDs. Replaces deprecated `team_uuids` parameter. _Example:_ Required scope: `write_pipelines` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation Failed", "errors": [ ... ] }` ##### Update a pipeline Updates one or more properties of an existing pipeline. To update a pipeline's YAML steps, make a PATCH request to the `pipelines` endpoint, passing the `configuration` attribute in the request body: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PATCH "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{slug}" \ -H "Content-Type: application/json" \ -d '{ "repository": "git@github.com:acme-inc/new-repo.git", "configuration": "steps:\n - command: \"new.sh\"\n agents:\n - \"myqueue=true\"", "tags": ["\:terraform\:", "testing"] }' ``` > 🚧 > PATCH requests can only update attributes already present in the pipeline YAML. ```json { "id": "14e9501c-69fe-4cda-ae07-daea9ca3afd3", "graphql_id": "UGlwZWxpbmUtLS1lOTM4ZGQxYy03MDgwLTQ4ZmQtOGQyMC0yNmQ4M2E0ZjNkNDg=", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline", "web_url": "https://buildkite.com/acme-inc/my-pipeline", "name": "My Pipeline", "description": null, "slug": "my-pipeline", "repository": "git@github.com:acme-inc/new-repo.git", "branch_configuration": "main", "default_branch": "main", "provider": { "id": "github", "webhook_url": "https://webhook.buildkite.com/deliver/xxx", "settings": { "publish_commit_status": true, "build_pull_requests": true, "build_pull_request_forks": false, "build_tags": false, "publish_commit_status_per_step": false, "repository": "acme-inc/new-repo", "trigger_mode": "code" } }, "skip_queued_branch_builds": false, "skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "builds_url": 
"https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline/builds", "badge_url": "https://badge.buildkite.com/58b3da999635d0ad2daae5f784e56d264343eb02526f129bfb.svg", "created_at": "2015-03-01 06:44:40 UTC", "archived_at": null, "configuration": "steps:\n - command: \"new.sh\"\n agents:\n - \"myqueue=true\"", "steps": [ { "type": "script", "name": null, "command": "new.sh", "artifact_paths": null, "branch_configuration": null, "env": {}, "timeout_in_minutes": null, "agent_query_rules": [ "myqueue=true" ], "concurrency": null, "parallelism": null } ], "env": { }, "scheduled_builds_count": 0, "running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0, "visibility": "private" } ``` Optional [request body properties](/docs/api#request-body-properties): | `allow_rebuilds` | Enables rebuilding of existing builds. _Example:_ `false` _Default:_ `true` | `branch_configuration` | A [branch filter pattern](/docs/pipelines/configure/workflows/branch-configuration#pipeline-level-branch-filtering) to limit which pushed branches trigger builds on this pipeline._Example:_ `"main feature/*"` _Default:_ `null` | `cancel_running_branch_builds` | Cancel intermediate builds. When a new build is created on a branch, any previous builds that are running on the same branch will be automatically canceled._Example:_ `true` _Default:_ `false` | `cancel_running_branch_builds_filter` | A [branch filter pattern](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples) to limit which branches intermediate build canceling applies to. _Example:_ `"develop prs/*"` _Default:_ `null` | `color` | A color hex code to represent this pipeline. _Example:_ `"#FF5733"` | `cluster_id` | The ID of the [cluster](/docs/pipelines/security/clusters) the pipeline should run in. 
Set to `null` to remove the pipeline from a cluster._Example:_ `"42f1a7da-812d-4430-93d8-1cc7c33a6bcf"` | `configuration` | The YAML pipeline that consists of the build pipeline steps._Example:_ `"steps:\n - command: \"new.sh\"\n agents:\n - \"myqueue=true\""` | `default_branch` | The name of the branch to prefill when new builds are created or triggered in Buildkite. _Example:_ `"main"` | `default_command_step_timeout` | The default timeout in minutes for all command steps in this pipeline. This can still be overridden in any command step. _Example:_ `30` | `description` | The pipeline description. _Example:_ `"\:package\: A testing pipeline"` | `env` | The pipeline environment variables. _Example:_ `{"KEY":"value"}` | `emoji` | An emoji to represent this pipeline. _Example:_ `"\:rocket\:"` (will be rendered as "🚀") | `maximum_command_step_timeout` | The maximum timeout in minutes for all command steps in this pipeline. Any command step without a timeout or with a timeout greater than this value will be capped at this limit. _Example:_ `120` | `name` | The name of the pipeline. If you provide a new name without a `slug` parameter, the slug will be automatically updated to match the new name. _Example:_ `"New Pipeline"` | `pipeline_template_uuid` | The UUID of the [pipeline template](/docs/apis/rest-api/pipeline-templates) the pipeline should run with. Set to `null` to remove the pipeline template from the pipeline._Example:_ `"018e5a22-d14c-7085-bb28-db0f83f43a1c"` | `provider_settings` | The source provider settings. See the [Provider Settings](#provider-settings-properties) section for accepted properties. _Example:_ `{ "publish_commit_status": true, "build_pull_request_forks": true }` | `repository` | The repository URL._Example:_ `"git@github.com:org/repo.git"` | `skip_queued_branch_builds` | Skip intermediate builds. 
When a new build is created on a branch, any previous builds that haven't yet started on the same branch will be automatically marked as skipped._Example:_ `true` _Default:_ `false` | `skip_queued_branch_builds_filter` | A [branch filter pattern](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples) to limit which branches intermediate build skipping applies to. _Example:_ `"!main"` _Default:_ `null` | `slug` | A custom identifier for the pipeline. This slug will be used as the pipeline's URL path. It can only contain alphanumeric characters or dashes and cannot begin with a dash. The slug updates whenever the pipeline name changes. If you don't provide a slug when you update the pipeline name, the slug will be automatically generated from the new pipeline name. _Example:_ `"my-custom-pipeline-slug"` | `tags` | An array of strings representing [tags](/docs/pipelines/configure/tags) to modify on this pipeline. Emojis, using the `:emoji:` string syntax, are also supported. _Example:_`["\:terraform\:", "testing"]` | `teams` | An array of team UUIDs to add this pipeline to. Allows you to specify the access level for the pipeline in a team. The available access level options are: `read_only` `build_and_read` `manage_build_and_read` You can find your team's UUID either using the [GraphQL API](/docs/apis/graphql-api), or on the Settings page for a team. This property is only available if your organization has enabled Teams. Once your organization enables Teams, only administrators can create pipelines without providing team UUIDs. Replaces deprecated `team_uuids` parameter. _Example:_ | `visibility` | Whether the pipeline is visible to everyone, including users outside this organization. _Example:_ `"public"` _Default:_ `"private"` Required scope: `write_pipelines` Success response: `200 OK` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation Failed", "errors": [ ... 
] }` > 🚧 > To update a pipeline's teams, please use the [GraphQL API](/docs/apis/graphql-api). ##### Archive a pipeline Archived pipelines are read-only and hidden from Pipeline pages by default. Builds, build logs, and artifacts are preserved. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{slug}/archive" ``` ```json { "id": "14e9501c-69fe-4cda-ae07-daea9ca3afd3", "graphql_id": "UGlwZWxpbmUtLS1lOTM4ZGQxYy03MDgwLTQ4ZmQtOGQyMC0yNmQ4M2E0ZjNkNDg=", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline", "web_url": "https://buildkite.com/acme-inc/my-pipeline", "name": "My Pipeline", "description": null, "slug": "my-pipeline", "repository": "git@github.com:acme-inc/new-repo.git", "branch_configuration": "main", "default_branch": "main", "provider": { "id": "github", "webhook_url": "https://webhook.buildkite.com/deliver/xxx", "settings": { "publish_commit_status": true, "build_pull_requests": true, "build_pull_request_forks": false, "build_tags": false, "publish_commit_status_per_step": false, "repository": "acme-inc/new-repo", "trigger_mode": "code" } }, "skip_queued_branch_builds": false, "skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "builds_url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline/builds", "badge_url": "https://badge.buildkite.com/58b3da999635d0ad2daae5f784e56d264343eb02526f129bfb.svg", "created_at": "2015-03-01 06:44:40 UTC", "archived_at": "2021-06-01 08:23:35 UTC", "configuration": "steps:\n - command: \"new.sh\"\n agents:\n - \"myqueue=true\"", "steps": [ { "type": "script", "name": null, "command": "new.sh", "artifact_paths": null, "branch_configuration": null, "env": {}, "timeout_in_minutes": null, "agent_query_rules": [ "myqueue=true" ], "concurrency": null, "parallelism": null } ], "env": { }, "scheduled_builds_count": 0, 
"running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0, "visibility": "private" } ``` Required scope: `write_pipelines` Success response: `200 OK` Error responses: | `403 Forbidden` | `{ "message": "Forbidden" }` | `422 Unprocessable Entity` | `{ "message": "Pipeline could not be archived." }` ##### Unarchive a pipeline Unarchived pipelines are editable and shown on the Pipeline pages. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{slug}/unarchive" ``` ```json { "id": "14e9501c-69fe-4cda-ae07-daea9ca3afd3", "graphql_id": "UGlwZWxpbmUtLS1lOTM4ZGQxYy03MDgwLTQ4ZmQtOGQyMC0yNmQ4M2E0ZjNkNDg=", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline", "web_url": "https://buildkite.com/acme-inc/my-pipeline", "name": "My Pipeline", "description": null, "slug": "my-pipeline", "repository": "git@github.com:acme-inc/new-repo.git", "branch_configuration": "main", "default_branch": "main", "provider": { "id": "github", "webhook_url": "https://webhook.buildkite.com/deliver/xxx", "settings": { "publish_commit_status": true, "build_pull_requests": true, "build_pull_request_forks": false, "build_tags": false, "publish_commit_status_per_step": false, "repository": "acme-inc/new-repo", "trigger_mode": "code" } }, "skip_queued_branch_builds": false, "skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "builds_url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline/builds", "badge_url": "https://badge.buildkite.com/58b3da999635d0ad2daae5f784e56d264343eb02526f129bfb.svg", "created_at": "2015-03-01 06:44:40 UTC", "archived_at": null, "configuration": "steps:\n - command: \"new.sh\"\n agents:\n - \"myqueue=true\"", "steps": [ { "type": "script", "name": null, "command": "new.sh", "artifact_paths": null, "branch_configuration": null, "env": 
{}, "timeout_in_minutes": null, "agent_query_rules": [ "myqueue=true" ], "concurrency": null, "parallelism": null } ], "env": { }, "scheduled_builds_count": 0, "running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0, "visibility": "private" } ``` Required scope: `write_pipelines` Success response: `200 OK` Error responses: | `403 Forbidden` | `{ "message": "Forbidden" }` | `422 Unprocessable Entity` | `{ "message": "Pipeline could not be unarchived." }` ##### Delete a pipeline ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{slug}" ``` Required scope: `write_pipelines` Success response: `204 No Content` ##### Add a webhook Creates a GitHub webhook for an existing pipeline that is configured using our GitHub App. Pushes to the linked GitHub repository will trigger builds. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{slug}/webhook" ``` Required scope: `write_pipelines` Success response: `201 Created` Error responses: | `403 Forbidden` | `{ "message": "Forbidden" }` | `422 Unprocessable Entity` | `{ "message": "Auto-creating webhooks is not supported for your repository." }` | `422 Unprocessable Entity` | `{ "message": "Webhooks could not be created for your repository." }` This error might be returned when a webhook has already been created for this pipeline; creating an additional webhook is outside the intended scope of this feature. ##### Provider settings properties The [Create a YAML pipeline](#create-a-yaml-pipeline) and [Update pipeline](#update-a-pipeline) endpoints accept a `provider_settings` property, which allows you to configure how the pipeline is triggered based on source code provider events. Each pipeline provider's supported settings are below. 
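As a concrete sketch of how these settings are sent, the following request enables two GitHub settings on an existing pipeline via the update endpoint. It reuses the `acme-inc/my-pipeline` slugs from the examples above; the `TOKEN` guard is an addition so the snippet can be dry-run without sending anything.

```shell
# Sketch: set provider_settings on an existing pipeline via the update
# endpoint. Org and pipeline slugs reuse the examples above.
PAYLOAD='{
  "provider_settings": {
    "publish_commit_status": true,
    "build_pull_request_forks": true
  }
}'

if [ -n "$TOKEN" ]; then
  # Send the update when an API token is available.
  curl -H "Authorization: Bearer $TOKEN" \
    -X PATCH "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
else
  # Dry run: print the request that would be sent.
  echo "PATCH /v2/organizations/acme-inc/pipelines/my-pipeline"
  echo "$PAYLOAD"
fi
```

Settings that are not listed for the pipeline's provider may be rejected with the `422 Unprocessable Entity` validation error shown above.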
Properties available for all providers: | `filter_enabled` | Whether filter conditions are used for this pipeline. _Values:_ `true`, `false` | `filter_condition` | The conditions under which this pipeline will trigger a build. See the [Using conditionals](/docs/pipelines/configure/conditionals) guide for more information. _Example:_ `"build.pull_request.base_branch =~ /main/"` Bitbucket Cloud, Bitbucket Server, GitHub, and GitHub Enterprise all have optional `provider_settings`. Properties available for Bitbucket Server: | `build_branches` | Whether to create builds when branches are pushed. _Values:_ `true`, `false` | `build_pull_requests` | Whether to create builds for commits that are part of a Pull Request. _Values:_ `true`, `false` | `build_tags` | Whether to create builds when tags are pushed. _Values:_ `true`, `false` Properties available for Bitbucket Cloud, GitHub, and GitHub Enterprise: | `build_branches` | Whether to create builds when branches are pushed. _Values:_ `true`, `false` | `build_pull_requests` | Whether to create builds for commits that are part of a Pull Request. _Values:_ `true`, `false` | `build_tags` | Whether to create builds when tags are pushed. _Values:_ `true`, `false` | `cancel_deleted_branch_builds` | A boolean to enable automatically cancelling any running builds for a branch if the branch is deleted. _Values:_ `true`, `false` | `publish_commit_status` | Whether to update the status of commits in Bitbucket or GitHub. _Values:_ `true`, `false` | `publish_commit_status_per_step` | Whether to create a separate status for each job in a build, allowing you to see the status of each job directly in Bitbucket or GitHub. _Values:_ `true`, `false` | `pull_request_branch_filter_enabled` | Whether to limit the creation of builds to specific branches or patterns. _Values:_ `true`, `false` | `pull_request_branch_filter_configuration` | The branch filtering pattern. 
Only pull requests on branches matching this pattern will cause builds to be created. _Example:_ `"features/*"` | `skip_builds_for_existing_commits` | Whether to skip creating a new build if a build for the commit and branch already exists. _Values:_ `true`, `false` | `skip_pull_request_builds_for_existing_commits` | Whether to skip creating a new build for a pull request if an existing build for the commit and branch already exists. _Values:_ `true`, `false` Additional properties available for GitHub and GitHub Enterprise: | `build_pull_request_base_branch_changed` | Whether to create builds for pull requests when the base branch is changed. _Values:_ `true`, `false` | `build_pull_request_forks` | Whether to create builds for pull requests from third-party forks. _Values:_ `true`, `false` | `build_pull_request_labels_changed` | Whether to create builds for pull requests when labels are added or removed. Requires `build_pull_requests` to be `true`. _Values:_ `true`, `false` | `build_pull_request_ready_for_review` | Whether to create builds for pull requests that are ready for review. Requires `build_pull_requests` to be `true`. _Values:_ `true`, `false` | `build_pull_request_reopened` | Whether to create builds when a pull request is reopened. Requires `build_pull_requests` to be `true`. _Values:_ `true`, `false` | `build_pull_request_edited` | Whether to create builds when a pull request is edited (title, description, or base branch changed). Requires `build_pull_requests` to be `true`. _Values:_ `true`, `false` | `build_pull_request_converted_to_draft` | Whether to create builds when a pull request is converted to draft. Requires `build_pull_requests` to be `true`. _Values:_ `true`, `false` | `build_pull_request_review_requested` | Whether to create builds when a review is requested on a pull request. Requires `build_pull_requests` to be `true`. _Values:_ `true`, `false` | `build_check_run_completed` | Whether to create builds when a check run completes. 
The check runs from Buildkite Pipelines builds are automatically skipped to prevent loops. _Values:_ `true`, `false` | `build_pull_request_review_submitted` | Whether to create builds when a pull request review is submitted. _Values:_ `true`, `false` | `build_pull_request_review_dismissed` | Whether to create builds when a pull request review is dismissed. _Values:_ `true`, `false` | `build_release_published` | Whether to create builds when a GitHub release is published. _Values:_ `true`, `false` | `build_release_created` | Whether to create builds when a GitHub release is created. _Values:_ `true`, `false` | `build_release_released` | Whether to create builds when a GitHub release is released. _Values:_ `true`, `false` | `build_issue_comment_created` | Whether to create builds when a comment is posted on a pull request. Comments must match the configured command word (default: `/bk`) and come from a trusted author (owner, member, or collaborator). _Values:_ `true`, `false` | `issue_comment_command_word` | The command word that a PR comment must match to trigger a build. Only used when `build_issue_comment_created` is `true`. _Default:_ `/bk` | `build_deployment_status_created` | Whether to create builds when a GitHub deployment status is created. Requires the `deployment` trigger mode. Deployment statuses posted by Buildkite are automatically skipped to prevent loops. _Values:_ `true`, `false` | `build_pull_request_review_comment_created` | Whether to create builds when an inline diff comment is posted on a pull request. Comments must match the configured review comment command word and come from a trusted author (owner, member, or collaborator). _Values:_ `true`, `false` | `review_comment_command_word` | The command word that a PR review comment must match to trigger a build. Only used when `build_pull_request_review_comment_created` is `true`. 
_Default:_ `/bk` | `review_comment_match_mode` | How the review comment command word is matched against the comment body. `exact` requires the entire whitespace-trimmed comment to equal the command word; `contains` matches if the command word appears anywhere in the comment. Both modes are case-insensitive. _Values:_ `exact`, `contains`. _Default:_ `exact` | `issue_comment_match_mode` | How the issue comment command word is matched against the comment body. `exact` requires the entire whitespace-trimmed comment to equal the command word; `contains` matches if the command word appears anywhere in the comment. Both modes are case-insensitive. _Values:_ `exact`, `contains`. _Default:_ `exact` | `build_pull_request_dequeued` | Whether to create builds when a pull request is removed from a merge queue. Requires `build_pull_requests` to be `true`. _Values:_ `true`, `false` | `build_create_event` | Whether to create builds when a branch or tag is created. _Values:_ `true`, `false` | `cancel_deleted_branch_builds` | Whether to cancel running builds when a branch is deleted. _Values:_ `true`, `false` | `build_merge_group_checks_requested` | Whether to create merge queue builds for merge-queue-enabled GitHub repositories with required status checks. _Values:_ `true`, `false` | `cancel_when_merge_group_destroyed` | Whether to cancel any running builds belonging to a removed merge group. _Values:_ `true`, `false` | `use_merge_group_base_commit_for_git_diff_base` | When enabled, agents performing a git diff to determine steps to upload based on [if_changed](/docs/pipelines/configure/step-types/command-step#agent-applied-attributes) comparisons will use the base commit that points to the previous merge group rather than the base branch. _Values:_ `true`, `false` | `prefix_pull_request_fork_branch_names` | Prefix branch names for third-party fork builds to ensure they don't trigger branch conditions. For example, the `main` branch from `some-user` will become `some-user:main`. 
_Values:_ `true`, `false` | `publish_blocked_as_pending` | The status to use for blocked builds. `Pending` can be used with [required status checks](https://help.github.com/en/articles/enabling-required-status-checks) to prevent merging pull requests with blocked builds. _Values:_ `true`, `false` | `separate_pull_request_statuses` | Whether to create a separate status for pull request builds, allowing you to require a passing pull request build in your [required status checks](https://help.github.com/en/articles/enabling-required-status-checks) in GitHub. _Values:_ `true`, `false` | `trigger_mode` | What type of event to trigger builds on. `code` creates builds when code is pushed to GitHub. `deployment` creates builds when a deployment is created with the [GitHub Deployments API](https://developer.github.com/v3/repos/deployments/). `fork` creates builds when the GitHub repository is forked. `none` will not create any builds based on GitHub activity. _Values:_ `code`, `deployment`, `fork`, `none` --- ### Agents URL: https://buildkite.com/docs/apis/rest-api/agents #### Agents API The Buildkite agents are small, reliable cross-platform build runners. Their main responsibilities are polling buildkite.com for work, running build jobs, reporting back the status code and output log of the job, and uploading the job's artifacts. 
##### Agent data model | `id` | UUID of the agent | `graphql_id` | [GraphQL ID](/docs/apis/graphql-api#graphql-ids) of the agent | `url` | Canonical API URL of the agent | `web_url` | URL of the agent on Buildkite | `name` | Name of the agent | `connection_state` | Connection state: `connected`, `disconnected`, `stopping`, or `stopped` | `hostname` | Hostname of the machine running the agent | `ip_address` | IP address of the agent | `user_agent` | User agent string identifying the agent version and platform | `version` | Version of the Buildkite agent | `creator` | User or token that registered the agent | `created_at` | When the agent was registered | `job` | Current job the agent is running (if any) | `last_job_finished_at` | When the agent last finished a job | `priority` | Priority value for job assignment | `meta_data` | Array of agent tags in `key=value` format ##### List agents Returns a [paginated list](/docs/apis/rest-api#pagination) of an organization's agents. The list only includes connected agents; agents in a disconnected state are not returned. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/agents" ``` ```json [ { "id": "0b461f65-e7be-4c80-888a-ef11d81fd971", "graphql_id": "QWdlbnQtLS1mOTBhNzliNC01YjJlLTQzNzEtYjYxZS03OTA4ZDAyNmUyN2E=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/agents/my-agent", "web_url": "https://buildkite.com/organizations/my-great-org/clusters/78088c9a-6e72-4896-848d-e6f479f50c24/queues/c109939f-3b71-4cd3-b175-8eb79d2eb38e/agents/0b461f65-e7be-4c80-888a-ef11d81fd971", "name": "my-agent", "connection_state": "connected", "hostname": "some.server", "ip_address": "144.132.19.12", "user_agent": "buildkite-agent/2.1.0 (linux; amd64)", "version": "2.1.0", "creator": { "id": "2eba97bc-7cc7-427f-8feb-1008c72aa1d8", "name": "Keith Pitt", "email": "me@keithpitt.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-09T21:05:59.874Z" }, "created_at": "2014-02-24T22:33:45.263Z", "job": { "id": "cd164055-9649-452b-8d8e-28fe67370a1e", "graphql_id": "Sm9iLS0tMTQ4YWQ0MzgtM2E2My00YWIxLWIzMjItNzIxM2Y3YzJhMWFi", "type": "script", "name": "rspec", "agent_query_rules": ["*"], "state": "passed", "build_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/sleeper/builds/50", "web_url": "https://buildkite.com/my-great-org/sleeper/builds/50#cd164055-9649-452b-8d8e-28fe67370a1e", "log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/sleeper/builds/50/jobs/cd164055-9649-452b-8d8e-28fe67370a1e/log", "raw_log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/sleeper/builds/50/jobs/cd164055-9649-452b-8d8e-28fe67370a1e/log.txt", "artifacts_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/sleeper/builds/50/jobs/cd164055-9649-452b-8d8e-28fe67370a1e/artifacts", "script_path": "sleep 1", "command": "sleep 1", "soft_failed": false, "exit_status": 0, "artifact_paths": "*", "agent": null, 
"created_at": "2015-07-30T12:58:22.942Z", "scheduled_at": "2015-07-30T12:58:22.935Z", "started_at": "2015-07-30T12:58:34.000Z", "finished_at": "2015-07-30T12:58:37.000Z" }, "last_job_finished_at": null, "priority": null, "meta_data": ["key1=val1","key2=val2"] } ] ``` Optional [query string parameters](/docs/api#query-string-parameters): | `name` | Filters the results by the given agent name. _Example:_ `?name=ci-agent-1` | `hostname` | Filters the results by the given hostname. _Example:_ `?hostname=ci-box-1` | `version` | Filters the results by the given exact version number. _Example:_ `?version=2.1.0` Required scope: `read_agents` Success response: `200 OK` ##### Get an agent Returns the details for a single agent, looked up by unique ID. Any valid agent can be returned, including connected and disconnected agents. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/agents/{id}" ``` ```json { "id": "0b461f65-e7be-4c80-888a-ef11d81fd971", "graphql_id": "QWdlbnQtLS1mOTBhNzliNC01YjJlLTQzNzEtYjYxZS03OTA4ZDAyNmUyN2E=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/agents/my-agent", "web_url": "https://buildkite.com/organizations/my-great-org/clusters/78088c9a-6e72-4896-848d-e6f479f50c24/queues/c109939f-3b71-4cd3-b175-8eb79d2eb38e/agents/0b461f65-e7be-4c80-888a-ef11d81fd971", "name": "my-agent", "connection_state": "connected", "hostname": "some.server", "ip_address": "144.132.19.12", "user_agent": "buildkite-agent/2.1.0 (linux; amd64)", "version": "2.1.0", "creator": { "id": "2eba97bc-7cc7-427f-8feb-1008c72aa1d8", "graphql_id": "VXNlci0tLThmNzFlOWI1LTczMDEtNDI4ZS1hMjQ1LWUwOWI0YzI0OWRiZg==", "name": "Keith Pitt", "email": "me@keithpitt.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-09T21:05:59.874Z" }, "created_at": "2015-05-09T21:05:59.874Z", "job": { "id": "cd164055-9649-452b-8d8e-28fe67370a1e", "graphql_id":
"Sm9iLS0tZGM5YTg5MmQtM2I5Ny00MzgyLWEzYzItNWJhZmU5M2RlZWI1", "type": "script", "name": "rspec", "agent_query_rules": ["*"], "state": "passed", "build_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/sleeper/builds/50", "web_url": "https://buildkite.com/my-great-org/sleeper/builds/50#cd164055-9649-452b-8d8e-28fe67370a1e", "log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/sleeper/builds/50/jobs/cd164055-9649-452b-8d8e-28fe67370a1e/log", "raw_log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/sleeper/builds/50/jobs/cd164055-9649-452b-8d8e-28fe67370a1e/log.txt", "artifacts_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/sleeper/builds/50/jobs/cd164055-9649-452b-8d8e-28fe67370a1e/artifacts", "script_path": "sleep 1", "command": "sleep 1", "soft_failed": false, "exit_status": 0, "artifact_paths": "*", "agent": null, "created_at": "2015-07-30T12:58:22.942Z", "scheduled_at": "2015-07-30T12:58:22.935Z", "started_at": "2015-07-30T12:58:34.000Z", "finished_at": "2015-07-30T12:58:37.000Z" }, "last_job_finished_at": null, "priority": null, "meta_data": ["key1=val1","key2=val2"] } ``` Required scope: `read_agents` Success response: `200 OK` ##### Stop an agent > 📘 Required permissions > To stop an agent you need either: > - An Admin user API token with `write_agents` [scope](/docs/apis/managing-api-tokens#token-scopes). > - Or, if you're using the Buildkite organization's [security for pipelines](/docs/pipelines/security/permissions#manage-organization-security-for-pipelines) feature, a user token with the **Stop Agents** permission. Instruct an agent to stop accepting new build jobs and shut itself down. 
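As a sketch, the same stop call can be prepared with Python's standard library alone. The org slug, agent UUID, and token below are placeholders, and the request is built but not sent:

```python
import json
import urllib.request

def stop_agent_request(org, agent_id, token, force=False):
    """Build (but don't send) the PUT request that stops an agent."""
    url = f"https://api.buildkite.com/v2/organizations/{org}/agents/{agent_id}/stop"
    body = json.dumps({"force": force}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="PUT",
        headers={
            "Authorization": f"Bearer {token}",  # token is a placeholder
            "Content-Type": "application/json",
        },
    )

req = stop_agent_request("my-great-org", "0b461f65-e7be-4c80-888a-ef11d81fd971", "xxx")
```

Passing `req` to `urllib.request.urlopen` would perform the call; a `204 No Content` response indicates success.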
```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/agents/{id}/stop" \ -H "Content-Type: application/json" \ -d '{ "force": true }' ``` Optional [request body properties](/docs/api#request-body-properties): | `force` | If the agent is currently processing a job, the job and the build will be canceled. _Default:_ `true` Required scope: `write_agents` Success response: `204 No Content` Error responses: | `400 Bad Request` | `{ "message": "Can only stop connected agents" }` ##### Pause an agent > 📘 Required permissions > To pause an agent you need either: > - An Admin user API token with `write_agents` [scope](/docs/apis/managing-api-tokens#token-scopes). > - Or, if you're using the Buildkite organization's [security for pipelines](/docs/pipelines/security/permissions#manage-organization-security-for-pipelines) feature, a user token with the **Stop Agents** permission. Prevent dispatching jobs to an agent, and instruct the agent (which would otherwise exit when its current job completes or times out) to remain running after finishing that job. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/agents/{id}/pause" \ -H "Content-Type: application/json" \ -d '{ "note": "A short note explaining why this agent is being paused", "timeout_in_minutes": 60 }' ``` Required scope: `write_agents` Success response: `204 No Content` Error responses: | `404 Not Found` | `{ "message": "No agent found" }` | `422 Unprocessable Entity` | `{ "message": "Agent is already paused" }` | `422 Unprocessable Entity` | `{ "message": "Only connected agents may be paused" }` ##### Resume an agent > 📘 Required permissions > To resume a paused agent you need either: > - An Admin user API token with `write_agents` [scope](/docs/apis/managing-api-tokens#token-scopes).
> - Or, if you're using the Buildkite organization's [security for pipelines](/docs/pipelines/security/permissions#manage-organization-security-for-pipelines) feature, a user token with the **Stop Agents** permission. Resume dispatching jobs to an agent, and instruct the agent to resume normal operation. ```bash curl -H "Authorization: Bearer ${TOKEN}" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/agents/{id}/resume" \ -H "Content-Type: application/json" \ -d '{}' ``` Required scope: `write_agents` Success response: `204 No Content` Error responses: | `404 Not Found` | `{ "message": "No agent found" }` | `422 Unprocessable Entity` | `{ "message": "Agent is not paused" }` ##### Agent tokens Agent tokens are created through the [clusters REST API endpoint](/docs/apis/rest-api/clusters/agent-tokens). --- ### Annotations URL: https://buildkite.com/docs/apis/rest-api/annotations #### Annotations API An annotation is a snippet of Markdown uploaded by your agent during the execution of a build's job. Annotations are created using the [`buildkite-agent annotate` command](/docs/agent/cli/reference/annotate) from within a job. ##### Annotation data model | `id` | ID of the annotation | `context` | The "context" specified when annotating the build. Only one annotation per build may have any given context value. | `style` | The style of the annotation. Can be `success`, `info`, `warning`, or `error`. | `scope` | The scope of the annotation. Either `build` or `job`. | `priority` | The priority of the annotation (`1` to `10`). Higher values are shown first. Default is `3`. | `body_html` | Rendered HTML of the annotation's body | `created_at` | When the annotation was first created | `updated_at` | When the annotation was last added to or replaced ##### List annotations for a build Returns a [paginated list](/docs/apis/rest-api#pagination) of a build's annotations. 
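Annotation list endpoints are paginated like the other list endpoints here. Assuming the standard `Link` response header described in the pagination docs linked above, a small Python sketch for pulling out the next page's URL:

```python
import re

def next_page_url(link_header):
    """Extract the rel="next" URL from a Link response header, if present."""
    if not link_header:
        return None
    for part in link_header.split(","):
        match = re.search(r'<([^>]+)>;\s*rel="next"', part)
        if match:
            return match.group(1)
    return None

# Sample header shape (URLs are illustrative placeholders).
header = (
    '<https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/annotations?page=2>; rel="next", '
    '<https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/annotations?page=5>; rel="last"'
)
print(next_page_url(header))
```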
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/annotations" ``` > 📘 > Note that this URL requires using a _build number_ (for example, `3`), not a _build ID_ (for example, `01908131-7d9f-495e-a17b-80ed31276810`). > Learn more about the difference between these concepts in [Build number vs build ID](/docs/apis/rest-api/builds#build-number-vs-build-id). ```json [ { "id": "de0d4ab5-6360-467a-a34b-e5ef5db5320d", "context": "default", "style": "info", "scope": "build", "priority": 3, "body_html": "# My Markdown Heading\n", "created_at": "2019-04-09T18:07:15.775Z", "updated_at": "2019-08-06T20:58:49.396Z" }, { "id": "5b3ceff6-78cb-4fe9-88ae-51be5f145977", "context": "coverage", "style": "info", "scope": "build", "priority": 3, "body_html": "Read the uploaded coverage report", "created_at": "2019-04-09T18:07:16.320Z", "updated_at": "2019-04-09T18:07:16.320Z" } ] ``` Required scope: `read_builds` Success response: `200 OK` ##### Create an annotation on a build Creates an annotation on a build. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/annotations" \ -H "Content-Type: application/json" \ -d '{ "body": "Hello world!", "style": "info", "priority": 5, "context": "greeting" }' ``` > 📘 > Note that this URL requires using a _build number_ (for example, `3`), not a _build ID_ (for example, `01908131-7d9f-495e-a17b-80ed31276810`). > Learn more about the difference between these concepts in [Build number vs build ID](/docs/apis/rest-api/builds#build-number-vs-build-id). ```json { "id": "018b8d10-6b5b-4df2-b0ff-dfa2af566050", "context": "greeting", "style": "info", "scope": "build", "priority": 5, "body_html": "Hello world! 
\n", "created_at": "2023-11-01T22:45:45.435Z", "updated_at": "2023-11-01T22:45:45.435Z" } ``` Required [request body properties](/docs/api#request-body-properties): | `body` | The annotation's body, as [HTML or Markdown](/docs/pipelines/configure/annotations#formatting-annotations-supported-markdown-syntax). _Example:_ `"My annotation here"` Optional [request body properties](/docs/api#request-body-properties): | `style` | The style of the annotation. Can be `success`, `info`, `warning`, or `error`. _Example:_ `"info"` | `priority` | The priority of the annotation (`1` to `10`). Annotations with a priority of `10` are shown first, while annotations with a priority of `1` are shown last. When this option is not specified, annotations have a default priority of `3`. _Example:_ `5` | `context` | A string value by which to identify the annotation on the build. This is useful when appending to an existing annotation. Only one annotation per build may have any given context value. _Example:_ `"coverage"` | `append` | Whether to append the given `body` onto the annotation with the same context. _Example:_ `true` Required scope: `write_builds` Success response: `201 Created` ##### Delete an annotation on a build Deletes an annotation on a build. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/annotations/{annotation.uuid}" ``` Required scope: `write_builds` Success response: `204 No Content` ##### List annotations for a job Returns a [paginated list](/docs/apis/rest-api#pagination) of a job's annotations. 
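Job-scoped annotations follow the same `context`/`append` semantics as build-scoped ones: one annotation per context, with `append` controlling whether a new `body` extends or replaces the existing one. A Python sketch that mimics this behavior locally, purely to illustrate the semantics (the helper is hypothetical, not an API call):

```python
def upsert_annotation(annotations, context, body, append=False):
    """Mimic create-annotation semantics: one annotation per context;
    append=True adds to the existing body, otherwise the body replaces it."""
    existing = annotations.get(context)
    if existing is not None and append:
        annotations[context] = existing + body
    else:
        annotations[context] = body
    return annotations

anns = {}
upsert_annotation(anns, "coverage", "Line coverage: 81%\n")
upsert_annotation(anns, "coverage", "Branch coverage: 74%\n", append=True)
print(anns["coverage"])
```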
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/annotations" ``` ```json [ { "id": "a7c5b1d2-4f3e-4a1b-9c8d-6e2f1a3b4c5d", "context": "test-results", "style": "success", "scope": "job", "priority": 3, "body_html": "All 42 tests passed \n", "created_at": "2024-01-15T10:30:00.000Z", "updated_at": "2024-01-15T10:30:00.000Z" } ] ``` Required scope: `read_builds` Success response: `200 OK` ##### Create an annotation on a job Creates an annotation scoped to a specific job in a build. Job-scoped annotations use the same parameters as build-scoped annotations. However, the `scope` is automatically set to `job`. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/annotations" \ -H "Content-Type: application/json" \ -d '{ "body": "Test results: 42 passed", "style": "success", "context": "test-results" }' ``` ```json { "id": "a7c5b1d2-4f3e-4a1b-9c8d-6e2f1a3b4c5d", "context": "test-results", "style": "success", "scope": "job", "priority": 3, "body_html": "Test results: 42 passed \n", "created_at": "2024-01-15T10:30:00.000Z", "updated_at": "2024-01-15T10:30:00.000Z" } ``` Required [request body properties](/docs/api#request-body-properties): | `body` | The annotation's body, as [HTML or Markdown](/docs/pipelines/configure/annotations#formatting-annotations-supported-markdown-syntax). _Example:_ `"My annotation here"` Optional [request body properties](/docs/api#request-body-properties): | `style` | The style of the annotation. Can be `success`, `info`, `warning`, or `error`. _Example:_ `"info"` | `priority` | The priority of the annotation (`1` to `10`). Annotations with a priority of `10` are shown first, while annotations with a priority of `1` are shown last. 
When this option is not specified, annotations have a default priority of `3`. _Example:_ `5` | `context` | A string value by which to identify the annotation on the job. This is useful when appending to an existing annotation. Only one annotation per job may have any given context value. _Example:_ `"coverage"` | `append` | Whether to append the given `body` onto the annotation with the same context. _Example:_ `true` Required scope: `write_builds` Success response: `201 Created` ##### Delete an annotation on a job Deletes an annotation on a job. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/annotations/{annotation.uuid}" ``` Required scope: `write_builds` Success response: `204 No Content` --- ### Artifacts URL: https://buildkite.com/docs/apis/rest-api/artifacts #### Artifacts API An artifact is a file uploaded by your agent during the execution of a build's job. The contents of the artifact can be retrieved using the `download_url` and the [artifact download API](#download-an-artifact). ##### Artifact data model | `id` | ID of the artifact | `job_id` | ID of the job | `url` | Canonical API URL of the artifact | `download_url` | Artifact Download API URL for the artifact | `state` | State of the artifact (`new`, `error`, `finished`, `deleted`, `expired`) | `path` | Path of the artifact | `dirname` | Path of the artifact excluding the filename | `filename` | Filename of the artifact | `mime_type` | Mime type of the artifact | `file_size` | File size of the artifact in bytes | `sha1sum` | SHA-1 hash of artifact contents as calculated by the agent > 🚧 Deprecated fields > Artifacts previously included `glob_path` and `original_path` but these were [deprecated](https://buildkite.com/changelog/71-artifacts-glob-path-and-original-path-fields-are-deprecated) and now return `null`. 
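As the artifact data model above suggests, `dirname` and `filename` are simply the two halves of `path` split at the last separator. A quick Python illustration using sample values in the style of the responses below:

```python
import posixpath

def split_artifact_path(path):
    """Split an artifact path into (dirname, filename), as the API reports them."""
    return posixpath.dirname(path), posixpath.basename(path)

print(split_artifact_path("dist/app.tar.gz"))           # -> ('dist', 'app.tar.gz')
print(split_artifact_path("tmp/screenshots/shot.png"))  # -> ('tmp/screenshots', 'shot.png')
```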
##### List artifacts for a build Returns a [paginated list](/docs/apis/rest-api#pagination) of a build's artifacts across all of its jobs. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/artifacts" ``` > 📘 > Note that this URL requires using a _build number_ (for example, `3`), not a _build ID_ (for example, `01908131-7d9f-495e-a17b-80ed31276810`). > Learn more about the difference between these concepts in [Build number vs build ID](/docs/apis/rest-api/builds#build-number-vs-build-id). ```json [ { "id": "76365070-34d5-4104-8b91-952780f8029f", "job_id": "aae578fe-994c-44e6-84da-4102616928ba", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/aae578fe-994c-44e6-84da-4102616928ba/artifacts/76365070-34d5-4104-8b91-952780f8029f", "download_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/aae578fe-994c-44e6-84da-4102616928ba/artifacts/76365070-34d5-4104-8b91-952780f8029f/download", "state": "finished", "path": "dist/app.tar.gz", "dirname": "dist", "filename": "app.tar.gz", "mime_type": "application/x-gzip", "file_size": 529371, "glob_path": null, "original_path": null, "sha1sum": "884c4ad3f2545c85c69d0d0ef50c5d4f5266f0b7" }, { "id": "89f4ce5c-6e1d-482c-9ca6-88c050291c77", "job_id": "ea3cfae9-a565-4353-8a5e-16436c164e43", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/ea3cfae9-a565-4353-8a5e-16436c164e43/artifacts/5c12c7f7-8fb1-419d-b979-48a9e45c7bd7", "download_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/ea3cfae9-a565-4353-8a5e-16436c164e43/artifacts/5c12c7f7-8fb1-419d-b979-48a9e45c7bd7/download", "state": "new", "path": "tmp/screenshots/155b0d82-4d8e-4b07-9fea-49b58c1c6f1b.png", "dirname": "tmp/screenshots", "filename": 
"155b0d82-4d8e-4b07-9fea-49b58c1c6f1b.png", "mime_type": "image/png", "file_size": 1521347, "glob_path": null, "original_path": null, "sha1sum": "7a788f56fa49ae0ba5ebde780efe4d6a89b5db47" } ] ``` Required scope: `read_artifacts` Success response: `200 OK` ##### List artifacts for a job Returns a [paginated list](/docs/apis/rest-api#pagination) of a job's artifacts. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/artifacts" ``` > 📘 > Note that this URL requires using a _build number_ (for example, `3`), not a _build ID_ (for example, `01908131-7d9f-495e-a17b-80ed31276810`). > Learn more about the difference between these concepts in [Build number vs build ID](/docs/apis/rest-api/builds#build-number-vs-build-id). ```json [ { "id": "76365070-34d5-4104-8b91-952780f8029f", "job_id": "aae578fe-994c-44e6-84da-4102616928ba", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/aae578fe-994c-44e6-84da-4102616928ba/artifacts/76365070-34d5-4104-8b91-952780f8029f", "download_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/aae578fe-994c-44e6-84da-4102616928ba/artifacts/76365070-34d5-4104-8b91-952780f8029f/download", "state": "finished", "path": "dist/app.tar.gz", "dirname": "dist", "filename": "app.tar.gz", "mime_type": "application/x-gzip", "file_size": 529371, "glob_path": null, "original_path": null, "sha1sum": "884c4ad3f2545c85c69d0d0ef50c5d4f5266f0b7" } ] ``` Required scope: `read_artifacts` Success response: `200 OK` ##### Get an artifact ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/artifacts/{id}" ``` > 📘 > Note that this URL requires using a _build number_ (for example, `3`), not a _build ID_ (for example, 
`01908131-7d9f-495e-a17b-80ed31276810`). > Learn more about the difference between these concepts in [Build number vs build ID](/docs/apis/rest-api/builds#build-number-vs-build-id). ```json { "id": "76365070-34d5-4104-8b91-952780f8029f", "job_id": "aae578fe-994c-44e6-84da-4102616928ba", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/aae578fe-994c-44e6-84da-4102616928ba/artifacts/76365070-34d5-4104-8b91-952780f8029f", "download_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/aae578fe-994c-44e6-84da-4102616928ba/artifacts/76365070-34d5-4104-8b91-952780f8029f/download", "state": "finished", "path": "dist/app.tar.gz", "dirname": "dist", "filename": "app.tar.gz", "mime_type": "application/x-gzip", "file_size": 529371, "glob_path": null, "original_path": null, "sha1sum": "884c4ad3f2545c85c69d0d0ef50c5d4f5266f0b7" } ``` Required scope: `read_artifacts` Success response: `200 OK` ##### Download an artifact Returns a 302 response to a URL for downloading an artifact. The URL will be returned in the response body and the `Location` HTTP header. You should assume the URL returned will only be valid for 60 seconds, unless you've used your own S3 bucket where the URL will be the standard public S3 URL to the artifact object. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/artifacts/{id}/download" ``` > 📘 > Note that this URL requires using a _build number_ (for example, `3`), not a _build ID_ (for example, `01908131-7d9f-495e-a17b-80ed31276810`). > Learn more about the difference between these concepts in [Build number vs build ID](/docs/apis/rest-api/builds#build-number-vs-build-id). 
```json { "url": "https://buildkiteartifacts.com/artifacts/2196c80a1ff393a88482aebe929f9648/dist/app.tar.gz?AWSAccessKeyId=AKIAIPPJ2IPWN5U3O1OA&Expires=1288526454&Signature=5i4%2B99rUwhpP2SbNsJKhT/nSzsQ" } ``` Required scope: `read_artifacts` Success response: `302 Found` ##### Delete an artifact The artifact record is marked as deleted in the Buildkite database, and the artifact itself is removed from the Buildkite AWS S3 bucket. It will no longer be displayed in the job or build artifact lists, and it will not be returned by the artifact APIs. If the artifact was uploaded using the agent's custom [AWS S3](/docs/agent/cli/reference/artifact#using-your-private-aws-s3-bucket), [Google Cloud](/docs/agent/cli/reference/artifact#using-your-private-google-cloud-bucket), or [Artifactory](/docs/pipelines/integrations/artifacts-and-packages/artifactory) storage support, the file will not be automatically deleted from that storage. You must delete the file from your private storage yourself. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/artifacts/{id}" ``` Required scope: `write_artifacts` Success response: `204 No Content` --- ### Builds URL: https://buildkite.com/docs/apis/rest-api/builds #### Builds API A build is a single run of a pipeline. You can trigger a build in various ways, including through the dashboard or API, as the result of a webhook, on a schedule, or from another pipeline using a trigger step. ##### Build number vs build ID All builds have a _build ID_ (for example, `01908131-7d9f-495e-a17b-80ed31276810`), which is a unique value throughout the entire Buildkite platform, as well as a _build number_ (for example, `27`).
A build number is unique to a pipeline, and its value is incremented with each build, although there may be occasional gaps. Note that some API request types on this page, especially those involving only a single build, require using a build number rather than a build ID. ##### Build data model | `id` | UUID of the build | `graphql_id` | [GraphQL ID](/docs/apis/graphql-api#graphql-ids) of the build | `url` | Canonical API URL of the build | `web_url` | URL of the build on Buildkite | `number` | Build number within the pipeline (unique per pipeline, may have gaps) | `state` | Current state of the build: `scheduled`, `running`, `passed`, `failed`, `blocked`, `canceled`, `canceling`, `skipped`, `not_run`, `waiting`, or `waiting_failed` | `blocked` | Whether the build is blocked waiting on a block step (`true`, `false`) | `cancel_reason` | Reason provided when the build was canceled (if applicable) | `message` | Commit message or custom message for the build | `commit` | Git commit SHA being built | `branch` | Git branch being built | `env` | Environment variables passed to the build | `source` | How the build was triggered: `webhook`, `api`, `ui`, `trigger_job`, or `schedule` | `creator` | User who created the build | `jobs` | Array of [Job](#job-data-model) objects in the build | `created_at` | When the build was created | `scheduled_at` | When the build was scheduled | `started_at` | When the build's first job was started by an agent | `finished_at` | When the build finished (passed, failed, canceled) | `meta_data` | Key-value metadata associated with the build | `pull_request` | Pull request information if applicable | `rebuilt_from` | Build this was rebuilt from (if applicable) | `pipeline` | Pipeline the build belongs to ##### Job data model Jobs are the individual units of work within a build. 
| `id` | UUID of the job | `graphql_id` | [GraphQL ID](/docs/apis/graphql-api#graphql-ids) of the job | `type` | Type of job: `script`, `waiter`, `manual`, or `trigger` | `name` | Display name of the job (may include emoji) | `step_key` | Key identifier for the step if specified in the pipeline | `step` | Step information including signature details | `priority` | Priority of the job | `agent_query_rules` | Agent query rules used to route this job | `state` | Current state: `pending`, `waiting`, `waiting_failed`, `blocked`, `blocked_failed`, `unblocked`, `unblocked_failed`, `scheduled`, `assigned`, `accepted`, `running`, `passed`, `failed`, `timed_out`, `timing_out`, `canceled`, `canceling`, `skipped`, `broken`, `expired`, or `limited` | `build_url` | URL of the build on Buildkite | `web_url` | URL of the job on Buildkite | `log_url` | API URL for the job's log | `raw_log_url` | API URL for the job's raw log text | `artifact_url` | API URL for the artifacts associated with the job | `command` | Command executed by the job | `soft_failed` | Whether the job soft-failed (`true`, `false`) | `exit_status` | Exit code of the command (integer) | `artifact_paths` | Glob patterns for artifact upload | `agent` | Agent that ran the job (if assigned) | `created_at` | When the job was added to the build | `scheduled_at` | When the job was scheduled for execution | `runnable_at` | When the job became ready to be accepted by an agent | `started_at` | When the job was started by an agent | `finished_at` | When the job finished | `retried` | Whether this job was retried (`true`, `false`) | `retried_in_job_id` | UUID of the retry job (if retried) | `retries_count` | Number of retries for this job | `retry_source` | Source of the retry job | `retry_type` | Type of retry if applicable | `parallel_group_index` | Index within a parallel group (if parallel job) | `parallel_group_total` | Total jobs in the parallel group (if parallel job) | `matrix` | Matrix configuration values (if 
matrix job) | `cluster_id` | UUID of the cluster (if using clusters) | `cluster_queue_id` | UUID of the cluster queue (if using clusters) | `async` | For `trigger` jobs; defines whether the triggered build runs asynchronously (`true`, `false`) | `triggered_build` | For `trigger` jobs, an object with details of the build that was triggered, containing `id`, `number`, `url`, and `web_url`. Returns `null` if the build has not yet been created. ##### Timestamp attributes There are several different timestamps relating to timing for builds and jobs. There are four main time values which are available on both build and job API calls. The timestamps are available using both the GraphQL and REST APIs. They differ slightly between the build and job objects. Each _build_ is provided with the following timestamps: | `scheduled_at` | The time the build was created. All builds from a `pipeline upload` have a `scheduled_at` copied from the job that did the uploading | `created_at` | The time the build was created. For uploaded pipelines it is when the `pipeline upload` was run. | `started_at` | The time the build's first job was started by an agent | `finished_at` | The time the build is marked as finished (passed, failed, paused, canceled) Each _job_ is provided with the same timestamps, but their values differ from those on each build: | `scheduled_at` | The time when the scheduler process processes the job. If a job was created after the build, the job's `scheduled_at` value will inherit the build's `created_at` value, because of this it can be earlier than the job's `created_at` timestamp. 
| `created_at` | The time when the job was added to the build | `runnable_at` | The time when a job was ready to be accepted by an agent | `started_at` | The time the job was started by an agent | `finished_at` | The time the job is marked as finished (passed, failed, paused, canceled) ##### List all builds Returns a [paginated list](/docs/apis/rest-api#pagination) of all builds across all the user's organizations and pipelines. If using token-based authentication the list of builds will be for the authorized organizations only. Builds are listed in the order they were created (newest first). ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/builds" ``` Optional [query string parameters](/docs/api#query-string-parameters): | `branch` | Filters the results by the given branch or branches. _Example:_ `?branch=main` returns all builds on the `main` branch _Example:_ `?branch[]=main&branch[]=testing` returns all builds on `main` and `testing` branches _Example:_ `?branch=*dev*` returns all builds on branches with names containing `dev` | `commit` | Filters the results by the commit (only works for full SHA, not for shortened ones). _Example:_ `?commit=long-hash` You can query for multiple commits using Rails array syntax:_Example:_ `?commit[]=sha2&commit[]=sha1` | `created_from` | Filters the results by builds created on or after the given time (in ISO 8601 format) _Example:_ `?created_from=2025-01-08T23:22:05Z` | `created_to` | Filters the results by builds created before the given time (in ISO 8601 format) _Example:_ `?created_to=2025-02-13T23:22:05Z` | `creator` | Filters the results by the user who created the build _Example:_ `?creator=5acb99cf-d349-4189-b361-d1b9f36d70d7` | `finished_from` | Filters the results by builds finished on or after the given time (in ISO 8601 format) _Example:_ `?finished_from=2025-01-11T23:22:05Z` | `include_retried_jobs` | Include all retried job executions in each build's jobs list. 
Without this parameter, you'll see only the most recently run job for each step. _Example:_ `?include_retried_jobs=true` | `meta_data` | Filters the results by the given meta-data. _Example:_ `?meta_data[some-key]=some-value` | `state` | Filters the results by the given [build state](/docs/pipelines/configure/notify#build-states). The `finished` state is a shortcut to automatically search for builds with `passed`, `failed`, `blocked`, `canceled` states. _Valid states:_ `creating`, `scheduled`, `running`, `passed`, `failing`, `failed`, `blocked`, `canceling`, `canceled`, `skipped`, `not_run`, `finished` _Example:_ `?state=passed` returns all `passed` builds _Example:_ `?state[]=scheduled&state[]=running` returns all `scheduled` and `running` builds Required scope: `read_builds` Success response: `200 OK` ##### List builds for an organization Returns a [paginated list](/docs/apis/rest-api#pagination) of an organization's builds across all of an organization's pipelines. Builds are listed in the order they were created (newest first). ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/builds" ``` Optional [query string parameters](/docs/api#query-string-parameters): | `branch` | Filters the results by the given branch or branches. _Example:_ `?branch=main` returns all builds on the `main` branch _Example:_ `?branch[]=main&branch[]=testing` returns all builds on `main` and `testing` branches _Example:_ `?branch=*dev*` returns all builds on branches with names containing `dev` | `commit` | Filters the results by the commit (only works for full SHA, not for shortened ones). 
_Example:_ `?commit=long-hash` You can query for multiple commits using Rails array syntax:_Example:_ `?commit[]=sha2&commit[]=sha1` | `created_from` | Filters the results by builds created on or after the given time (in ISO 8601 format) _Example:_ `?created_from=2025-01-08T23:22:05Z` | `created_to` | Filters the results by builds created before the given time (in ISO 8601 format) _Example:_ `?created_to=2025-02-13T23:22:05Z` | `creator` | Filters the results by the user who created the build _Example:_ `?creator=5acb99cf-d349-4189-b361-d1b9f36d70d7` | `finished_from` | Filters the results by builds finished on or after the given time (in ISO 8601 format) _Example:_ `?finished_from=2025-01-11T23:22:05Z` | `include_retried_jobs` | Include all retried job executions in each build's jobs list. Without this parameter, you'll see only the most recently run job for each step. _Example:_ `?include_retried_jobs=true` | `meta_data` | Filters the results by the given meta-data. _Example:_ `?meta_data[some-key]=some-value` | `state` | Filters the results by the given [build state](/docs/pipelines/configure/notify#build-states). The `finished` state is a shortcut to automatically search for builds with `passed`, `failed`, `blocked`, `canceled` states. _Valid states:_ `creating`, `scheduled`, `running`, `passed`, `failing`, `failed`, `blocked`, `canceling`, `canceled`, `skipped`, `not_run`, `finished` _Example:_ `?state=passed` returns all `passed` builds _Example:_ `?state[]=scheduled&state[]=running` returns all `scheduled` and `running` builds Required scope: `read_builds` Success response: `200 OK` ##### List builds for a pipeline Returns a [paginated list](/docs/apis/rest-api#pagination) of a pipeline's builds. Builds are listed in the order they were created (newest first). 
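The filter parameters above accept Rails-style array syntax for multiple values. A Python sketch of assembling such a query string with only the standard library (the slugs and the helper itself are placeholders; `urlencode` percent-encodes the `[]` brackets, which is the standard encoded form):

```python
from urllib.parse import urlencode

def builds_url(org, pipeline, **filters):
    """Build a list-builds URL; list values use Rails array syntax (key[]=a&key[]=b)."""
    base = f"https://api.buildkite.com/v2/organizations/{org}/pipelines/{pipeline}/builds"
    params = []
    for key, value in filters.items():
        if isinstance(value, (list, tuple)):
            params.extend((f"{key}[]", v) for v in value)
        else:
            params.append((key, value))
    return f"{base}?{urlencode(params)}" if params else base

print(builds_url("my-great-org", "my-pipeline", state=["scheduled", "running"], branch="main"))
```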
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds" ``` ```json [ { "id": "f62a1b4d-10f9-4790-bc1c-e2c3a0c80983", "graphql_id": "QnVpbGQtLS1mYmQ2Zjk3OS0yOTRhLTQ3ZjItOTU0Ni1lNTk0M2VlMTAwNzE=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1", "number": 1, "state": "passed", "cancel_reason": "reason for a canceled build", "blocked": false, "message": "Bumping to version 0.2-beta.6", "commit": "abcd0b72a1e580e90712cdd9eb26d3fb41cd09c8", "branch": "main", "env": { }, "source": "webhook", "creator": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-22T12:36:45.309Z" }, "jobs": [ { "id": "b63254c0-3271-4a98-8270-7cfbd6c2f14e", "graphql_id": "Sm9iLS0tMTQ4YWQ0MzgtM2E2My00YWIxLWIzMjItNzIxM2Y3YzJhMWFi", "type": "script", "name": "RSpec", "step_key": "rspec", "group_key": "tests", "step": { "id": "018c0f56-c87c-47e9-95ee-aa47397b4496", "signature": { "value": "eyJhbGciOiJFUzI1NiIsImtpZCI6InlvdSBzbHkgZG9nISB5b3UgY2F1Z2h0IG1lIG1vbm9sb2d1aW5nISJ9..m9LBvNgbzmO5JuZ4Bwoheyn7uqLf3TN1EdFwv_l_nMT2qh0_2EVs30SAEc-Ajjkq18MQk3cgU36AodLPl3_hBg", "algorithm": "EdDSA", "signed_fields": [ "command", "env", "matrix", "plugins", "repository_url" ] } }, "agent_query_rules": ["*"], "state": "passed", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1#b63254c0-3271-4a98-8270-7cfbd6c2f14e", "log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log", "raw_log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log.txt", "command": "bundle exec rspec", 
"soft_failed": false, "exit_status": 0, "artifact_paths": "", "agent": { "id": "0b461f65-e7be-4c80-888a-ef11d81fd971", "url": "https://api.buildkite.com/v2/organizations/my-great-org/agents/0b461f65-e7be-4c80-888a-ef11d81fd971", "name": "my-agent-123" }, "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "runnable_at": "2015-05-09T21:06:59.874Z", "started_at": "2015-05-09T21:07:59.874Z", "finished_at": "2015-05-09T21:08:59.874Z", "retried": false, "retried_in_job_id": null, "retries_count": null, "retry_type": null, "parallel_group_index": null, "parallel_group_total": null, "matrix": null, "cluster_id": null, "cluster_url": null, "cluster_queue_id": null, "cluster_queue_url": null } ], "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "started_at": "2015-05-09T21:05:59.874Z", "finished_at": "2015-05-09T21:05:59.874Z", "meta_data": { }, "pull_request": { }, "rebuilt_from": null, "pipeline": { "id": "849411f9-9e6d-4739-a0d8-e247088e9b52", "graphql_id": "UGlwZWxpbmUtLS1lOTM4ZGQxYy03MDgwLTQ4ZmQtOGQyMC0yNmQ4M2E0ZjNkNDg=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline", "web_url": "https://buildkite.com/my-great-org/my-pipeline", "name": "great-pipeline", "slug": "great-pipeline", "repository": "git@github.com:my-great-org/my-pipeline", "branch_configuration": null, "default_branch": "main", "provider": { "id": "github", "webhook_url": "https://webhook.buildkite.com/deliver/xxx", "settings": { "trigger_mode": "code", "build_pull_requests": true, "pull_request_branch_filter_enabled": false, "skip_pull_request_builds_for_existing_commits": true, "build_pull_request_forks": false, "prefix_pull_request_fork_branch_names": true, "build_tags": false, "publish_commit_status": true, "publish_commit_status_per_step": false, "publish_blocked_as_pending": false, "repository": "my-great-org/my-pipeline" } }, "skip_queued_branch_builds": false, 
"skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "builds_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds", "badge_url": "https://badge.buildkite.com/58b3da999635d0ad2daae5f784e56d264343eb02526f129bfb.svg", "created_at": "2015-05-09T21:05:59.874Z", "scheduled_builds_count": 0, "running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0 } } ] ```

> 📘 Webhook URL
> The response only includes a webhook URL in `pipeline.provider.webhook_url` if the user has edit permissions for the pipeline. Otherwise, the field returns an empty string.

Optional [query string parameters](/docs/api#query-string-parameters):

| `branch` | Filters the results by the given branch or branches. _Example:_ `?branch=main` returns all builds on the `main` branch _Example:_ `?branch[]=main&branch[]=testing` returns all builds on the `main` and `testing` branches _Example:_ `?branch=*dev*` returns all builds on branches with names containing `dev`
| `commit` | Filters the results by the commit (only works for a full SHA, not a shortened one). _Example:_ `?commit=long-hash` You can query for multiple commits using Rails array syntax. _Example:_ `?commit[]=sha2&commit[]=sha1`
| `created_from` | Filters the results by builds created on or after the given time (in ISO 8601 format). _Example:_ `?created_from=2025-01-08T23:22:05Z`
| `created_to` | Filters the results by builds created before the given time (in ISO 8601 format). _Example:_ `?created_to=2025-02-13T23:22:05Z`
| `creator` | Filters the results by the user who created the build. _Example:_ `?creator=5acb99cf-d349-4189-b361-d1b9f36d70d7`
| `exclude_jobs` | Exclude the list of jobs from each build's details. _Example:_ `?exclude_jobs=true`
| `exclude_pipeline` | Exclude the pipeline details from each build's details. _Example:_ `?exclude_pipeline=true`
| `finished_from` | Filters the results by builds finished on or after the given time (in ISO 8601 format). _Example:_ `?finished_from=2025-01-11T23:22:05Z`
| `include_retried_jobs` | Include all retried job executions in each build's jobs list. Without this parameter, you'll see only the most recently run job for each step. _Example:_ `?include_retried_jobs=true`
| `meta_data` | Filters the results by the given meta-data. _Example:_ `?meta_data[some-key]=some-value`
| `state` | Filters the results by the given [build state](/docs/pipelines/configure/notify#build-states). The `finished` state is a shortcut to automatically search for builds with `passed`, `failed`, `blocked`, `canceled` states. _Valid states:_ `creating`, `scheduled`, `running`, `passed`, `failing`, `failed`, `blocked`, `canceling`, `canceled`, `skipped`, `not_run`, `finished` _Example:_ `?state=passed` returns all `passed` builds _Example:_ `?state[]=scheduled&state[]=running` returns all `scheduled` and `running` builds

Required scope: `read_builds`

Success response: `200 OK`

##### Get a build

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}"
```

> 📘
> Note that this URL requires using a _build number_ (for example, `3`), not a build ID (for example, `01908131-7d9f-495e-a17b-80ed31276810`).
```json { "id": "f62a1b4d-10f9-4790-bc1c-e2c3a0c80983", "graphql_id": "QnVpbGQtLS1mYmQ2Zjk3OS0yOTRhLTQ3ZjItOTU0Ni1lNTk0M2VlMTAwNzE=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/2", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/2", "number": 2, "state": "passed", "cancel_reason": null, "blocked": false, "message": "Bumping to version 0.2-beta.6", "commit": "abcd0b72a1e580e90712cdd9eb26d3fb41cd09c8", "branch": "main", "env": { }, "source": "webhook", "creator": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-22T12:36:45.309Z" }, "jobs": [ { "id": "b63254c0-3271-4a98-8270-7cfbd6c2f14e", "graphql_id": "VXNlci0tLThmNzFlOWI1LTczMDEtNDI4ZS1hMjQ1LWUwOWI0YzI0OWRiZg==", "type": "script", "name": "RSpec", "step_key": "rspec", "group_key": "tests", "step": { "id": "018c0f56-c87c-47e9-95ee-aa47397b4496", "signature": { "value": "eyJhbGciOiJFUzI1NiIsImtpZCI6InlvdSBzbHkgZG9nISB5b3UgY2F1Z2h0IG1lIG1vbm9sb2d1aW5nISJ9..m9LBvNgbzmO5JuZ4Bwoheyn7uqLf3TN1EdFwv_l_nMT2qh0_2EVs30SAEc-Ajjkq18MQk3cgU36AodLPl3_hBg", "algorithm": "EdDSA", "signed_fields": [ "command", "env", "matrix", "plugins", "repository_url" ] } }, "agent_query_rules": ["*"], "state": "passed", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/2#b63254c0-3271-4a98-8270-7cfbd6c2f14e", "log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/2/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log", "raw_log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/2/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log.txt", "command": "bundle exec rspec", "soft_failed": false, "exit_status": 0, "artifact_paths": "", "agent": { "id": "0b461f65-e7be-4c80-888a-ef11d81fd971", "graphql_id": 
"QWdlbnQtLS1mOTBhNzliNC01YjJlLTQzNzEtYjYxZS03OTA4ZDAyNmUyN2E=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/agents/my-agent", "web_url": "https://buildkite.com/organizations/my-great-org/agents/0b461f65-e7be-4c80-888a-ef11d81fd971", "name": "my-agent", "connection_state": "connected", "hostname": "localhost", "ip_address": "144.132.19.12", "user_agent": "buildkite-agent/1.0.0 (linux; amd64)", "creator": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-09T21:05:59.874Z" }, "created_at": "2015-05-09T21:05:59.874Z" }, "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "runnable_at": "2015-05-09T21:06:59.874Z", "started_at": "2015-05-09T21:07:59.874Z", "finished_at": "2015-05-09T21:08:59.874Z", "retried": false, "retried_in_job_id": null, "retries_count": 1, "retry_source": { "job_id": "0194b92a-4d74-46bb-a1bf-61c73c5642af", "retry_type": "manual" }, "retry_type": null, "parallel_group_index": null, "parallel_group_total": null, "matrix": null, "cluster_id": null, "cluster_url": null, "cluster_queue_id": null, "cluster_queue_url": null } ], "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "started_at": "2015-05-09T21:05:59.874Z", "finished_at": "2015-05-09T21:08:59.874Z", "meta_data": { }, "pull_request": { }, "rebuilt_from": { "id": "812135b3-eee7-408c-9f63-760538b96bd5", "number": 1, "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1" }, "pipeline": { "id": "849411f9-9e6d-4739-a0d8-e247088e9b52", "graphql_id": "UGlwZWxpbmUtLS1lOTM4ZGQxYy03MDgwLTQ4ZmQtOGQyMC0yNmQ4M2E0ZjNkNDg=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline", "name": "Great Pipeline", "slug": "great-pipeline", "repository": "git@github.com:my-great-org/my-pipeline", 
"provider": { "id": "github", "webhook_url": "https://webhook.buildkite.com/deliver/xxx" }, "skip_queued_branch_builds": false, "skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "builds_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds", "badge_url": "https://badge.buildkite.com/58b3da999635d0ad2daae5f784e56d264343eb02526f129bfb.svg", "created_at": "2013-09-03 13:24:38 UTC", "scheduled_builds_count": 0, "running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0 } } ```

> 📘 Webhook URL
> The response only includes a webhook URL in `pipeline.provider.webhook_url` if the user has edit permissions for the pipeline. Otherwise, the field returns an empty string.

Unlike [build states](/docs/pipelines/configure/notify#build-states) for notifications, when a build is blocked, the `state` of the build does not return the value `blocked`. Instead, the build `state` retains its last value (for example, `passed`) and the `blocked` field value will be `true`.

When a job belongs to a [group step](/docs/pipelines/configure/step-types/group-step), the job object includes a `group_key` field. The value corresponds to the group step's `key` attribute, allowing you to identify which jobs belong to which logical groups in your pipeline.

When a job is a [trigger step](/docs/pipelines/configure/step-types/trigger-step), the job object includes `async` and `triggered_build` fields. `triggered_build` contains the `id`, `number`, `url`, and `web_url` of the build that was triggered, or `null` if the build has not yet been created.
```json { "id": "b63254c0-3271-4a98-8270-7cfbd6c2f14e", "graphql_id": "Sm9iLS0tMTQ4YWQ0MzgtM2E2My00YWIxLWIzMjItNzIxM2Y3YzJhMWFi", "type": "trigger", "name": "Deploy to production", "step_key": "deploy-production", "state": "passed", "async": false, "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1#b63254c0-3271-4a98-8270-7cfbd6c2f14e", "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "runnable_at": "2015-05-09T21:06:59.874Z", "started_at": "2015-05-09T21:07:59.874Z", "finished_at": "2015-05-09T21:08:59.874Z", "triggered_build": { "id": "f62a1b4d-10f9-4790-bc1c-e2c3a0c80983", "number": 15, "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/deploy-pipeline/builds/15", "web_url": "https://buildkite.com/my-great-org/deploy-pipeline/builds/15" } } ``` Optional [query string parameters](/docs/api#query-string-parameters): | `include_retried_jobs` | Include all retried job executions in each build's jobs list. Without this parameter, you'll see only the most recently run job for each step. _Example:_ `?include_retried_jobs=true` | `include_test_engine` | Include all Test Engine-related data for the build in the response. Without this parameter, you'll only see all Buildkite Pipelines-related build data in the response. 
_Example:_ `?include_test_engine=true` Required scope: `read_builds` Success response: `200 OK` ##### Create a build ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds" \ -H "Content-Type: application/json" \ -d '{ "commit": "abcd0b72a1e580e90712cdd9eb26d3fb41cd09c8", "branch": "main", "message": "Testing all the things \:rocket\:", "author": { "name": "Keith Pitt", "email": "me@keithpitt.com" }, "env": { "MY_ENV_VAR": "some_value" }, "meta_data": { "some build data": "value", "other build data": true } }' ``` ```json { "id": "f62a1b4d-10f9-4790-bc1c-e2c3a0c80983", "graphql_id": "QnVpbGQtLS1mYmQ2Zjk3OS0yOTRhLTQ3ZjItOTU0Ni1lNTk0M2VlMTAwNzE=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1", "number": 1, "state": "scheduled", "cancel_reason": "reason for a canceled build", "blocked": false, "message": "Testing all the things \:rocket\:", "commit": "abcd0b72a1e580e90712cdd9eb26d3fb41cd09c8", "branch": "main", "env": { }, "source": "webhook", "creator": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-22T12:36:45.309Z" }, "jobs": [ { "id": "b63254c0-3271-4a98-8270-7cfbd6c2f14e", "type": "script", "name": ":package:", "step_key": "package", "step": { "id": "018c0f56-c87c-47e9-95ee-aa47397b4496", "signature": { "value": "eyJhbGciOiJFUzI1NiIsImtpZCI6InlvdSBzbHkgZG9nISB5b3UgY2F1Z2h0IG1lIG1vbm9sb2d1aW5nISJ9..m9LBvNgbzmO5JuZ4Bwoheyn7uqLf3TN1EdFwv_l_nMT2qh0_2EVs30SAEc-Ajjkq18MQk3cgU36AodLPl3_hBg", "algorithm": "EdDSA", "signed_fields": [ "command", "env", "matrix", "plugins", "repository_url" ] } }, "agent_query_rules": ["*"], "state": "scheduled", "web_url": 
"https://buildkite.com/my-great-org/my-pipeline/builds/1#b63254c0-3271-4a98-8270-7cfbd6c2f14e", "log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log", "raw_log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log.txt", "command": "scripts/build.sh", "soft_failed": false, "exit_status": 0, "artifact_paths": "", "agent": { "id": "0b461f65-e7be-4c80-888a-ef11d81fd971", "graphql_id": "QWdlbnQtLS1mOTBhNzliNC01YjJlLTQzNzEtYjYxZS03OTA4ZDAyNmUyN2E=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/agents/my-agent", "web_url": "https://buildkite.com/organizations/my-great-org/agents/0b461f65-e7be-4c80-888a-ef11d81fd971", "name": "my-agent", "connection_state": "connected", "hostname": "localhost", "ip_address": "144.132.19.12", "user_agent": "buildkite-agent/1.0.0 (linux; amd64)", "creator": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLThmNzFlOWI1LTczMDEtNDI4ZS1hMjQ1LWUwOWI0YzI0OWRiZg==", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-09T21:05:59.874Z" }, "created_at": "2015-05-09T21:05:59.874Z" }, "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "runnable_at": "2015-05-09T21:06:59.874Z", "started_at": "2015-05-09T21:07:59.874Z", "finished_at": "2015-05-09T21:08:59.874Z", "retried": false, "retried_in_job_id": null, "retries_count": null, "retry_source": null, "retry_type": null, "parallel_group_index": null, "parallel_group_total": null, "matrix": null, "cluster_id": null, "cluster_url": null, "cluster_queue_id": null, "cluster_queue_url": null } ], "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "started_at": "2015-05-09T21:05:59.874Z", "finished_at": 
"2015-05-09T21:05:59.874Z", "meta_data": { }, "pull_request": { }, "pipeline": { "id": "849411f9-9e6d-4739-a0d8-e247088e9b52", "graphql_id": "UGlwZWxpbmUtLS1lOTM4ZGQxYy03MDgwLTQ4ZmQtOGQyMC0yNmQ4M2E0ZjNkNDg=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline", "name": "Great Pipeline", "slug": "great-pipeline", "repository": "git@github.com:my-great-org/my-pipeline", "provider": { "id": "github", "webhook_url": "https://webhook.buildkite.com/deliver/xxx", "settings": {} }, "skip_queued_branch_builds": false, "skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "builds_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds", "badge_url": "https://badge.buildkite.com/58b3da999635d0ad2daae5f784e56d264343eb02526f129bfb.svg", "created_at": "2015-05-09T21:05:59.874Z", "scheduled_builds_count": 0, "running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0 } } ```

> 📘 Webhook URL
> The response only includes a webhook URL in `pipeline.provider.webhook_url` if the user has edit permissions for the pipeline. Otherwise, the field returns an empty string.

Required [request body properties](/docs/api#request-body-properties):

| `commit` | Ref, SHA, or tag to be built. _Example:_ `"HEAD"` _Note:_ Before running builds on tags, make sure your agent is [fetching git tags](/docs/pipelines/source-control/github#running-builds-on-git-tags).
| `branch` | Branch the commit belongs to. This allows you to take advantage of your pipeline and step-level branch filtering rules. _Example:_ `"main"`

Optional [request body properties](/docs/api#request-body-properties):

| `author` | A JSON object with a `"name"` and `"email"` key to show who created this build. _Default value: the user making the API request_.
| `clean_checkout` | Force the agent to remove any existing build directory and perform a fresh checkout. _Default value:_ `false`.
| `env` | Environment variables to be made available to the build. _Default value:_ `{}`.
| `ignore_pipeline_branch_filters` | Run the build regardless of the pipeline's branch filtering rules. Step branch filtering rules will still apply. _Default value:_ `false`.
| `message` | Message for the build. _Example:_ `"Testing all the things \:rocket\:"`
| `meta_data` | A JSON object of meta-data to make available to the build. _Default value:_ `{}`.
| `pull_request_base_branch` | For a pull request build, the base branch of the pull request. _Example:_ `"main"`
| `pull_request_id` | For a pull request build, the pull request number. _Example:_ `42`
| `pull_request_labels` | For a pull request build, a JSON array of labels assigned to the pull request. _Example:_ `["bug", "ui"]`
| `pull_request_repository` | For a pull request build, the git repository of the pull request. _Example:_ `"git://github.com/my-org/my-repo.git"`

Required scope: `write_builds`

Success response: `201 Created`

Error responses:

| `422 Unprocessable Entity` | `{ "message": "Validation Failed", "errors": [ ... ] }`
| `422 Unprocessable Entity` | `{ "message": "Reason that the build could not be created" }`

##### Cancel a build

Cancels the build if its state is `scheduled`, `running`, or `failing`.

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/cancel"
```

> 📘
> Note that this URL requires using a _build number_ (for example, `3`), not a build ID (for example, `01908131-7d9f-495e-a17b-80ed31276810`).
```json { "id": "f62a1b4d-10f9-4790-bc1c-e2c3a0c80983", "graphql_id": "QnVpbGQtLS1mYmQ2Zjk3OS0yOTRhLTQ3ZjItOTU0Ni1lNTk0M2VlMTAwNzE=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1", "number": 1, "state": "canceled", "cancel_reason": "reason for a canceled build", "blocked": false, "message": "Bumping to version 0.2-beta.6", "commit": "abcd0b72a1e580e90712cdd9eb26d3fb41cd09c8", "branch": "main", "env": { }, "source": "webhook", "creator": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "QnVpbGQtLS1mYmQ2Zjk3OS0yOTRhLTQ3ZjItOTU0Ni1lNTk0M2VlMTAwNzE=", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-22T12:36:45.309Z" }, "jobs": [ { "id": "b63254c0-3271-4a98-8270-7cfbd6c2f14e", "graphql_id": "Sm9iLS0tMTQ4YWQ0MzgtM2E2My00YWIxLWIzMjItNzIxM2Y3YzJhMWFi", "type": "script", "name": ":package:", "step_key": "package", "step": { "id": "018c0f56-c87c-47e9-95ee-aa47397b4496", "signature": { "value": "eyJhbGciOiJFUzI1NiIsImtpZCI6InlvdSBzbHkgZG9nISB5b3UgY2F1Z2h0IG1lIG1vbm9sb2d1aW5nISJ9..m9LBvNgbzmO5JuZ4Bwoheyn7uqLf3TN1EdFwv_l_nMT2qh0_2EVs30SAEc-Ajjkq18MQk3cgU36AodLPl3_hBg", "algorithm": "EdDSA", "signed_fields": [ "command", "env", "matrix", "plugins", "repository_url" ] } }, "agent_query_rules": ["*"], "state": "scheduled", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1#b63254c0-3271-4a98-8270-7cfbd6c2f14e", "log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log", "raw_log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log.txt", "command": "scripts/build.sh", "soft_failed": false, "exit_status": 0, "artifact_paths": "", "agent": { "id": 
"0b461f65-e7be-4c80-888a-ef11d81fd971", "graphql_id": "QWdlbnQtLS1mOTBhNzliNC01YjJlLTQzNzEtYjYxZS03OTA4ZDAyNmUyN2E=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/agents/my-agent", "web_url": "https://buildkite.com/organizations/my-great-org/agents/0b461f65-e7be-4c80-888a-ef11d81fd971", "name": "my-agent", "connection_state": "connected", "hostname": "localhost", "ip_address": "144.132.19.12", "user_agent": "buildkite-agent/1.0.0 (linux; amd64)", "creator": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLThmNzFlOWI1LTczMDEtNDI4ZS1hMjQ1LWUwOWI0YzI0OWRiZg==", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-09T21:05:59.874Z" }, "created_at": "2015-05-09T21:05:59.874Z" }, "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "runnable_at": "2015-05-09T21:06:59.874Z", "started_at": "2015-05-09T21:07:59.874Z", "finished_at": "2015-05-09T21:08:59.874Z", "retried": false, "retried_in_job_id": null, "retries_count": null, "retry_source": null, "retry_type": null, "parallel_group_index": null, "parallel_group_total": null, "matrix": null, "cluster_id": null, "cluster_url": null, "cluster_queue_id": null, "cluster_queue_url": null } ], "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "started_at": "2015-05-09T21:05:59.874Z", "finished_at": "2015-05-09T21:05:59.874Z", "meta_data": { }, "pull_request": { }, "pipeline": { "id": "849411f9-9e6d-4739-a0d8-e247088e9b52", "graphql_id": "UGlwZWxpbmUtLS1lOTM4ZGQxYy03MDgwLTQ4ZmQtOGQyMC0yNmQ4M2E0ZjNkNDg=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline", "name": "Great Pipeline", "slug": "great-pipeline", "repository": "git@github.com:my-great-org/my-pipeline", "provider": { "id": "github", "webhook_url": "https://webhook.buildkite.com/deliver/xxx" }, "skip_queued_branch_builds": 
false, "skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "builds_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds", "badge_url": "https://badge.buildkite.com/58b3da999635d0ad2daae5f784e56d264343eb02526f129bfb.svg", "created_at": "2013-09-03 13:24:38 UTC", "scheduled_builds_count": 0, "running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0 } } ```

> 📘 Webhook URL
> The response only includes a webhook URL in `pipeline.provider.webhook_url` if the user has edit permissions for the pipeline. Otherwise, the field returns an empty string.

Required scope: `write_builds`

Success response: `200 OK`

Error responses:

| `422 Unprocessable Entity` | `{ "message": "Reason why the build could not be canceled" }`

##### Rebuild a build

Returns the newly created build.

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/rebuild"
```

> 📘
> Note that this URL requires using a _build number_ (for example, `3`), not a build ID (for example, `01908131-7d9f-495e-a17b-80ed31276810`).
```json { "id": "f62a1b4d-10f9-4790-bc1c-e2c3a0c80983", "graphql_id": "QnVpbGQtLS1mYmQ2Zjk3OS0yOTRhLTQ3ZjItOTU0Ni1lNTk0M2VlMTAwNzE=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1", "number": 2, "state": "scheduled", "cancel_reason": "reason for a canceled build", "blocked": false, "message": "Bumping to version 0.2-beta.6", "commit": "abcd0b72a1e580e90712cdd9eb26d3fb41cd09c8", "branch": "main", "env": { }, "source": "api", "creator": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLThmNzFlOWI1LTczMDEtNDI4ZS1hMjQ1LWUwOWI0YzI0OWRiZg==", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-22T12:36:45.309Z" }, "jobs": [ { "id": "b63254c0-3271-4a98-8270-7cfbd6c2f14e", "graphql_id": "Sm9iLS0tMTQ4YWQ0MzgtM2E2My00YWIxLWIzMjItNzIxM2Y3YzJhMWFi", "type": "script", "name": ":package:", "step_key": "package", "step": { "id": "018c0f56-c87c-47e9-95ee-aa47397b4496", "signature": { "value": "eyJhbGciOiJFUzI1NiIsImtpZCI6InlvdSBzbHkgZG9nISB5b3UgY2F1Z2h0IG1lIG1vbm9sb2d1aW5nISJ9..m9LBvNgbzmO5JuZ4Bwoheyn7uqLf3TN1EdFwv_l_nMT2qh0_2EVs30SAEc-Ajjkq18MQk3cgU36AodLPl3_hBg", "algorithm": "EdDSA", "signed_fields": [ "command", "env", "matrix", "plugins", "repository_url" ] } }, "agent_query_rules": ["*"], "state": "scheduled", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1#b63254c0-3271-4a98-8270-7cfbd6c2f14e", "log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log", "raw_log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log.txt", "command": "scripts/build.sh", "soft_failed": false, "exit_status": 0, "artifact_paths": "", "agent": { "id": 
"0b461f65-e7be-4c80-888a-ef11d81fd971", "graphql_id": "QWdlbnQtLS1mOTBhNzliNC01YjJlLTQzNzEtYjYxZS03OTA4ZDAyNmUyN2E=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/agents/my-agent", "web_url": "https://buildkite.com/organizations/my-great-org/agents/0b461f65-e7be-4c80-888a-ef11d81fd971", "name": "my-agent", "connection_state": "connected", "hostname": "localhost", "ip_address": "144.132.19.12", "user_agent": "buildkite-agent/1.0.0 (linux; amd64)", "creator": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLThmNzFlOWI1LTczMDEtNDI4ZS1hMjQ1LWUwOWI0YzI0OWRiZg==", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-09T21:05:59.874Z" }, "created_at": "2015-05-09T21:05:59.874Z" }, "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "runnable_at": "2015-05-09T21:06:59.874Z", "started_at": "2015-05-09T21:07:59.874Z", "finished_at": "2015-05-09T21:08:59.874Z", "retried": false, "retried_in_job_id": null, "retries_count": null, "retry_source": null, "retry_type": null, "parallel_group_index": null, "parallel_group_total": null, "matrix": null, "cluster_id": null, "cluster_url": null, "cluster_queue_id": null, "cluster_queue_url": null } ], "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "started_at": "2015-05-09T21:05:59.874Z", "finished_at": "2015-05-09T21:05:59.874Z", "meta_data": { }, "pull_request": { }, "pipeline": { "id": "849411f9-9e6d-4739-a0d8-e247088e9b52", "graphql_id": "UGlwZWxpbmUtLS1lOTM4ZGQxYy03MDgwLTQ4ZmQtOGQyMC0yNmQ4M2E0ZjNkNDg=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline", "name": "Great Pipeline", "slug": "great-pipeline", "repository": "git@github.com:my-great-org/my-pipeline", "provider": { "id": "github", "webhook_url": "https://webhook.buildkite.com/deliver/xxx" }, "skip_queued_branch_builds": 
false, "skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "builds_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds", "badge_url": "https://badge.buildkite.com/58b3da999635d0ad2daae5f784e56d264343eb02526f129bfb.svg", "created_at": "2013-09-03 13:24:38 UTC", "scheduled_builds_count": 0, "running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0 } } ```

> 📘 Webhook URL
> The response only includes a webhook URL in `pipeline.provider.webhook_url` if the user has edit permissions for the pipeline. Otherwise, the field returns an empty string.

Required scope: `write_builds`

Success response: `200 OK`

##### Retry failed jobs for a build

Queues failed jobs to be retried in a build.

```bash
curl -H "Authorization: Bearer $TOKEN" \
  -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{number}/retry_failed_jobs" \
  -H "Content-Type: application/json" \
  -d '{ "states": "failed,soft_failed" }'
```

> 📘
> Note that this URL requires using a _build number_ (for example, `3`), not a build ID (for example, `01908131-7d9f-495e-a17b-80ed31276810`).
```json { "id": "f62a1b4d-10f9-4790-bc1c-e2c3a0c80983", "graphql_id": "QnVpbGQtLS1mYmQ2Zjk3OS0yOTRhLTQ3ZjItOTU0Ni1lNTk0M2VlMTAwNzE=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1", "number": 2, "state": "scheduled", "cancel_reason": "reason for a canceled build", "blocked": false, "message": "Bumping to version 0.2-beta.6", "commit": "abcd0b72a1e580e90712cdd9eb26d3fb41cd09c8", "branch": "main", "env": { }, "source": "api", "creator": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLThmNzFlOWI1LTczMDEtNDI4ZS1hMjQ1LWUwOWI0YzI0OWRiZg==", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-22T12:36:45.309Z" }, "jobs": [ { "id": "b63254c0-3271-4a98-8270-7cfbd6c2f14e", "graphql_id": "Sm9iLS0tMTQ4YWQ0MzgtM2E2My00YWIxLWIzMjItNzIxM2Y3YzJhMWFi", "type": "script", "name": ":package:", "step_key": "package", "step": { "id": "018c0f56-c87c-47e9-95ee-aa47397b4496", "signature": { "value": "eyJhbGciOiJFUzI1NiIsImtpZCI6InlvdSBzbHkgZG9nISB5b3UgY2F1Z2h0IG1lIG1vbm9sb2d1aW5nISJ9..m9LBvNgbzmO5JuZ4Bwoheyn7uqLf3TN1EdFwv_l_nMT2qh0_2EVs30SAEc-Ajjkq18MQk3cgU36AodLPl3_hBg", "algorithm": "EdDSA", "signed_fields": [ "command", "env", "matrix", "plugins", "repository_url" ] } }, "agent_query_rules": ["*"], "state": "scheduled", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1#b63254c0-3271-4a98-8270-7cfbd6c2f14e", "log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log", "raw_log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log.txt", "command": "scripts/build.sh", "soft_failed": false, "exit_status": 0, "artifact_paths": "", "agent": { "id": 
"0b461f65-e7be-4c80-888a-ef11d81fd971", "graphql_id": "QWdlbnQtLS1mOTBhNzliNC01YjJlLTQzNzEtYjYxZS03OTA4ZDAyNmUyN2E=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/agents/my-agent", "web_url": "https://buildkite.com/organizations/my-great-org/agents/0b461f65-e7be-4c80-888a-ef11d81fd971", "name": "my-agent", "connection_state": "connected", "hostname": "localhost", "ip_address": "144.132.19.12", "user_agent": "buildkite-agent/1.0.0 (linux; amd64)", "creator": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLThmNzFlOWI1LTczMDEtNDI4ZS1hMjQ1LWUwOWI0YzI0OWRiZg==", "name": "Keith Pitt", "email": "keith@buildkite.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-09T21:05:59.874Z" }, "created_at": "2015-05-09T21:05:59.874Z" }, "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "runnable_at": "2015-05-09T21:06:59.874Z", "started_at": "2015-05-09T21:07:59.874Z", "finished_at": "2015-05-09T21:08:59.874Z", "retried": false, "retried_in_job_id": null, "retries_count": null, "retry_source": null, "retry_type": null, "parallel_group_index": null, "parallel_group_total": null, "matrix": null, "cluster_id": null, "cluster_url": null, "cluster_queue_id": null, "cluster_queue_url": null } ], "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "started_at": "2015-05-09T21:05:59.874Z", "finished_at": "2015-05-09T21:05:59.874Z", "meta_data": { }, "pull_request": { }, "pipeline": { "id": "849411f9-9e6d-4739-a0d8-e247088e9b52", "graphql_id": "UGlwZWxpbmUtLS1lOTM4ZGQxYy03MDgwLTQ4ZmQtOGQyMC0yNmQ4M2E0ZjNkNDg=", "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline", "name": "Great Pipeline", "slug": "great-pipeline", "repository": "git@github.com:my-great-org/my-pipeline", "provider": { "id": "github", "webhook_url": "https://webhook.buildkite.com/deliver/xxx" }, "skip_queued_branch_builds": 
false, "skip_queued_branch_builds_filter": null, "cancel_running_branch_builds": false, "cancel_running_branch_builds_filter": null, "builds_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds", "badge_url": "https://badge.buildkite.com/58b3da999635d0ad2daae5f784e56d264343eb02526f129bfb.svg", "created_at": "2013-09-03 13:24:38 UTC", "scheduled_builds_count": 0, "running_builds_count": 0, "scheduled_jobs_count": 0, "running_jobs_count": 0, "waiting_jobs_count": 0 }, "retried_jobs_count": 2 } ``` > 📘 Webhook URL > The response only includes a webhook URL in `pipeline.provider.webhook_url` if the user has edit permissions for the pipeline. Otherwise, this field returns an empty string. This request is asynchronous, meaning that jobs are queued to be retried, but the request does not wait for the jobs to be completed before returning a response. The `retried_jobs_count` field in the response indicates how many jobs were queued to be retried. Optional [request body properties](/docs/api#request-body-properties): | `states` | Controls which failure types are retried. A comma-separated list of one or more of `canceled`, `expired`, `failed`, `soft_failed`, `timed_out`. If this property is omitted (or its value is empty), then all retryable failed jobs are retried. _Example:_ `"failed,soft_failed"` Required scope: `write_builds` Success response: `202 Accepted` Error responses: | `400 Invalid` | `{ "message": "Invalid states: invalid. Valid states are canceled, expired, failed, soft_failed, timed_out." 
}` --- ### Overview URL: https://buildkite.com/docs/apis/rest-api/clusters #### Clusters API The clusters API endpoint lets you create and manage [clusters](#clusters) in your organization, along with the following resources associated with clusters: - [Queues](/docs/apis/rest-api/clusters/queues) - [Agent tokens](/docs/apis/rest-api/clusters/agent-tokens) - [Cluster maintainers](/docs/apis/rest-api/clusters/maintainers) - [Buildkite secrets](/docs/apis/rest-api/clusters/secrets) ##### Clusters A [Buildkite cluster](/docs/pipelines/security/clusters) is an isolated set of agents and pipelines within an organization. ###### Cluster data model | `id` | ID of the cluster | `graphql_id` | [GraphQL ID](/docs/apis/graphql-api#graphql-ids) of the cluster | `default_queue_id` | ID of the cluster's default queue. Agents that connect to the cluster without specifying a queue will accept jobs from this queue. | `name` | Name of the cluster | `description` | Description of the cluster | `emoji` | Emoji for the cluster using the [emoji syntax](/docs/pipelines/emojis) | `color` | Color hex code for the cluster | `maintainers` | The maintainers of the cluster | `url` | Canonical API URL of the cluster | `web_url` | URL of the cluster on Buildkite | `queues_url` | API URL of the cluster's queues | `default_queue_url` | API URL of the cluster's default queue | `created_at` | When the cluster was created | `created_by` | User who created the cluster ###### List clusters Returns a [paginated list](/docs/apis/rest-api#pagination) of an organization's clusters. 
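Paginated endpoints like this one advertise further pages through an RFC 5988 `Link` response header (see the pagination docs linked above). As a minimal sketch, here is a small parser for that header; the header value shown is illustrative, not a real API response:

```python
def parse_link_header(value: str) -> dict:
    """Parse an RFC 5988 Link header into a {rel: url} mapping."""
    links = {}
    for part in value.split(","):
        segments = part.split(";")
        # The URL is wrapped in angle brackets: <https://...>
        url = segments[0].strip().strip("<>")
        for seg in segments[1:]:
            seg = seg.strip()
            if seg.startswith("rel="):
                links[seg[4:].strip('"')] = url
    return links

# Illustrative header value in the shape paginated endpoints return
header = (
    '<https://api.buildkite.com/v2/organizations/acme-inc/clusters?page=2>; rel="next", '
    '<https://api.buildkite.com/v2/organizations/acme-inc/clusters?page=5>; rel="last"'
)
links = parse_link_header(header)
# links["next"] holds the URL to request for the following page
```

Following `links["next"]` until it is absent walks every page of results.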
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters" ``` ```json [ { "id": "42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "default_queue_id": "01885682-55a7-44f5-84f3-0402fb452e66", "name": "Open Source", "description": "A place for safely running our open source builds", "emoji": "\:technologist\:", "color": "#FFE0F1", "maintainers": { "users": [], "teams": [] }, "url": "http://api.buildkite.com/v2/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "web_url": "http://buildkite.com/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "default_queue_url": "http://api.buildkite.com/v2/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "queues_url": "http://buildkite.com/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues", "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-08-29T10:10:03.000Z" } } ] ``` Required scope: `read_clusters` Success response: `200 OK` ###### Get a cluster ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{id}" ``` ```json { "id": "42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "default_queue_id": "01885682-55a7-44f5-84f3-0402fb452e66", "name": "Open Source", "description": "A place for safely running our open source builds", "emoji": "\:technologist\:", "color": "#FFE0F1", "maintainers": { "users": [ { "id": "56c210cb-474c-47a7-b4ef-5761a1cb91c1", "actor": { "id": 
"da206b36-e5ae-4f4a-aca6-07dd478f3a48", "graphql_id": "VXNlci0tLWU1N2ZiYTBmLWFiMTQtNGNjMC1iYjViLTY5NTc3NGZmYmZiZQ==", "name": "John Smith", "email": "john.smith@example.com", "type": "user" } } ], "teams": [ { "id": "77ec8d4c-edb3-430e-baba-488757a418e2", "actor": { "id": "c5e09619-8648-4896-a936-9d0b8b7b3fe9", "graphql_id": "VGVhbS0tLWM1ZTA5NjE5LTg2NDgtNDg5Ni1hOTM2LTlkMGI4YjdiM2ZlOQ==", "name": "Fearless Frontenders", "slug": "fearless-frontenders" } } ] }, "url": "http://api.buildkite.com/v2/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "web_url": "http://buildkite.com/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "default_queue_url": "http://api.buildkite.com/v2/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "queues_url": "http://buildkite.com/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues", "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-08-29T10:10:03.000Z" } } ``` Required scope: `read_clusters` Success response: `200 OK` ###### Create a cluster ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/clusters" \ -H "Content-Type: application/json" \ -d '{ "name": "Open Source", "description": "A place for safely running our open source builds", "emoji": "\:technologist\:", "color": "#FFE0F1" }' ``` ```json { "id": "42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "default_queue_id": null, "name": "Open Source", "description": "A place for safely running our open source builds", "emoji": "\:technologist\:", "color": "#FFE0F1", 
"maintainers": { "users": [], "teams": [] }, "url": "http://api.buildkite.com/v2/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "web_url": "http://buildkite.com/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "default_queue_url": null, "queues_url": "http://buildkite.com/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues", "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-08-29T10:10:03.000Z" } } ``` Required [request body properties](/docs/api#request-body-properties): | `name` | Name for the cluster. _Example:_ `"Open Source"` Optional [request body properties](/docs/api#request-body-properties): | `description` | Description for the cluster. _Example:_ `"A place for safely running our open source builds"` | `emoji` | Emoji for the cluster using the [emoji syntax](/docs/pipelines/emojis). _Example:_ `"\:technologist\:"` | `color` | Color hex code for the cluster. _Example:_ `"#FFE0F1"` | `maintainers` | An array of one or more hashes representing users or teams to grant maintainer permissions for this cluster. 
_Example:_ ` [{ "user": "282a043f-4d4f-4db5-ac9a-58673ae02caf" }, { "team": "0da645b7-9840-428f-bd80-0b92ee274480" }] ` Required scope: `write_clusters` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ###### Update a cluster ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{id}" \ -H "Content-Type: application/json" \ -d '{ "name": "Open Source" }' ``` ```json { "id": "42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "default_queue_id": "01885682-55a7-44f5-84f3-0402fb452e66", "name": "Open Source", "description": "A place for safely running our open source builds", "emoji": "\:technologist\:", "color": "#FFE0F1", "maintainers": { "users": [], "teams": [] }, "url": "http://api.buildkite.com/v2/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "web_url": "http://buildkite.com/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "default_queue_url": "http://api.buildkite.com/v2/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "queues_url": "http://buildkite.com/organizations/acme-inc/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues", "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-08-29T10:10:03.000Z" } } ``` [Request body properties](/docs/api#request-body-properties): | `name` | Name for the cluster. _Example:_ `"Open Source"` | `description` | Description for the cluster. 
_Example:_ `"A place for safely running our open source builds"` | `emoji` | Emoji for the cluster using the [emoji syntax](/docs/pipelines/emojis). _Example:_ `"\:technologist\:"` | `color` | Color hex code for the cluster. _Example:_ `"#FFE0F1"` | `default_queue_id` | ID of the queue to set as the cluster's default queue. Agents that connect to the cluster without specifying a queue will accept jobs from this queue. _Example:_ `"01885682-55a7-44f5-84f3-0402fb452e66"` Required scope: `write_clusters` Success response: `200 OK` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ###### Delete a cluster Delete a cluster along with any queues and tokens that belong to it. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{id}" ``` Required scope: `write_clusters` Success response: `204 No Content` Error responses: | `422 Unprocessable Entity` | `{ "message": "Reason the cluster couldn't be deleted" }` --- ### Queues URL: https://buildkite.com/docs/apis/rest-api/clusters/queues #### Queues [Queues](/docs/agent/queues/managing) define discrete groups of agents within a [Buildkite cluster](/docs/pipelines/security/clusters). Pipelines in that cluster can target queues to run jobs on agents assigned to those queues. 
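Once a queue exists, a pipeline step selects it through the step's `agents` attribute, keyed by the queue's key. A minimal sketch (the label, command, and queue key below are illustrative):

```yml
steps:
  - label: "Run tests"
    command: "scripts/test.sh"
    agents:
      queue: "default"
```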
##### Queue data model | `id` | ID of the queue | `graphql_id` | [GraphQL ID](/docs/apis/graphql-api#graphql-ids) of the queue | `key` | The queue key | `description` | Description of the queue | `url` | Canonical API URL of the queue | `web_url` | URL of the queue on Buildkite | `cluster_url` | API URL of the cluster the queue belongs to | `dispatch_paused` | Indicates whether the queue has paused dispatching jobs to associated agents | `dispatch_paused_by` | User who paused the queue | `dispatch_paused_at` | When the queue was paused | `dispatch_paused_note` | The note left when the queue was paused | `hosted_agents.agent_image_ref` | The custom image URL configured for the queue's hosted agents. Only present on Buildkite hosted queues. This field is a [private preview](/docs/agent/buildkite-hosted/linux/custom-base-images#use-an-agent-image-specify-a-custom-image-for-a-queue) feature. | `created_at` | When the queue was created | `created_by` | User who created the queue ##### List queues Returns a [paginated list](/docs/apis/rest-api#pagination) of a cluster's queues. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/queues" ``` ```json [ { "id": "01885682-55a7-44f5-84f3-0402fb452e66", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "key": "default", "description": "The default queue for this cluster", "url": "http://api.buildkite.com/v2/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "web_url": "http://buildkite.com/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "cluster_url": "http://api.buildkite.com/v2/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "retry_agent_affinity": "prefer-warmest", "dispatch_paused": false, "dispatch_paused_by": null, "dispatch_paused_at": null, "dispatch_paused_note": null, "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "0187dfd4-92cf-4b01-907b-1146c8525dde", "graphql_id": "VXNlci0tLTAxODdkZmQ0LTkyY2YtNGIwMS05MDdiLTExNDZjODUyNWRkZQ==", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2023-05-03T04:17:43.118Z" } } ] ``` Required scope: `read_clusters` Success response: `200 OK` ##### Get a queue ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/queues/{queue.id}" ``` ```json { "id": "01885682-55a7-44f5-84f3-0402fb452e66", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "key": "default", "description": "The default queue for this cluster", "url": "http://api.buildkite.com/v2/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "web_url": "http://buildkite.com/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "cluster_url": 
"http://api.buildkite.com/v2/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "retry_agent_affinity": "prefer-warmest", "dispatch_paused": false, "dispatch_paused_by": null, "dispatch_paused_at": null, "dispatch_paused_note": null, "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "0187dfd4-92cf-4b01-907b-1146c8525dde", "graphql_id": "VXNlci0tLTAxODdkZmQ0LTkyY2YtNGIwMS05MDdiLTExNDZjODUyNWRkZQ==", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2023-05-03T04:17:43.118Z" } } ``` Required scope: `read_clusters` Success response: `200 OK` ##### Create a self-hosted queue ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/queues" \ -H "Content-Type: application/json" \ -d '{ "key": "default", "description": "The default queue for this cluster" }' ``` ```json { "id": "01885682-55a7-44f5-84f3-0402fb452e66", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "key": "default", "description": "The default queue for this cluster", "url": "http://api.buildkite.com/v2/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "web_url": "http://buildkite.com/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "cluster_url": "http://api.buildkite.com/v2/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "retry_agent_affinity": "prefer-warmest", "dispatch_paused": false, "dispatch_paused_by": null, "dispatch_paused_at": null, "dispatch_paused_note": null, "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "0187dfd4-92cf-4b01-907b-1146c8525dde", "graphql_id": "VXNlci0tLTAxODdkZmQ0LTkyY2YtNGIwMS05MDdiLTExNDZjODUyNWRkZQ==", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", 
"created_at": "2023-05-03T04:17:43.118Z" } } ``` Required [request body properties](/docs/api#request-body-properties): | `key` | Key for the queue. _Example:_ `"default"` Optional [request body properties](/docs/api#request-body-properties): | `description` | Description for the queue. _Example:_ `"The default queue for this cluster"` | `retry_agent_affinity` | When a job is retried, this setting controls how agents are selected for these retries. This value must be either `prefer-warmest` (default), which prefers retries on agents that have recently finished jobs, or `prefer-different`, which prefers retries on different agents, if they're available. If this property is omitted, then the value `prefer-warmest` is used. _Example:_ `"prefer-different"` Required scope: `write_clusters` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Create a Buildkite hosted queue ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/queues" \ -H "Content-Type: application/json" \ -d '{ "key": "default", "description": "Queue of hosted Buildkite agents", "hostedAgents": { "instanceShape": "LINUX_AMD64_2X4" } }' ``` ```json { "id": "01885682-55a7-44f5-84f3-0402fb452e66", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "key": "default", "description": "Queue of hosted Buildkite agents", "url": "http://api.buildkite.com/v2/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "web_url": "http://buildkite.com/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "cluster_url": "http://api.buildkite.com/v2/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "dispatch_paused": false, "dispatch_paused_by": null, "dispatch_paused_at": null, 
"dispatch_paused_note": null, "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "0187dfd4-92cf-4b01-907b-1146c8525dde", "graphql_id": "VXNlci0tLTAxODdkZmQ0LTkyY2YtNGIwMS05MDdiLTExNDZjODUyNWRkZQ==", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2023-05-03T04:17:43.118Z" }, "hosted": true, "hosted_agents": { "instance_shape": { "machine_type": "linux", "architecture": "amd64", "cpu": 2, "memory": 4, "name": "LINUX_AMD64_2X4" } } } ``` Required [request body properties](/docs/api#request-body-properties): | `key` | Key for the queue. _Example:_ `"default"` Optional [request body properties](/docs/api#request-body-properties): | `description` | Description for the queue. _Example:_ `"The default queue for this cluster"` | `hostedAgents` | Configures this queue to use [Buildkite hosted agents](/docs/agent/buildkite-hosted), along with its _instance shape_. This makes the queue a [Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue). _Example:_ ` { "instanceShape": "LINUX_AMD64_2X4" } ` `instanceShape` (required when `hostedAgents` is specified): Describes the machine type, architecture, CPU, and RAM to provision for Buildkite hosted agent instances running jobs in this queue. Learn more about the instance shapes available for [Linux](#instance-shape-values-for-linux) and [macOS](#instance-shape-values-for-macos) hosted agents. 
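The `instanceShape` names follow a `MACHINE_ARCH_CPUxMEMORY` convention that mirrors the `instance_shape` object in the response above. As a minimal sketch, a helper of our own (not part of any Buildkite SDK) that decodes a shape name into those parts:

```python
def decode_instance_shape(name: str) -> dict:
    """Decode a shape name like LINUX_AMD64_2X4 into its component fields."""
    parts = name.split("_")
    # The final segment is always <cpu>X<memory>, e.g. "2X4"
    cpu, memory = parts[-1].split("X")
    return {
        "machine_type": parts[0].lower(),
        "architecture": parts[1].lower(),
        "cpu": int(cpu),
        "memory": int(memory),
        "name": name,
    }

shape = decode_instance_shape("LINUX_AMD64_2X4")
# shape["machine_type"] is "linux" and shape["cpu"] is 2, matching the
# instance_shape object returned when creating a hosted queue
```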
Required scope: `write_clusters` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Update a queue ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/queues/{id}" \ -H "Content-Type: application/json" \ -d '{ "description": "The default queue for this cluster" }' ``` ```json { "id": "01885682-55a7-44f5-84f3-0402fb452e66", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "key": "default", "description": "The default queue for this cluster", "url": "http://api.buildkite.com/v2/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "web_url": "http://buildkite.com/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "cluster_url": "http://api.buildkite.com/v2/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "dispatch_paused": false, "dispatch_paused_by": null, "dispatch_paused_at": null, "dispatch_paused_note": null, "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "0187dfd4-92cf-4b01-907b-1146c8525dde", "graphql_id": "VXNlci0tLTAxODdkZmQ0LTkyY2YtNGIwMS05MDdiLTExNDZjODUyNWRkZQ==", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2023-05-03T04:17:43.118Z" } } ``` [Request body properties](/docs/api#request-body-properties): | `description` | Description for the queue. _Example:_ `"The default queue for this cluster"` | `retry_agent_affinity` | Controls how agents are selected for retries. Must be one of `prefer-warmest` or `prefer-different`. 
_Example:_ `"prefer-warmest"` | `hostedAgents` | Updates the _instance shape_ for an existing [Buildkite hosted queue](/docs/agent/queues/managing#create-a-buildkite-hosted-queue), which in turn manages [Buildkite hosted agents](/docs/agent/buildkite-hosted). _Example:_ ` { "instanceShape": "LINUX_AMD64_2X4" } ` `instanceShape` (required when `hostedAgents` is specified): Describes the machine type, architecture, CPU, and RAM to provision for Buildkite hosted agent instances running jobs in this queue. It is only possible to change the _size_ of the current instance shape assigned to this queue. It is not possible to change the current instance shape's machine type (from macOS to Linux, or vice versa), or for a Linux machine, its architecture (from AMD64 to ARM64, or vice versa). Learn more about the instance shapes available for [Linux](#instance-shape-values-for-linux) and [macOS](#instance-shape-values-for-macos) Buildkite hosted agents. `agentImageRef` (optional, [private preview](/docs/agent/buildkite-hosted/linux/custom-base-images#use-an-agent-image-specify-a-custom-image-for-a-queue)): A custom image URL to use for agents in this queue. When set, this overrides the [default agent image](/docs/agent/buildkite-hosted/linux/custom-base-images#use-an-agent-image-set-the-default-image-for-a-queue) selected through the Buildkite interface. The image must be publicly available or pushed to the [internal container registry](/docs/pipelines/hosted-agents/internal-container-registry). Contact [support@buildkite.com](mailto:support@buildkite.com) to enable this feature for your organization. Also be aware that this property must be specified with `instanceShape`, even if you are not changing its value. In such circumstances, specify this property's current value. 
_Example:_ `{ "instanceShape": "LINUX_AMD64_2X4", "agentImageRef": "my-custom-image:latest" }` Required scope: `write_clusters` Success response: `200 OK` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Delete a queue ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/queues/{id}" ``` Required scope: `write_clusters` Success response: `204 No Content` Error responses: | `422 Unprocessable Entity` | `{ "message": "Reason the queue couldn't be deleted" }` ##### Pause a queue [Pause a queue](/docs/agent/queues/managing#pause-and-resume-a-queue) to prevent jobs from being dispatched to agents associated with the queue. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/queues/{id}/pause_dispatch" \ -H "Content-Type: application/json" \ -d '{ "note": "Paused while we investigate a security issue" }' ``` ```json { "id": "01885682-55a7-44f5-84f3-0402fb452e66", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "key": "default", "description": "The default queue for this cluster", "url": "http://api.buildkite.com/v2/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "web_url": "http://buildkite.com/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "cluster_url": "http://api.buildkite.com/v2/organizations/test/clusters/01885682-55a7-44f5-84f3-0402fb452e66", "dispatch_paused": true, "dispatch_paused_by": { "id": "0187dfd4-92cf-4b01-907b-1146c8525dde", "graphql_id": "VXNlci0tLTAxODdkZmQ0LTkyY2YtNGIwMS05MDdiLTExNDZjODUyNWRkZQ==", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2023-05-03T04:17:43.118Z" }, 
"dispatch_paused_at": "2023-05-03T04:19:43.118Z", "dispatch_paused_note": "Paused while we investigate a security issue", "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "0187dfd4-92cf-4b01-907b-1146c8525dde", "graphql_id": "VXNlci0tLTAxODdkZmQ0LTkyY2YtNGIwMS05MDdiLTExNDZjODUyNWRkZQ==", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2023-05-03T04:17:43.118Z" } } ``` Optional [request body properties](/docs/api#request-body-properties): | `note` | Note explaining why the queue is paused. The note will display on the queue page and any affected builds. _Example:_ `"Paused while we investigate a security issue"` Required scope: `write_clusters` Success response: `200 OK` Error responses: | `422 Unprocessable Entity` | `{ "message": "Cluster queue is already paused" }` ##### Resume a paused queue ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/queues/{id}/resume_dispatch" \ -H "Content-Type: application/json" ``` ```json { "id": "01885682-55a7-44f5-84f3-0402fb452e66", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "key": "default", "description": "The default queue for this cluster", "url": "http://api.buildkite.com/v2/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "web_url": "http://buildkite.com/organizations/test/clusters/42f1a7da-812d-4430-93d8-1cc7c33a6bcf/queues/01885682-55a7-44f5-84f3-0402fb452e66", "cluster_url": "http://api.buildkite.com/v2/organizations/test/clusters/01885682-55a7-44f5-84f3-0402fb452e66", "dispatch_paused": false, "dispatch_paused_by": null, "dispatch_paused_at": null, "dispatch_paused_note": null, "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "0187dfd4-92cf-4b01-907b-1146c8525dde", "graphql_id": 
"VXNlci0tLTAxODdkZmQ0LTkyY2YtNGIwMS05MDdiLTExNDZjODUyNWRkZQ==", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2023-05-03T04:17:43.118Z" } } ``` Required scope: `write_clusters` Success response: `200 OK` Error responses: | `422 Unprocessable Entity` | `{ "message": "Cluster queue is not paused" }` ##### Instance shape values for Linux Specify the appropriate **Instance shape** for the `instanceShape` value in your REST API call. | Instance shape | Size | Architecture | vCPU | Memory | Disk space | `LINUX_AMD64_2X4` | Small | AMD64 | 2 | 4 GB | 47 GB | `LINUX_AMD64_4X16` | Medium | AMD64 | 4 | 16 GB | 95 GB | `LINUX_AMD64_8X32` | Large | AMD64 | 8 | 32 GB | 158 GB | `LINUX_AMD64_16X64` | Extra Large | AMD64 | 16 | 64 GB | 284 GB | `LINUX_ARM64_2X4` | Small | ARM64 | 2 | 4 GB | 47 GB | `LINUX_ARM64_4X16` | Medium | ARM64 | 4 | 16 GB | 95 GB | `LINUX_ARM64_8X32` | Large | ARM64 | 8 | 32 GB | 158 GB | `LINUX_ARM64_16X64` | Extra Large | ARM64 | 16 | 64 GB | 284 GB ##### Instance shape values for macOS Specify the appropriate **Instance shape** for the `instanceShape` value in your REST API call. | Instance shape | Size | vCPU | Memory | Disk space | `MACOS_ARM64_M4_6X28` | Medium | 6 | 28 GB | 182 GB | `MACOS_ARM64_M4_12X56` | Large | 12 | 56 GB | 294 GB **Note:** Shapes `MACOS_M2_4X7`, `MACOS_M2_6X14`, `MACOS_M2_12X28`, `MACOS_M4_12X56` were deprecated and removed on July 1, 2025. --- ### Agent tokens URL: https://buildkite.com/docs/apis/rest-api/clusters/agent-tokens #### Agent tokens An agent token is used to [connect agents to a Buildkite cluster](/docs/pipelines/security/clusters/manage#connect-agents-to-a-cluster). ##### Token data model | `id` | The ID of the agent token. | `graphql_id` | The [GraphQL ID](/docs/apis/graphql-api#graphql-ids) of the token. | `description` | The description of the token. 
| `allowed_ip_addresses` | A list of permitted [CIDR-notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) IPv4 addresses that agents must connect from to use this token and connect to your Buildkite cluster. | `url` | The canonical API URL of the token. | `cluster_url` | The API URL of the Buildkite cluster that the token belongs to. | `created_at` | The date and time when the token was created. | `created_by` | The user who created the token. | `expires_at` | The ISO8601 timestamp at which point the token expires and prevents agents configured with this token from re-connecting to their Buildkite cluster. ##### List tokens Returns a [paginated list](/docs/apis/rest-api#pagination) of a cluster's agent tokens. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens" ``` ```json [ { "id": "b6001416-0e1e-41c6-9dbe-3d96766f451a", "graphql_id": "Q2x1c3RlclRva2VuLS0tYjYwMDE0MTYtMGUxZS00MWM2LTlkYmUtM2Q5Njc2NmY0NTFh", "description": "Windows agents", "allowed_ip_addresses": "202.144.0.0/24", "expires_at": "2026-01-01T00:00:00Z", "url": "http://api.buildkite.com/v2/organizations/test/clusters/e4f44564-d3ea-45eb-87c2-6506643b852a/tokens/b6001416-0e1e-41c6-9dbe-3d96766f451a", "cluster_url": "http://api.buildkite.com/v2/organizations/test/clusters/e4f44564-d3ea-45eb-87c2-6506643b852a", "created_at": "2023-05-26T04:21:41.350Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-08-29T10:10:03.000Z" } } ] ``` Required scope: `read_clusters` Success response: `200 OK` ##### Get a token ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens/{token.id}" ``` ```json { "id":
"b6001416-0e1e-41c6-9dbe-3d96766f451a", "graphql_id": "Q2x1c3RlclRva2VuLS0tYjYwMDE0MTYtMGUxZS00MWM2LTlkYmUtM2Q5Njc2NmY0NTFh", "description": "Windows agents", "allowed_ip_addresses": "202.144.0.0/24", "expires_at" : "2026-01-01T00:00:00Z", "url": "http://api.buildkite.com/v2/organizations/test/clusters/e4f44564-d3ea-45eb-87c2-6506643b852a/tokens/b6001416-0e1e-41c6-9dbe-3d96766f451a", "cluster_url": "http://api.buildkite.com/v2/organizations/test/clusters/e4f44564-d3ea-45eb-87c2-6506643b852a", "created_at": "2023-05-26T04:21:41.350Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-08-29T10:10:03.000Z" } } ``` Required scope: `read_clusters` Success response: `200 OK` ##### Create a token > 📘 Token visibility > To ensure the security of tokens, the value is only included in the response for the request to create the token. Subsequent responses do not contain the token value. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens" \ -H "Content-Type: application/json" \ -d '{ "description": "Windows agents", "expires_at": "2025-01-01T00:00:00Z", "allowed_ip_addresses": "202.144.0.0/24" }' ``` ```json { "id": "b6001416-0e1e-41c6-9dbe-3d96766f451a", "graphql_id": "Q2x1c3RlclRva2VuLS0tYjYwMDE0MTYtMGUxZS00MWM2LTlkYmUtM2Q5Njc2NmY0NTFh", "description": "Windows agents", "expires_at": "2025-01-01T00:00:00Z", "allowed_ip_addresses": "202.144.0.0/24", "url": "http://api.buildkite.com/v2/organizations/test/clusters/e4f44564-d3ea-45eb-87c2-6506643b852a/tokens/b6001416-0e1e-41c6-9dbe-3d96766f451a", "cluster_url": "http://api.buildkite.com/v2/organizations/test/clusters/e4f44564-d3ea-45eb-87c2-6506643b852a", "created_at": "2023-05-26T04:21:41.350Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-08-29T10:10:03.000Z" }, "token": "igo6HEj5fxQbgBTDoDzNaZzT" } ``` Required [request body properties](/docs/api#request-body-properties): | `description` | Description for the token. _Example:_ `"Windows agents"` Optional [request body properties](/docs/api#request-body-properties): | `allowed_ip_addresses` | A list of permitted [CIDR-notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) IPv4 addresses that agents must connect from to use this token. _Example:_ `"202.144.0.0/24"` | `expires_at` | The ISO8601 timestamp at which point the token expires and prevents agents configured with this token from re-connecting to their Buildkite cluster.
_Example:_ `2025-01-01T00:00:00Z` Required scope: `write_clusters` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Update a token ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens/{id}" \ -H "Content-Type: application/json" \ -d '{ "description": "Windows agents", "expires_at": "2025-01-01T00:00:00Z", "allowed_ip_addresses": "202.144.0.0/24" }' ``` ```json { "id": "b6001416-0e1e-41c6-9dbe-3d96766f451a", "graphql_id": "Q2x1c3RlclRva2VuLS0tYjYwMDE0MTYtMGUxZS00MWM2LTlkYmUtM2Q5Njc2NmY0NTFh", "description": "Windows agents", "allowed_ip_addresses": "202.144.0.0/24", "expires_at" : "2026-01-01T00:00:00Z", "url": "http://api.buildkite.com/v2/organizations/test/clusters/e4f44564-d3ea-45eb-87c2-6506643b852a/tokens/b6001416-0e1e-41c6-9dbe-3d96766f451a", "cluster_url": "http://api.buildkite.com/v2/organizations/test/clusters/e4f44564-d3ea-45eb-87c2-6506643b852a", "created_at": "2023-05-26T04:21:41.350Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-08-29T10:10:03.000Z" } } ``` [Request body properties](/docs/api#request-body-properties): | `description` | Description for the token. 
_Example:_ `"Windows agents"` Required scope: `write_clusters` Success response: `200 OK` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Revoke a token ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/tokens/{id}" ``` Required scope: `write_clusters` Success response: `204 No Content` Error responses: | `422 Unprocessable Entity` | `{ "message": "Reason the token couldn't be revoked" }` --- ### Cluster maintainers URL: https://buildkite.com/docs/apis/rest-api/clusters/maintainers #### Cluster maintainers Cluster maintainers permissions can be assigned to a list of [Users or teams](/docs/platform/team-management/permissions), or both. This grants assignees the ability to manage the [Buildkite clusters they maintain](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster). ##### Cluster maintainer data model | `id` | The ID of the cluster maintainer assignment. | `actor` | Metadata on the assigned User or Team ##### List cluster maintainers Returns a list of [maintainers](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) on a [cluster](/docs/pipelines/security/clusters). 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/maintainers" ``` ```json [ { "id": "f6cf1097-c9c5-4492-885f-a2d3281a07dd", "actor": { "id": "01973824-0c57-45ae-a440-638fceb3ec06", "graphql_id": "VXNlci0tLTAxOTczODI0LTBjNTctNDVhZS1hNDQwLTYzOGZjZWIzZWMwNg==", "name": "Staff", "email": "staff@example.com", "type": "user" } }, { "id": "282a043f-4d4f-4db5-ac9a-58673ae02caf", "actor": { "id": "0da645b7-9840-428f-bd80-0b92ee274480", "graphql_id": "VGVhbS0tLTBkYTY0NWI3LTk4NDAtNDI4Zi1iZDgwLTBiOTJlZTI3NDQ4MA==", "slug": "Developers", "type": "team" } } ] ``` Required scope: `read_clusters` Success response: `200 OK` ##### Get a cluster maintainer ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/maintainers/{id}" ``` ```json { "id": "282a043f-4d4f-4db5-ac9a-58673ae02caf", "actor": { "id": "0da645b7-9840-428f-bd80-0b92ee274480", "graphql_id": "VGVhbS0tLTBkYTY0NWI3LTk4NDAtNDI4Zi1iZDgwLTBiOTJlZTI3NDQ4MA==", "slug": "Developers", "type": "team" } } ``` Required scope: `read_clusters` Success response: `200 OK` ##### Create a cluster maintainer Assigns [cluster maintainer](/docs/pipelines/security/clusters/manage#manage-maintainers-on-a-cluster) permissions to a [user or team](/docs/platform/team-management/permissions). 
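The request body keys the target by its type, `user` or `team`, with the target's UUID as the value. A sketch of assembling that body (hypothetical helper):

```python
import json


def maintainer_body(target_type: str, target_uuid: str) -> str:
    """Build the JSON body for assigning cluster maintainer permissions."""
    if target_type not in ("user", "team"):
        raise ValueError('target_type must be "user" or "team"')
    return json.dumps({target_type: target_uuid})
```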
```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/maintainers" \ -H "Content-Type: application/json" \ -d '{ "team": "0da645b7-9840-428f-bd80-0b92ee274480" }' ``` ```json { "id": "282a043f-4d4f-4db5-ac9a-58673ae02caf", "actor": { "id": "0da645b7-9840-428f-bd80-0b92ee274480", "graphql_id": "VGVhbS0tLTBkYTY0NWI3LTk4NDAtNDI4Zi1iZDgwLTBiOTJlZTI3NDQ4MA==", "slug": "Developers", "type": "team" } } ``` ###### Cluster maintainer permission target Cluster maintainer permissions can be targeted to either a [user or team](/docs/platform/team-management/permissions) by specifying either a `user` or `team` field as the target in the request body, along with the target's UUID for its value. | Target | Value | Example request body | `user` | UUID of the user | `{ "user": "282a043f-4d4f-4db5-ac9a-58673ae02caf" }` | `team` | UUID of the team | `{ "team": "0da645b7-9840-428f-bd80-0b92ee274480" }` Required scope: `write_clusters` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Remove a cluster maintainer Removes cluster maintainer permissions from a [user or team](/docs/platform/team-management/permissions). ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/maintainers/{id}" ``` Required scope: `write_clusters` Success response: `204 No Content` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason cluster maintainer permission could not be deleted" }` --- ### Buildkite secrets URL: https://buildkite.com/docs/apis/rest-api/clusters/secrets #### Secrets [Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets) is a secrets management service backed by an encrypted key-value store.
Secrets are scoped within a [Buildkite cluster](/docs/pipelines/security/clusters) and can be accessed by agents within that cluster using the [`buildkite-agent secret get` command](/docs/agent/cli/reference/secret) or by defining `secrets` within a pipeline YAML configuration. Access to secrets is controlled through [access policies](/docs/pipelines/security/secrets/buildkite-secrets/access-policies). ##### Secret data model | `id` | ID of the secret | `graphql_id` | GraphQL ID of the secret | `key` | A unique identifier for the secret | `value` | The encrypted secret value. This field is never returned by the API | `description` | Description of the secret | `policy` | YAML policy defining access rules for the secret | `url` | Canonical API URL of the secret | `cluster_url` | API URL of the cluster this secret belongs to | `created_at` | When the secret was created | `created_by` | User who created the secret | `updated_at` | When the secret was last updated | `updated_by` | User who last updated the secret | `last_read_at` | When the secret was last accessed by a build | `organization` | Organization this secret belongs to ##### List secrets Returns a [paginated list](/docs/apis/rest-api#pagination) of a cluster's secrets. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/secrets" ``` ```json [ { "id": "9bf7650d-52ba-40e6-a18e-7a34a109f8bc", "key": "MY_SECRET", "description": "My secret description", "policy": "- pipeline_slug: my-pipeline\n build_branch: main", "created_at": "2025-10-01T06:51:21.067Z", "created_by": { "id": "01987d6e-44a6-415c-85d1-c247c938e8d5", "name": "Staff", "email": "test+staff@example.com" }, "updated_at": "2025-10-01T06:51:21.173Z", "updated_by": null, "last_read_at": null, "url": "http://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/secrets/9bf7650d-52ba-40e6-a18e-7a34a109f8bc", "cluster_url": "http://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}", "organization": { "id": "0198e45b-c0d5-4a0b-8e37-e140af750d2d", "slug": "my-org", "url": "http://api.buildkite.com/v2/organizations/my-org", "web_url": "http://buildkite.com/my-org" } } ] ``` Required scope: `read_secret_details` Success response: `200 OK` ##### Get a secret ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/secrets/{id}" ``` ```json { "id": "9bf7650d-52ba-40e6-a18e-7a34a109f8bc", "key": "MY_SECRET", "description": "My secret description", "policy": "- pipeline_slug: my-pipeline\n build_branch: main", "created_at": "2025-10-01T06:51:21.067Z", "created_by": { "id": "01987d6e-44a6-415c-85d1-c247c938e8d5", "name": "Staff", "email": "test+staff@example.com" }, "updated_at": "2025-10-01T06:51:21.173Z", "updated_by": null, "last_read_at": null, "url": "http://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/secrets/9bf7650d-52ba-40e6-a18e-7a34a109f8bc", "cluster_url": "http://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}", "organization": { "id": "0198e45b-c0d5-4a0b-8e37-e140af750d2d", "slug": "my-org", "url": 
"http://api.buildkite.com/v2/organizations/my-org", "web_url": "http://buildkite.com/my-org" } } ``` Required scope: `read_secret_details` Success response: `200 OK` ##### Create a secret ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/secrets" \ -H "Content-Type: application/json" \ -d '{ "key": "MY_SECRET", "value": "secret-value", "description": "My secret description", "policy": "- pipeline_slug: my-pipeline\n build_branch: main" }' ``` ```json { "id": "30f93dd5-bc23-4a14-8ad3-fd1920ea8eb5", "key": "MY_SECRET", "description": "My secret description", "policy": "- pipeline_slug: my-pipeline\n build_branch: main", "created_at": "2025-10-01T07:43:38.648Z", "created_by": { "id": "01987d6e-44a6-415c-85d1-c247c938e8d5", "name": "Staff", "email": "test+staff@example.com" }, "updated_at": "2025-10-01T07:43:38.708Z", "updated_by": null, "last_read_at": null, "url": "http://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/secrets/30f93dd5-bc23-4a14-8ad3-fd1920ea8eb5", "cluster_url": "http://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}", "organization": { "id": "0198e45b-c0d5-4a0b-8e37-e140af750d2d", "slug": "my-org", "url": "http://api.buildkite.com/v2/organizations/my-org", "web_url": "http://buildkite.com/my-org" } } ``` Required [request body properties](/docs/api#request-body-properties): | `key` | A unique identifier for the secret. Must start with a letter and only contain letters, numbers, and underscores. Cannot start with `buildkite` or `bk` (case insensitive). Maximum length is 255 characters. Must be unique within the cluster _Example:_ `"MY_SECRET"` Optional [request body properties](/docs/api#request-body-properties): | `value` | The secret value to encrypt and store. Must be less than 8 kilobytes. Cannot be blank. 
_Example:_ `"secret-value"` | `description` | A description of the secret _Example:_ `"My secret description"` | `policy` | YAML policy defining access rules. See [Access policies for Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets/access-policies) for details on policy structure and available claims _Example:_ `"- pipeline_slug: my-pipeline\n build_branch: main"` Required scope: `write_secrets` Success response: `201 Created` ##### Update a secret's description and access policy Updates a secret's description and access policy. To update its value instead, see [Update a secret's value](#update-a-secrets-value). ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/secrets/{id}" \ -H "Content-Type: application/json" \ -d '{ "description": "Updated description", "policy": "- pipeline_slug: my-pipeline\n build_branch: production" }' ``` ```json { "id": "30f93dd5-bc23-4a14-8ad3-fd1920ea8eb5", "key": "MY_SECRET", "description": "Updated description", "policy": "- pipeline_slug: my-pipeline\n build_branch: production", "created_at": "2025-10-01T07:43:38.648Z", "created_by": { "id": "01987d6e-44a6-415c-85d1-c247c938e8d5", "name": "Staff", "email": "test+staff@example.com" }, "updated_at": "2025-10-01T07:43:46.949Z", "updated_by": { "id": "01987d6e-44a6-415c-85d1-c247c938e8d5", "name": "Staff", "email": "test+staff@example.com" }, "last_read_at": null, "url": "http://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/secrets/30f93dd5-bc23-4a14-8ad3-fd1920ea8eb5", "cluster_url": "http://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}", "organization": { "id": "0198e45b-c0d5-4a0b-8e37-e140af750d2d", "slug": "my-org", "url": "http://api.buildkite.com/v2/organizations/my-org", "web_url": "http://buildkite.com/my-org" } } ``` Optional [request body properties](/docs/api#request-body-properties): | `description` | A description of the secret 
_Example:_ `"Updated description"` | `policy` | YAML policy defining access rules. See [Access policies for Buildkite secrets](/docs/pipelines/security/secrets/buildkite-secrets/access-policies) for details on policy structure and available claims _Example:_ `"- pipeline_slug: my-pipeline\n build_branch: production"` Unpermitted [request body properties](/docs/api#request-body-properties): | `key` | Attempting to update the `key` parameter returns an error: `"The key parameter cannot be updated."` | `value` | Attempting to update the `value` parameter returns an error: `"The value parameter cannot be updated on this endpoint."` Required scope: `write_secrets` Success response: `200 OK` ##### Update a secret's value Updates a secret's encrypted value only. To update the secret's other details, see [Update a secret's description and access policy](#update-a-secrets-description-and-access-policy). ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/secrets/{id}/value" \ -H "Content-Type: application/json" \ -d '{"value": "new-secret-value"}' ``` ```json { "id": "30f93dd5-bc23-4a14-8ad3-fd1920ea8eb5", "key": "MY_SECRET", "description": "Updated description", "policy": "- pipeline_slug: my-pipeline\n build_branch: production", "created_at": "2025-10-01T07:43:38.648Z", "created_by": { "id": "01987d6e-44a6-415c-85d1-c247c938e8d5", "name": "Staff", "email": "test+staff@example.com" }, "updated_at": "2025-10-01T07:44:09.081Z", "updated_by": { "id": "01987d6e-44a6-415c-85d1-c247c938e8d5", "name": "Staff", "email": "test+staff@example.com" }, "last_read_at": null, "url": "http://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/secrets/30f93dd5-bc23-4a14-8ad3-fd1920ea8eb5", "cluster_url": "http://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}", "organization": { "id": "0198e45b-c0d5-4a0b-8e37-e140af750d2d", "slug": "my-org", "url": 
"http://api.buildkite.com/v2/organizations/my-org", "web_url": "http://buildkite.com/my-org" } } ``` Required [request body properties](/docs/api#request-body-properties): | `value` | The new secret value to encrypt and store. Must be less than 8 kilobytes. Cannot be blank. _Example:_ `"new-secret-value"` Required scope: `write_secrets` Success response: `200 OK` ##### Delete a secret ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/clusters/{cluster.id}/secrets/{id}" ``` Required scope: `write_secrets` Success response: `204 No Content` --- ### Jobs URL: https://buildkite.com/docs/apis/rest-api/jobs #### Jobs API A job is the execution of a command step during a build. Jobs run the commands, scripts, or plugins defined in the step. A job can be in various states during its lifecycle, such as `pending`, `scheduled`, `running`, `finished`, `failed`, `canceled`, and others. These states represent the execution state of the job as it progresses through the build system. ##### Retry a job Retries a `failed` OR `timed_out` OR a job whose step has the [manual retry after passing attribute set to true](/docs/pipelines/configure/retry#retry-attributes-manual-retry-attributes) (that is, `permit_on_passed: true`). You can only retry each `job.id` once. To retry a "second time" use the new `job.id` returned in the first retry query. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/retry" ``` ```json { "id": "b63254c0-3271-4a98-8270-7cfbd6c2f14e", "graphql_id": "Sm9iLS0tMTQ4YWQ0MzgtM2E2My00YWIxLWIzMjItNzIxM2Y3YzJhMWFi", "type": "script", "name": ":package:", "step_key": "package", "agent_query_rules": ["*"], "state": "scheduled", "build_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1#b63254c0-3271-4a98-8270-7cfbd6c2f14e", "log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log", "raw_log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log.txt", "artifacts_url": "", "command": "scripts/build.sh", "soft_failed": false, "exit_status": 0, "artifact_paths": "", "agent": null, "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "runnable_at": null, "started_at": null, "finished_at": null, "retried": false, "retried_in_job_id": null, "retries_count": 1, "retry_type": null, "parallel_group_index": null, "parallel_group_total": null, "priority": { "number": 0 } } ``` Required scope: `write_builds` Success response: `200 OK` Error responses: | `400 Bad Request` | `{ "message": "Only failed, timed out or canceled jobs can be retried" }` ##### Reprioritize a job Reprioritizes a job by changing its [priority value](/docs/pipelines/configure/workflows/job-priority). This affects the order in which jobs are picked up by agents. 
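Agents pick up higher-priority jobs first. Purely to illustrate that ordering rule (Buildkite's actual dispatch considers more than priority), a sketch that sorts pending jobs by descending priority, breaking ties by scheduled time:

```python
def dispatch_order(jobs):
    """Sort jobs as a highest-priority-first dispatcher would pick them.

    Each job is a dict with an integer `priority` and a sortable
    `scheduled_at`; equal priorities fall back to earliest-scheduled.
    """
    return sorted(jobs, key=lambda j: (-j["priority"], j["scheduled_at"]))
```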
```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/reprioritize" \ -H "Content-Type: application/json" \ -d '{"priority": 5}' ``` ```json { "id": "b63254c0-3271-4a98-8270-7cfbd6c2f14e", "graphql_id": "Sm9iLS0tMTQ4YWQ0MzgtM2E2My00YWIxLWIzMjItNzIxM2Y3YzJhMWFi", "type": "script", "name": ":package:", "step_key": "package", "agent_query_rules": ["*"], "state": "scheduled", "build_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1", "web_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1#b63254c0-3271-4a98-8270-7cfbd6c2f14e", "log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log", "raw_log_url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log.txt", "artifacts_url": "", "command": "scripts/build.sh", "soft_failed": false, "exit_status": 0, "artifact_paths": "", "agent": null, "created_at": "2015-05-09T21:05:59.874Z", "scheduled_at": "2015-05-09T21:05:59.874Z", "runnable_at": null, "started_at": null, "finished_at": null, "retried": false, "retried_in_job_id": null, "retries_count": 0, "retry_type": null, "parallel_group_index": null, "parallel_group_total": null, "priority": { "number": 5 } } ``` Required [request body properties](/docs/api#request-body-properties): | `priority` | An integer value representing the job's priority. Higher values indicate higher priority. _Example: 5_ Required scope: `write_builds` Success response: `200 OK` Error responses: | `400 Bad Request` | `{ "message": "Priority must be an integer" }` ##### Unblock a job Unblocks a build's "Block pipeline" job. The job's `unblockable` property indicates whether it is able to be unblocked, and the `unblock_url` property points to this endpoint. 
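Checking the job's `unblockable` property before calling the endpoint avoids a `400` response. A sketch of building the unblock payload with that guard (hypothetical helper):

```python
import json


def unblock_body(job: dict, fields: dict) -> str:
    """Build the unblock request body, refusing jobs that cannot be unblocked."""
    if not job.get("unblockable"):
        raise ValueError("job is not unblockable")
    return json.dumps({"fields": fields})
```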
```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/unblock" \ -H "Content-Type: application/json" \ -d '{ "fields": { "name": "Liam Neeson", "email": "liam@evilbatmanvillans.com" } }' ``` ```json { "id": "ded35de2-7de0-4da8-8daa-b4ce0b7f1064", "graphql_id": "Sm9iLS0tZGM5YTg5MmQtM2I5Ny00MzgyLWEzYzItNWJhZmU5M2RlZWI1", "type": "manual", "label": "Deploy", "state": "unblocked", "web_url": null, "unblocked_by": { "id": "cfbb422f-2e4a-41b5-86f0-59e813b3d6e2", "graphql_id": "VXNlci0tLTBmYTQzYjY2LWI5N2YtNDc0Yi04Y2YxLWIxMzQ5NWIxYjRjMQ==", "name": "Liam Neeson", "email": "liam@evilbatmanvillans.com", "avatar_url": "https://www.gravatar.com/avatar/e14f55d3f939977cecbf51b64ff6f861", "created_at": "2015-05-09T21:05:59.874Z" }, "unblocked_at": "2015-05-09T21:06:10.264Z", "unblockable": false, "unblock_url": "https://buildkite.com/my-great-org/my-pipeline/builds/1#ded35de2-7de0-4da8-8daa-b4ce0b7f1064" } ``` Optional [request body properties](/docs/api#request-body-properties): | `unblocker` | The user id of the person activating the job. _Default value: the user making the API request_. | `fields` | The values for the [block step's fields](/docs/pipelines/configure/step-types/block-step#block-step-attributes). 
_Example:_ `{"release-name": "Flying Dolpin"}` Required scope: `write_builds` Success response: `200 OK` Error responses: | `400 Bad Request` | `{ "message": "This job type cannot be unblocked" }` | `422 Unprocessable Entity` | `{ "message": "Unblocker is not a valid user id for this organization"}` ##### Get a job's log output ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/log" ``` ```json { "url": "https://api.buildkite.com/v2/organizations/my-great-org/pipelines/my-pipeline/builds/1/jobs/b63254c0-3271-4a98-8270-7cfbd6c2f14e/log", "content": "This is the job's log output", "size": 28, "header_times": [1563337899810051000,1563337899811015000,1563337905336878000,1563337906589603000,156333791038291900] } ``` Required scope: `read_build_logs` Success response: `200 OK` Alternative formats (using `Accept` header or file extension): | `text/plain` | `.txt` | The job's raw log content | `text/html` | `.html` | The job's log content as rendered by [Terminal](http://buildkite.github.io/terminal-to-html/) ##### Delete a job's log output ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/log" ``` Required scope: `write_build_logs` Success response: `204 No Content` ##### Get a job's environment variables ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/builds/{build.number}/jobs/{job.id}/env" ``` ```json { "env": { "CI": "true", "BUILDKITE": "true", "BUILDKITE_TAG": "", "BUILDKITE_REPO": "git@github.com:my-great-org/my-repo.git", "BUILDKITE_BRANCH": "main", "BUILDKITE_COMMIT": "a65572555600c07c7ee79a2bd909220e1ca5485b", "BUILDKITE_JOB_ID": "bde076a8-bc2c-4fda-9652-10220a56d638", "BUILDKITE_COMMAND": "buildkite-agent pipeline upload", 
"BUILDKITE_MESSAGE": "\:llama\:", "BUILDKITE_BUILD_ID": "c4e312cb-e734-4f0a-a5bd-1cac2535c57e", "BUILDKITE_BUILD_URL": "https://buildkite.com/my-great-org/my-pipeline/builds/15", "BUILDKITE_AGENT_NAME": "ci-1", "BUILDKITE_COMMAND": "buildkite-agent pipeline upload", "BUILDKITE_BUILD_NUMBER": "15", "BUILDKITE_ORGANIZATION_SLUG": "my-great-org", "BUILDKITE_PIPELINE_SLUG": "my-pipeline", "BUILDKITE_PULL_REQUEST": "false", "BUILDKITE_BUILD_CREATOR": "Keith Pitt", "BUILDKITE_REPO_SSH_HOST": "github.com", "BUILDKITE_ARTIFACT_PATHS": "", "BUILDKITE_PIPELINE_PROVIDER": "github", "BUILDKITE_BUILD_CREATOR_EMAIL": "keith@buildkite.com", "BUILDKITE_AGENT_META_DATA_LOCAL": "true" } } ``` Required scope: `read_job_env` Success response: `200 OK` Alternative formats (using `Accept` header or file extension): | `text/plain` | `.txt` | The job's environment in a `KEY=VALUE` format suitable for parsing by tools such as [dotenv](https://github.com/bkeepers/dotenv) --- ### Pipeline templates URL: https://buildkite.com/docs/apis/rest-api/pipeline-templates #### Pipeline templates API > 📘 Enterprise plan feature > [Pipeline templates](/docs/pipelines/governance/templates) are only available on an [Enterprise](https://buildkite.com/pricing) plan. The pipeline templates API endpoint allows admins to create and manage pipeline templates for an organization. Non-admins can only read or assign pipeline templates marked as `available` by organization admins. 
##### Pipeline template data model | `uuid` | UUID of the pipeline template | `graphql_id` | [GraphQL ID of the pipeline template](/docs/apis/graphql-api#graphql-ids) | `name` | Name of the pipeline template | `description` | Description of the pipeline template | `configuration` | YAML step configuration for the pipeline template | `available` | When set to `true`, non-admins can assign the pipeline template to pipelines _Default:_ `false` | `url` | Canonical API URL of the pipeline template | `web_url` | URL of the pipeline template on Buildkite | `created_at` | When the pipeline template was created | `created_by` | [User](/docs/apis/rest-api/user) who created the pipeline template | `updated_at` | When the pipeline template was last updated | `updated_by` | [User](/docs/apis/rest-api/user) who last updated the pipeline template ##### List pipeline templates Returns a list of an organization's pipeline templates. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipeline-templates" ``` ```json [ { "uuid": "018a86cc-db73-7d15-8c68-5023cf8d64c3", "graphql_id": "UGlwZWxpbmVUZW1wbGF0ZS0tLTAxOGE4NmNjLWRiNzMtN2QxNS04YzY4LTUwMjNjZjhkNjRjMw==", "name": "Build template", "description": "Shared build steps configuration", "configuration": "steps:\n - label: \":hammer: Build\"\n command: \"scripts/build.sh\"", "available": false, "url": "http://api.buildkite.com/v2/organizations/acme-inc/pipeline-templates/018a86cc-db73-7d15-8c68-5023cf8d64c3", "web_url": "http://www.buildkite.com/organizations/acme-inc/pipeline-templates/018a86cc-db73-7d15-8c68-5023cf8d64c3", "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-05-03T04:17:55.867Z" }, "updated_at":
"2023-06-12T04:17:55.867Z", "updated_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-05-03T04:17:55.867Z" } } ] ``` Required scope: `read_pipeline_templates` Success response: `200 OK` ##### Get a pipeline template ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipeline-templates/{uuid}" ``` ```json { "uuid": "018a86cc-db73-7d15-8c68-5023cf8d64c3", "graphql_id": "UGlwZWxpbmVUZW1wbGF0ZS0tLTAxOGE4NmNjLWRiNzMtN2QxNS04YzY4LTUwMjNjZjhkNjRjMw==", "name": "Build template", "description": "Shared build steps configuration", "configuration": "steps:\n - label: \":hammer: Build\"\n command: \"scripts/build.sh\"", "available": false, "url": "http:///api.buildkite.com/v2/organizations/acme-inc/pipeline-templates/018a86cc-db73-7d15-8c68-5023cf8d64c3", "web_url": "http://www.buildkite.com/organizations/acme-inc/pipeline-templates/018a86cc-db73-7d15-8c68-5023cf8d64c3", "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2023-08-29T10:10:03.000Z" }, "updated_at": "2023-06-12T04:17:55.867Z", "updated_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-05-03T04:17:55.867Z" } } ``` Required scope: `read_pipeline_templates` Success response: `200 OK` ##### Create a pipeline template ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST 
"https://api.buildkite.com/v2/organizations/{org.slug}/pipeline-templates" \ -H "Content-Type: application/json" \ -d '{ "name": "Build template", "description": "Shared build steps configuration", "configuration": "steps:\n - label: \":hammer: Build\"\n command: \"scripts/build.sh\"", "available": true }' ``` ```json { "uuid": "018a86cc-db73-7d15-8c68-5023cf8d64c3", "graphql_id": "UGlwZWxpbmVUZW1wbGF0ZS0tLTAxOGE4NmNjLWRiNzMtN2QxNS04YzY4LTUwMjNjZjhkNjRjMw==", "name": "Build template", "description": "Shared build steps configuration", "configuration": "steps:\n - label: \":hammer: Build\"\n command: \"scripts/build.sh\"", "available": true, "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipeline-templates/018a86cc-db73-7d15-8c68-5023cf8d64c3", "web_url": "https://www.buildkite.com/organizations/acme-inc/pipeline-templates/018a86cc-db73-7d15-8c68-5023cf8d64c3", "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2023-08-29T10:10:03.000Z" }, "updated_at": "2023-06-12T04:17:55.867Z", "updated_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-05-03T04:17:55.867Z" } } ``` Required [request body properties](/docs/api#request-body-properties): | `name` | Name for the pipeline template. _Example:_ `"Build template"` | `configuration` | YAML step configuration for the pipeline template. _Example:_ `"steps:\n - command: \"scripts/build.sh\""` Optional [request body properties](/docs/api#request-body-properties): | `description` | Description for the pipeline template.
_Example:_ `"Shared build steps configuration"` | `available` | When set to `true`, non-admins can assign the pipeline template to pipelines. _Example:_ `false` Required scope: `write_pipeline_templates` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Update a pipeline template ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PATCH "https://api.buildkite.com/v2/organizations/{org.slug}/pipeline-templates/{uuid}" \ -H "Content-Type: application/json" \ -d '{ "available": true }' ``` ```json { "uuid": "018a86cc-db73-7d15-8c68-5023cf8d64c3", "graphql_id": "UGlwZWxpbmVUZW1wbGF0ZS0tLTAxOGE4NmNjLWRiNzMtN2QxNS04YzY4LTUwMjNjZjhkNjRjMw==", "name": "Build template", "description": "Shared build steps configuration", "configuration": "steps:\n - label: \":hammer: Build\"\n command: \"scripts/build.sh\"", "available": true, "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipeline-templates/018a86cc-db73-7d15-8c68-5023cf8d64c3", "web_url": "https://www.buildkite.com/organizations/acme-inc/pipeline-templates/018a86cc-db73-7d15-8c68-5023cf8d64c3", "created_at": "2023-05-03T04:17:55.867Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2023-08-29T10:10:03.000Z" }, "updated_at": "2023-06-12T04:17:55.867Z", "updated_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-05-03T04:17:55.867Z" } } ``` Optional [request body properties](/docs/api#request-body-properties): | `name` | Name for the pipeline template.
_Example:_ `"Build template"` | `description` | Description for the pipeline template. _Example:_ `"Shared build steps configuration"` | `configuration` | YAML step configuration for the pipeline template. _Example:_ `"steps:\n - command: \"scripts/build.sh\""` | `available` | When set to `true`, non-admins can assign the pipeline template to pipelines. _Example:_ `false` Required scope: `write_pipeline_templates` Success response: `200 OK` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Delete a pipeline template >📘 > A pipeline template can only be deleted when it is not assigned to any pipelines. Ensure you remove the pipeline template from all pipelines before trying to delete it. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/pipeline-templates/{uuid}" ``` Required scope: `write_pipeline_templates` Success response: `204 No Content` Error responses: | `422 Unprocessable Entity` | `{ "message": "Reason the pipeline template couldn't be deleted" }` --- ### Rules URL: https://buildkite.com/docs/apis/rest-api/rules #### Rules API The rules API endpoint lets you create and manage rules in your organization. ##### Rules [_Rules_](/docs/pipelines/security/clusters/rules) is a Buildkite feature that can do the following: - Grant access between Buildkite resources that would normally be restricted by [cluster](/docs/pipelines/security/clusters), [visibility](/docs/pipelines/configure/public-pipelines), or [permissions](/docs/platform/team-management/permissions). - Allow an action between a source resource and a target resource across your Buildkite organization. For example, allowing one pipeline's builds to trigger another pipeline's builds. ###### List rules Returns a [paginated list](/docs/apis/rest-api#pagination) of an organization's rules.
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/rules" ``` ```json [ { "uuid": "42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "organization_uuid": "f02d6a6f-7a0e-481d-9d6d-89b427aec48d", "url": "http://api.buildkite.com/v2/organizations/acme-inc/rules/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "type": "pipeline.trigger_build.pipeline", "source_type": "pipeline", "source_uuid": "16f3b56f-4934-4546-923c-287859851332", "target_type": "pipeline", "target_uuid": "d07d5d84-d1bd-479c-902c-ce8a01ce5aac", "effect": "allow", "action": "trigger_build", "created_at": "2024-08-26T03:22:45.555Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-08-29T10:10:03.000Z" } } ] ``` Required scope: `read_rules` Success response: `200 OK` ###### Get a rule ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/rules/{uuid}" ``` ```json { "uuid": "42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "organization_uuid": "f02d6a6f-7a0e-481d-9d6d-89b427aec48d", "url": "http://api.buildkite.com/v2/organizations/acme-inc/rules/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "type": "pipeline.trigger_build.pipeline", "source_type": "pipeline", "source_uuid": "16f3b56f-4934-4546-923c-287859851332", "target_type": "pipeline", "target_uuid": "d07d5d84-d1bd-479c-902c-ce8a01ce5aac", "effect": "allow", "action": "trigger_build", "created_at": "2024-08-26T03:22:45.555Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": 
"sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-08-29T10:10:03.000Z" } } ``` Required scope: `read_rules` Success response: `200 OK` ###### Create a rule ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/rules" \ -H "Content-Type: application/json" \ -d '{ "type": "pipeline.trigger_build.pipeline", "value": { "source_pipeline": "16f3b56f-4934-4546-923c-287859851332", "target_pipeline": "d07d5d84-d1bd-479c-902c-ce8a01ce5aac" } }' ``` ```json { "uuid": "42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "graphql_id": "Q2x1c3Rlci0tLTQyZjFhN2RhLTgxMmQtNDQzMC05M2Q4LTFjYzdjMzNhNmJjZg==", "organization_uuid": "f02d6a6f-7a0e-481d-9d6d-89b427aec48d", "url": "http://api.buildkite.com/v2/organizations/acme-inc/rules/42f1a7da-812d-4430-93d8-1cc7c33a6bcf", "type": "pipeline.trigger_build.pipeline", "source_type": "pipeline", "source_uuid": "16f3b56f-4934-4546-923c-287859851332", "target_type": "pipeline", "target_uuid": "d07d5d84-d1bd-479c-902c-ce8a01ce5aac", "effect": "allow", "action": "trigger_build", "created_at": "2024-08-26T03:22:45.555Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-08-29T10:10:03.000Z" } } ``` Required [request body properties](/docs/api#request-body-properties): | `type` | The rule type. Must match one of the [available rule types](/docs/pipelines/security/clusters/rules#rule-types). _Example:_ `"pipeline.trigger_build.pipeline"` or `"pipeline.artifacts_read.pipeline"` | `value` | A JSON object containing the value fields for the rule. `source_pipeline` and `target_pipeline` fields accept either a pipeline UUID or a pipeline slug. 
_Example:_ `{"source_pipeline": "16f3b56f-4934-4546-923c-287859851332", "target_pipeline": "d07d5d84-d1bd-479c-902c-ce8a01ce5aac"}` Required scope: `write_rules` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Reason for failure" }` ###### Delete a rule Deletes a rule. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/rules/{uuid}" ``` Required scope: `write_rules` Success response: `204 No Content` Error responses: | `422 Unprocessable Entity` | `{ "message": "Reason the rule couldn't be deleted" }` --- ### Schedules URL: https://buildkite.com/docs/apis/rest-api/pipeline-schedules #### Pipeline schedules API The pipeline schedules API endpoint allows you to manage [scheduled builds](/docs/pipelines/configure/workflows/scheduled-builds) for a pipeline. Pipeline schedules automatically create builds at specified intervals, such as nightly builds or hourly integration tests. ##### Pipeline schedule data model | `id` | UUID of the pipeline schedule. | `graphql_id` | [GraphQL ID](/docs/apis/graphql-api#graphql-ids) of the pipeline schedule. | `url` | Canonical API URL of the pipeline schedule. | `label` | Label describing the pipeline schedule. | `cronline` | The interval used to trigger builds. Either a [predefined interval](/docs/pipelines/configure/workflows/scheduled-builds#schedule-intervals) (such as `@hourly` or `@daily`) or a [crontab time syntax](/docs/pipelines/configure/workflows/scheduled-builds#schedule-intervals-crontab-time-syntax) string. | `message` | Message used for the builds created by the pipeline schedule. | `commit` | Commit used for the builds created by the pipeline schedule. Defaults to `HEAD`. | `branch` | Branch used for the builds created by the pipeline schedule. Defaults to the pipeline's default branch. | `env` | JSON object of environment variables to set on the builds created by the pipeline schedule.
| `enabled` | Whether the pipeline schedule is enabled. | `next_build_at` | When the next build will be created. | `failed_message` | Failure message from the most recent failed attempt to create a build, or `null` if the most recent attempt succeeded. | `failed_at` | When the most recent failed attempt to create a build occurred, or `null` if the most recent attempt succeeded. | `created_at` | When the pipeline schedule was created. | `created_by` | [User](/docs/apis/rest-api/user) who created the pipeline schedule. | `pipeline` | Reference to the parent pipeline, including its `id`, `slug`, and API `url`. ##### List pipeline schedules Returns a [paginated list](/docs/apis/rest-api#pagination) of the schedules for a pipeline, with the most recently created first. _Example request:_ ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/schedules" ``` _Example response:_ ```json [ { "id": "b3a1e9f2-7c4d-4f1a-9e6c-2d8a5f7b1c3d", "graphql_id": "UGlwZWxpbmVTY2hlZHVsZS0tLWIzYTFlOWYyLTdjNGQtNGYxYS05ZTZjLTJkOGE1ZjdiMWMzZA==", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline/schedules/b3a1e9f2-7c4d-4f1a-9e6c-2d8a5f7b1c3d", "label": "Nightly build", "cronline": "@daily", "message": "Nightly scheduled build", "commit": "HEAD", "branch": "main", "env": { "DEPLOY_ENV": "staging" }, "enabled": true, "next_build_at": "2024-01-02T00:00:00.000Z", "failed_message": null, "failed_at": null, "created_at": "2024-01-01T12:00:00.000Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-05-03T04:17:55.867Z" }, "pipeline": { "id": "9d1d1e9c-5e8f-4f9a-9b0c-1a2b3c4d5e6f", "slug": "my-pipeline", "url": 
"https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline" } } ] ``` Required scope: `read_pipelines` Success response: `200 OK` ##### Get a pipeline schedule Returns the details for a single pipeline schedule. _Example request:_ ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/schedules/{id}" ``` _Example response:_ ```json { "id": "b3a1e9f2-7c4d-4f1a-9e6c-2d8a5f7b1c3d", "graphql_id": "UGlwZWxpbmVTY2hlZHVsZS0tLWIzYTFlOWYyLTdjNGQtNGYxYS05ZTZjLTJkOGE1ZjdiMWMzZA==", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline/schedules/b3a1e9f2-7c4d-4f1a-9e6c-2d8a5f7b1c3d", "label": "Nightly build", "cronline": "@daily", "message": "Nightly scheduled build", "commit": "HEAD", "branch": "main", "env": { "DEPLOY_ENV": "staging" }, "enabled": true, "next_build_at": "2024-01-02T00:00:00.000Z", "failed_message": null, "failed_at": null, "created_at": "2024-01-01T12:00:00.000Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-05-03T04:17:55.867Z" }, "pipeline": { "id": "9d1d1e9c-5e8f-4f9a-9b0c-1a2b3c4d5e6f", "slug": "my-pipeline", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline" } } ``` Required scope: `read_pipelines` Success response: `200 OK` ##### Create a pipeline schedule Creates a new pipeline schedule. 
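The request body pairs the required `cronline` with optional build settings. The sketch below assembles and sanity-checks such a body before sending it; the helper name and the predefined-interval subset are illustrative, so consult the linked schedule-intervals docs for the authoritative list:

```python
import json

# Illustrative subset of the predefined intervals described in the docs
PREDEFINED_INTERVALS = {"@hourly", "@daily", "@weekly", "@monthly"}

def schedule_payload(cronline, label=None, message=None, branch=None, env=None, enabled=True):
    """Build the JSON body for the create-schedule endpoint (hypothetical helper)."""
    # Accept a predefined interval or a five-field crontab expression
    if cronline not in PREDEFINED_INTERVALS and len(cronline.split()) != 5:
        raise ValueError(f"unrecognized cronline: {cronline!r}")
    body = {"cronline": cronline, "enabled": enabled}
    for key, value in (("label", label), ("message", message), ("branch", branch), ("env", env)):
        if value is not None:
            body[key] = value
    return json.dumps(body)

print(schedule_payload("@daily", label="Nightly build", env={"DEPLOY_ENV": "staging"}))
```

The resulting string can be passed directly as the `-d` argument of the curl request below.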
_Example request:_ ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/schedules" \ -H "Content-Type: application/json" \ -d '{ "label": "Nightly build", "cronline": "@daily", "message": "Nightly scheduled build", "commit": "HEAD", "branch": "main", "env": { "DEPLOY_ENV": "staging" }, "enabled": true }' ``` _Example response:_ ```json { "id": "b3a1e9f2-7c4d-4f1a-9e6c-2d8a5f7b1c3d", "graphql_id": "UGlwZWxpbmVTY2hlZHVsZS0tLWIzYTFlOWYyLTdjNGQtNGYxYS05ZTZjLTJkOGE1ZjdiMWMzZA==", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline/schedules/b3a1e9f2-7c4d-4f1a-9e6c-2d8a5f7b1c3d", "label": "Nightly build", "cronline": "@daily", "message": "Nightly scheduled build", "commit": "HEAD", "branch": "main", "env": { "DEPLOY_ENV": "staging" }, "enabled": true, "next_build_at": "2024-01-02T00:00:00.000Z", "failed_message": null, "failed_at": null, "created_at": "2024-01-01T12:00:00.000Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-05-03T04:17:55.867Z" }, "pipeline": { "id": "9d1d1e9c-5e8f-4f9a-9b0c-1a2b3c4d5e6f", "slug": "my-pipeline", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline" } } ``` Required [request body properties](/docs/api#request-body-properties): | `cronline` | The interval used to trigger builds. Either a [predefined interval](/docs/pipelines/configure/workflows/scheduled-builds#schedule-intervals) or a [crontab time syntax](/docs/pipelines/configure/workflows/scheduled-builds#schedule-intervals-crontab-time-syntax) string. _Example:_ `"@daily"` Optional [request body properties](/docs/api#request-body-properties): | `label` | Label describing the pipeline schedule. 
_Example:_ `"Nightly build"` | `message` | Message used for the builds created by the pipeline schedule. _Example:_ `"Nightly scheduled build"` | `commit` | Commit used for the builds created by the pipeline schedule. _Default:_ `"HEAD"` | `branch` | Branch used for the builds created by the pipeline schedule. _Default:_ the pipeline's default branch | `env` | JSON object of environment variables to set on the builds created by the pipeline schedule. _Example:_ `{ "DEPLOY_ENV": "staging" }` | `enabled` | Whether the pipeline schedule is enabled. _Default:_ `true` Required scope: `write_pipelines` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Update a pipeline schedule Updates the pipeline schedule. _Example request:_ ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/schedules/{id}" \ -H "Content-Type: application/json" \ -d '{ "cronline": "@hourly", "enabled": false }' ``` _Example response:_ ```json { "id": "b3a1e9f2-7c4d-4f1a-9e6c-2d8a5f7b1c3d", "graphql_id": "UGlwZWxpbmVTY2hlZHVsZS0tLWIzYTFlOWYyLTdjNGQtNGYxYS05ZTZjLTJkOGE1ZjdiMWMzZA==", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline/schedules/b3a1e9f2-7c4d-4f1a-9e6c-2d8a5f7b1c3d", "label": "Nightly build", "cronline": "@hourly", "message": "Nightly scheduled build", "commit": "HEAD", "branch": "main", "env": { "DEPLOY_ENV": "staging" }, "enabled": false, "next_build_at": null, "failed_message": null, "failed_at": null, "created_at": "2024-01-01T12:00:00.000Z", "created_by": { "id": "3d3c3bf0-7d58-4afe-8fe7-b3017d5504de", "graphql_id": "VXNlci0tLTNkM2MzYmYwLTdkNTgtNGFmZS04ZmU3LWIzMDE3ZDU1MDRkZQo=", "name": "Sam Kim", "email": "sam@example.com", "avatar_url": "https://www.gravatar.com/avatar/example", "created_at": "2013-05-03T04:17:55.867Z" }, "pipeline": { "id": 
"9d1d1e9c-5e8f-4f9a-9b0c-1a2b3c4d5e6f", "slug": "my-pipeline", "url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/my-pipeline" } } ``` Optional [request body properties](/docs/api#request-body-properties): | `label` | Label describing the pipeline schedule. _Example:_ `"Nightly build"` | `cronline` | The interval used to trigger builds. _Example:_ `"@hourly"` | `message` | Message used for the builds created by the pipeline schedule. _Example:_ `"Nightly scheduled build"` | `commit` | Commit used for the builds created by the pipeline schedule. _Example:_ `"HEAD"` | `branch` | Branch used for the builds created by the pipeline schedule. _Example:_ `"main"` | `env` | JSON object of environment variables to set on the builds created by the pipeline schedule. _Example:_ `{ "DEPLOY_ENV": "staging" }` | `enabled` | Whether the pipeline schedule is enabled. Re-enabling a previously failed schedule clears its `failed_message` and `failed_at` values. _Example:_ `true` Required scope: `write_pipelines` Success response: `200 OK` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Delete a pipeline schedule Deletes a pipeline schedule. _Example request:_ ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/pipelines/{pipeline.slug}/schedules/{id}" ``` Required scope: `write_pipelines` Success response: `204 No Content` --- ### Overview URL: https://buildkite.com/docs/apis/rest-api/teams #### Teams API The teams API endpoint allows you to view and manage teams within an organization. 
##### Team data model | `id` | ID of the team | `name` | Name of the team | `slug` | URL slug of the team | `description` | Description of the team | `privacy` | Privacy setting of the team (`visible`, `secret`) | `default` | Whether users join this team by default (`true`, `false`) | `created_at` | When the team was created | `created_by` | User who created the team ##### List teams Returns a [paginated list](/docs/apis/rest-api#pagination) of an organization's teams. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/teams" ``` ```json [ { "id": "c5e09619-8648-4896-a936-9d0b8b7b3fe9", "graphql_id": "VGVhbS0tLWM1ZTA5NjE5LTg2NDgtNDg5Ni1hOTM2LTlkMGI4YjdiM2ZlOQ==", "name": "Fearless Frontenders", "slug": "fearless-frontenders", "description": "", "created_at": "2023-03-14T00:45:16.215Z", "privacy": "secret", "default": true, "created_by": { "id": "875ce846-f6c0-4360-8133-389b03c7c46a", "graphql_id": "VXNlci0tLTg3NWNlODQ2LWY2YzAtNDM2MC04MTMzLTM4OWIwM2M3YzQ2YQ==", "name": "Peter Pettigrew", "email": "pp@hogwarts.co.uk", "avatar_url": "https://www.gravatar.com/avatar/aa9e3513ea543edb9143cbcca425e56c", "created_at": "2022-01-18T02:51:30.983Z" } } ] ``` Optional [query string parameters](/docs/api#query-string-parameters): | `user_id` | Filters the results to teams that have the given user as a member.
_Example:_ `?user_id=5acb99cf-d349-4189-b361-d1b9f36d70d7` Required scope: `view_teams` Success response: `200 OK` ##### Get a team ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}" ``` ```json { "id": "c5e09619-8648-4896-a936-9d0b8b7b3fe9", "graphql_id": "VGVhbS0tLWM1ZTA5NjE5LTg2NDgtNDg5Ni1hOTM2LTlkMGI4YjdiM2ZlOQ==", "name": "Fearless Frontenders", "slug": "fearless-frontenders", "description": "", "created_at": "2023-03-14T00:45:16.215Z", "privacy": "secret", "default": true, "created_by": { "id": "875ce846-f6c0-4360-8133-389b03c7c46a", "graphql_id": "VXNlci0tLTg3NWNlODQ2LWY2YzAtNDM2MC04MTMzLTM4OWIwM2M3YzQ2YQ==", "name": "Peter Pettigrew", "email": "pp@hogwarts.co.uk", "avatar_url": "https://www.gravatar.com/avatar/aa9e3513ea543edb9143cbcca425e56c", "created_at": "2022-01-18T02:51:30.983Z" } } ``` Required scope: `view_teams` Success response: `200 OK` ##### Create a team Creates a team. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/teams" \ -H "Content-Type: application/json" \ -d '{ "name": "Barefoot Backenders", "description": "Backend engineers at Acme Inc", "privacy": "secret", "is_default_team": false, "default_member_role": "member", "members_can_create_pipelines": true, "members_can_create_suites": true, "members_can_create_registries": true, "members_can_destroy_registries": false, "members_can_destroy_packages": false }' ``` ```json { "name": "Barefoot Backenders", "description": "Backend engineers at Acme Inc", "privacy": "secret", "is_default_team": false, "default_member_role": "member", "members_can_create_pipelines": true, "members_can_create_suites": true, "members_can_create_registries": true, "members_can_destroy_registries": false, "members_can_destroy_packages": false } ``` Required [request body properties](/docs/api#request-body-properties): | `name` | Name of the team | `description` | Description
of the team | `privacy` | Privacy setting of the team (`visible`, `secret`) | `is_default_team` | Whether new organization members are assigned to this team by default (`true`, `false`) | `default_member_role` | The default role assigned to members of this team (`member`, `maintainer`) | `members_can_create_pipelines` | Whether or not team members can create new pipelines (`true`, `false`) | `members_can_create_suites` | Whether or not team members can create new test suites (`true`, `false`) | `members_can_create_registries` | Whether or not team members can create new registries (`true`, `false`) | `members_can_destroy_registries` | Whether or not team members can destroy registries (`true`, `false`) | `members_can_destroy_packages` | Whether or not team members can destroy packages (`true`, `false`) Required scope: `write_teams` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Update a team Updates a team. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X PATCH "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}" \ -H "Content-Type: application/json" \ -d '{ "name": "New and Improved Backenders V2!", "description": "Updated backend engineers team at Acme Inc", "privacy": "visible", "is_default_team": true, "default_member_role": "maintainer", "members_can_create_pipelines": false, "members_can_create_suites": false, "members_can_create_registries": true, "members_can_destroy_registries": false, "members_can_destroy_packages": false }' ``` ```json { "name": "New and Improved Backenders V2!", "description": "Updated backend engineers team at Acme Inc", "privacy": "visible", "is_default_team": true, "default_member_role": "maintainer", "members_can_create_pipelines": false, "members_can_create_suites": false, "members_can_create_registries": true, "members_can_destroy_registries": false, "members_can_destroy_packages": false } ``` Required [request body properties](/docs/api#request-body-properties): | `name` | Name of the team | `description` | Description of the team | `privacy` | Privacy setting of the team (`visible`, `secret`) | `is_default_team` | Whether new organization members are assigned to this team by default (`true`, `false`) | `default_member_role` | The default role assigned to members of this team (`member`, `maintainer`) | `members_can_create_pipelines` | Whether or not team members can create new pipelines (`true`, `false`) | `members_can_create_suites` | Whether or not team members can create new test suites (`true`, `false`) | `members_can_create_registries` | Whether or not team members can create new registries (`true`, `false`) | `members_can_destroy_registries` | Whether or not team members can destroy registries (`true`, `false`) | `members_can_destroy_packages` | Whether or not team members can destroy packages (`true`, `false`) Required scope: `write_teams` Success response: `200 OK` Error responses: | `422 
Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Delete a team Remove a team. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}" ``` Required scope: `write_teams` Success response: `204 No Content` Error responses: | `422 Unprocessable Entity` | `{ "message": "Reason the team couldn't be deleted" }` ##### Related endpoints - [Team members](/docs/apis/rest-api/teams/members) - manage members of a team - [Team pipelines](/docs/apis/rest-api/teams/pipelines) - assign pipelines to teams - [Team suites](/docs/apis/rest-api/teams/suites) - assign test suites to teams --- ### Members URL: https://buildkite.com/docs/apis/rest-api/teams/members #### Team members API The team members API endpoint allows users to review, create, update, and delete members associated with a team in your organization. ##### Team member data model | `user_name` | The name of the user | `user_id` | The UUID of the user | `created_at` | When the team and user association was created | `role` | The role the member has within the team - `member` or `maintainer` ##### List team members Returns a list of a team's associated members. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/members" ``` ```json [ { "role": "member", "created_at": "2023-03-14T00:49:55.534Z", "user_id": "978ce846-f6c0-4360-8133-389b03cus7a", "user_name": "Severus Snape" }, { "role": "member", "created_at": "2023-03-14T00:49:55.534Z", "user_id": "3878ce86-f6c0-4360-8133-389b0372", "user_name": "Draco Malfoy" } ] ``` Required scope: `view_teams` Success response: `200 OK` ##### Get a team member ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/members/{user.uuid}" ``` ```json { "role": "member", "created_at": "2023-12-15T00:23:23.823Z", "user_id": "018c6030-b459-45b2-a844-951f0fc8a4e7", "user_name": "Dolores Umbridge" } ``` Required scope: `view_teams` Success response: `200 OK` ##### Create a team member Creates an association between a team and a user. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/members/" \ -H "Content-Type: application/json" \ -d '{ "user_id": "6030-b459-45b2-a844-951f0fs727", "role": "maintainer" }' ``` ```json { "role": "maintainer", "created_at": "2023-12-14T00:43:04.675Z", "user_id": "875ce846-f6c0-4360-8133-389b03c7c46a", "user_name": "Professor Quirrel" } ``` Required [request body properties](/docs/api#request-body-properties): | `user_id` | The UUID of the user. | `role` | The role the member has within the team - `member` or `maintainer` Required scope: `write_teams` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Update a team member Updates an association between a team and a user.
```bash curl -H "Authorization: Bearer $TOKEN" \ -X PATCH "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/members/{user.uuid}" \ -H "Content-Type: application/json" \ -d '{ "role": "member" }' ``` ```json { "role": "member", "created_at": "2023-12-15T00:23:23.823Z", "user_id": "027c6030-b459-45b2-a844-951f0fc8a4e7", "user_name": "Ron Weasley" } ``` Required [request body properties](/docs/api#request-body-properties): | `role` | The role the member has within the team - `member` or `maintainer` Required scope: `write_teams` Success response: `200 OK` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Delete a team member Remove the association between a team and a user. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/members/{user.uuid}" ``` Required scope: `write_teams` Success response: `204 No Content` Error responses: | `422 Unprocessable Entity` | `{ "message": "Reason the team member couldn't be deleted" }` --- ### Pipelines URL: https://buildkite.com/docs/apis/rest-api/teams/pipelines #### Team pipelines API The team pipelines API endpoint allows users to review, create, update, and delete pipelines associated with a team in your organization. ##### Team pipeline data model | `pipeline_id` | UUID of the pipeline | `access_level` | The access levels that users have to the associated pipeline - `read_only`, `build_and_read`, `manage_build_and_read` | `pipeline_url` | URL of the pipeline | `created_at` | When the team and pipeline association was created ##### List team pipelines Returns a [paginated list](/docs/apis/rest-api#pagination) of a team's associated pipelines. 
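Each entry pairs a pipeline with exactly one of the three access levels. As a short sketch, here is how a client might filter a response body down to the pipelines a team can fully manage; the body below is a sample in the documented shape, with illustrative values:

```python
import json

# Sample response body in the documented team-pipelines shape (values illustrative)
body = """[
  {"access_level": "manage_build_and_read", "created_at": "2023-12-12T21:57:40.306Z",
   "pipeline_id": "018c5ad7-28f1-45d4-867e-b59fa04511b2",
   "pipeline_url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/test-pipeline"},
  {"access_level": "read_only", "created_at": "2023-12-13T09:15:02.118Z",
   "pipeline_id": "02911b42-7b90-48b7-8c4a-67bb38a26b55",
   "pipeline_url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/docs"}
]"""

# Keep only the pipelines with the fullest access level
managed = [entry["pipeline_id"] for entry in json.loads(body)
           if entry["access_level"] == "manage_build_and_read"]
print(managed)
```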
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/pipelines" ``` ```json [ { "access_level": "manage_build_and_read", "created_at": "2023-12-12T21:57:40.306Z", "pipeline_id": "018c5ad7-28f1-45d4-867e-b59fa04511b2", "pipeline_url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/test-pipeline" } ] ``` Required scope: `view_teams` Success response: `200 OK` ##### Get a team pipeline ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/pipelines/{pipeline.uuid}" ``` ```json { "access_level": "read_only", "created_at": "2023-12-12T21:57:40.306Z", "pipeline_id": "018c5ad7-28f1-45d4-867e-b59fa04511b2", "pipeline_url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/test-pipeline" } ``` Required scope: `view_teams` Success response: `200 OK` ##### Create a team pipeline Creates an association between a team and a pipeline. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/pipelines/" \ -H "Content-Type: application/json" \ -d '{ "pipeline_id": "pipeline.uuid", "access_level": "read_only" }' ``` ```json { "access_level": "read_only", "created_at": "2023-12-12T21:57:40.306Z", "pipeline_id": "018c5ad7-28f1-45d4-867e-b59fa04511b2", "pipeline_url": "https://api.buildkite.com/v2/organizations/acme-inc/pipelines/test-pipeline" } ``` Required [request body properties](/docs/api#request-body-properties): | `pipeline_id` | The UUID of the pipeline. | `access_level` | The access level for the pipeline - `read_only`, `build_and_read` or `manage_build_and_read`. Required scope: `write_teams` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Update a team pipeline Updates an association between a team and a pipeline.
```bash curl -H "Authorization: Bearer $TOKEN" \ -X PATCH "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/pipelines/{pipeline.uuid}" \ -H "Content-Type: application/json" \ -d '{ "access_level": "read_only" }' ``` ```json { "access_level": "read_only", "created_at": "2023-12-12T21:57:40.306Z", "pipeline_id": "018c5ad7-28f1-45d4-867e-b59fa04511b2", "pipeline_url": "http://api.buildkite.com/v2/organizations/acme-inc/pipelines/test-pipeline" } ``` Required [request body properties](/docs/api#request-body-properties): | `access_level` | The access level for the pipeline - `read_only`, `build_and_read` or `manage_build_and_read`. Required scope: `write_teams` Success response: `200 OK` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Delete a team pipeline Remove the association between a team and a pipeline. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/pipelines/{pipeline.uuid}" ``` Required scope: `write_teams` Success response: `204 No Content` Error responses: | `422 Unprocessable Entity` | `{ "message": "Reason the team pipeline couldn't be deleted" }` --- ### Suites URL: https://buildkite.com/docs/apis/rest-api/teams/suites #### Team suites API The team suites API endpoint allows users to review, create, update, and delete test suites associated with a team in your organization. ##### Team suite data model | `suite_id` | UUID of the suite | `suite_url` | URL of the suite | `created_at` | When the team and suite association was created | `access_level` | The access levels that users have to the associated suite - `edit`, `read` ##### List team suites Returns a list of a team's associated suites. 
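Unlike a team pipeline, which carries a single `access_level` string, a team suite's `access_level` is an array of levels. A small illustrative check against a response object shaped like the ones below (the `can_edit` helper is an assumption for the sketch, not part of the API):

```python
def can_edit(team_suite):
    """True when the team's access levels for the suite include edit."""
    return "edit" in team_suite.get("access_level", [])

print(can_edit({"access_level": ["read", "edit"]}))  # True
print(can_edit({"access_level": ["read"]}))          # False
```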
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/suites" ``` ```json [ { "access_level": [ "read" ], "created_at": "2024-01-11T04:24:21.352Z", "suite_id": "19f3973d-1e0b-43f1-b490-22be52abd99a", "suite_url": "https://api.buildkite.com/v2/analytics/organizations/acme-corp/suites/suite-dreams" }, { "access_level": [ "read", "edit" ], "created_at": "2024-01-11T04:24:21.352Z", "suite_id": "19f3973d-1e0b-43f1-b490-22besa5299a", "suite_url": "https://api.buildkite.com/v2/analytics/organizations/acme-corp/suites/suite-and-sour" } ] ``` Required scope: `view_teams` Success response: `200 OK` ##### Get a team suite ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/suites/{suite.uuid}" ``` ```json { "access_level": [ "read" ], "created_at": "2024-01-11T04:24:21.352Z", "suite_id": "19f3973d-1e0b-43f1-b490-22besa5299a", "suite_url": "https://api.buildkite.com/v2/analytics/organizations/acme-corp/suites/suite-and-sour" } ``` Required scope: `view_teams` Success response: `200 OK` ##### Create a team suite Creates an association between a team and a suite. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/suites/" \ -H "Content-Type: application/json" \ -d '{ "suite_id": "suite.uuid", "access_level": ["read", "edit"] }' ``` ```json { "access_level": [ "read", "edit" ], "created_at": "2024-01-11T04:39:18.638Z", "suite_id": "192k973d-1e0b-43f1-b490-22be52abd99a", "suite_url": "https://api.buildkite.com/v2/analytics/organizations/acme-inc/suites/suiteheart" } ``` Required [request body properties](/docs/api#request-body-properties): | `suite_id` | The UUID of the suite. 
| `access_level` | The access levels for team members to the associated suite - `read`, `edit` Required scope: `write_teams` Success response: `201 Created` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Update a team suite Updates an association between a team and a suite. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PATCH "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/suites/{suite.uuid}" \ -H "Content-Type: application/json" \ -d '{ "access_level": ["edit", "read"] }' ``` ```json { "access_level": [ "edit", "read" ], "created_at": "2024-01-11T04:56:53.516Z", "suite_id": "19f3973d-1e0b-43f1-b490-22be52abd99a", "suite_url": "https://api.buildkite.com/v2/analytics/organizations/acme-inc/suites/suiteness" } ``` Required [request body properties](/docs/api#request-body-properties): | `access_level` | The access level for the suite - `read` or `edit` Required scope: `write_teams` Success response: `200 OK` Error responses: | `422 Unprocessable Entity` | `{ "message": "Validation failed: Reason for failure" }` ##### Delete a team suite Remove the association between a team and a suite. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/organizations/{org.slug}/teams/{team.uuid}/suites/{suite.uuid}/" ``` Required scope: `write_teams` Success response: `204 No Content` Error responses: | `422 Unprocessable Entity` | `{ "message": "Reason the team suite couldn't be deleted" }` --- ### Flaky tests URL: https://buildkite.com/docs/apis/rest-api/test-engine/flaky-tests #### Flaky tests API To retrieve flaky tests using the API, use the [list tests endpoint](/docs/apis/rest-api/test-engine/tests#list-tests) with the `label=flaky` query parameter: ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/tests?label=flaky" ``` Required scope: `read_suites` Success response: `200 OK` ##### Legacy flaky tests endpoint (deprecated) > 🚧 This endpoint is deprecated > Use the [list tests endpoint](/docs/apis/rest-api/test-engine/tests#list-tests) with `?label=flaky` instead. The legacy flaky tests endpoint is still available but no longer recommended. It does not return the same data as the Buildkite Test Engine UI. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/flaky-tests" ``` ```json [ { "id": "01867216-8478-7fde-a55a-0300f88bb49b", "web_url": "https://buildkite.com/organizations/my_great_org/analytics/suites/my_suite_name/tests/01867216-8478-7fde-a55a-0300f88bb49b", "scope": "User#email", "name": "is correctly formatted", "location": "./spec/models/user_spec.rb:42", "file_name": "./spec/models/user_spec.rb", "instances": 1, "latest_occurrence_at": "2024-07-15T00:07:02.547Z", "most_recent_instance_at": "2024-07-15T00:07:02.547Z", "last_resolved_at": null, "ownership_team_ids": ["4c15a4c7-6674-4585-b592-4adcc8630383", "d30fd7ba-82d8-487f-9d98-6e1a057bcca8"] } ] ``` Optional [query string parameters](/docs/api#query-string-parameters): | `search` | Returns flaky tests with a `name` or `scope` that contains the search string. Users with the [Ruby test collector](/docs/test-engine/test-collection/ruby-collectors) installed can also filter results by `location`. _Example:_ `?search="User#find_email"`, `?search="/billing_spec"` | `branch` | Returns flaky tests for flakes detected one or more times on the branch whose name is specified by the `branch` value. _Example:_ `?branch=main` | `period` | Filters the results by the given time `period`. Valid values are `1hour`, `4hours`, `1day`, `7days`, `14days`, and `28days`. The default period when no `period` value is specified is `7days`. _Example:_ `?period=28days` Required scope: `read_suites` Success response: `200 OK` --- ### Quarantine URL: https://buildkite.com/docs/apis/rest-api/test-engine/quarantine #### Quarantine API Buildkite customers on the [Pro or Enterprise plans](https://buildkite.com/pricing) can access the Buildkite Test Engine quarantine feature. 
Before using the API calls on this page, ensure that test state management has been enabled for your suite (through your test suite's **Settings** > **Test state** page), and that the relevant **Lifecycle** states have been selected on this page. ##### Update test state ###### Skip test ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/tests/{test.id}/skip" ``` ```json { "id":"80b455d3-d197-8c6d-a7bf-09d252c1bf6e", "url":"http://api.buildkite.com/v2/analytics/organizations/buildkite/suites/my-sample-suite/tests/80b455d3-d197-8c6d-a7bf-09d252c1bf6e", "web_url":"http://buildkite.com/organizations/buildkite/analytics/suites/my-sample-suite/tests/80b455d3-d197-8c6d-a7bf-09d252c1bf6e", "scope":"Flaky test", "name":"passes only on the second try on BK CI", "location":"./spec/flaky_spec.rb:6", "file_name":"./spec/flaky_spec.rb" } ``` Required scope: `write_suites` Success response: `200 OK` ###### Mute test ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/tests/{test.id}/mute" ``` ```json { "id":"80b455d3-d197-8c6d-a7bf-09d252c1bf6e", "url":"http://api.buildkite.com/v2/analytics/organizations/buildkite/suites/my-sample-suite/tests/80b455d3-d197-8c6d-a7bf-09d252c1bf6e", "web_url":"http://buildkite.com/organizations/buildkite/analytics/suites/my-sample-suite/tests/80b455d3-d197-8c6d-a7bf-09d252c1bf6e", "scope":"Flaky test", "name":"passes only on the second try on BK CI", "location":"./spec/flaky_spec.rb:6", "file_name":"./spec/flaky_spec.rb" } ``` Required scope: `write_suites` Success response: `200 OK` ###### Enable test ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/tests/{test.id}/enable" ``` ```json { "id":"80b455d3-d197-8c6d-a7bf-09d252c1bf6e", 
"url":"http://api.buildkite.com/v2/analytics/organizations/buildkite/suites/my-sample-suite/tests/80b455d3-d197-8c6d-a7bf-09d252c1bf6e", "web_url":"http://buildkite.com/organizations/buildkite/analytics/suites/my-sample-suite/tests/80b455d3-d197-8c6d-a7bf-09d252c1bf6e", "scope":"Flaky test", "name":"passes only on the second try on BK CI", "location":"./spec/flaky_spec.rb:6", "file_name":"./spec/flaky_spec.rb" } ``` Required scope: `write_suites` Success response: `200 OK` ##### List quarantined tests A list of skipped tests or muted tests can be retrieved via the following APIs. You can use this list to configure your test runner to skip or ignore failures for these tests. ###### Muted tests ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/tests/muted" ``` ```json [ { "id":"160988e4-836e-88ab-af45-22170a169e23", "url":"http://api.buildkite.com/v2/analytics/organizations/buildkite/suites/my-sample-suite/tests/160988e4-836e-88ab-af45-22170a169e23", "web_url":"http://buildkite.com/organizations/buildkite/analytics/suites/my-sample-suite/tests/160988e4-836e-88ab-af45-22170a169e23", "scope":"Flaky test", "name":"passes only on the second try on BK CI", "location":"flaky.spec.js:1", "file_name":"flaky.spec.js" } ] ``` Required scope: `read_suites` Success response: `200 OK` ###### Skipped tests ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PUT "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/tests/skipped" ``` ```json [ { "id":"160988e4-836e-88ab-af45-22170a169e23", "url":"http://api.buildkite.com/v2/analytics/organizations/buildkite/suites/my-sample-suite/tests/160988e4-836e-88ab-af45-22170a169e23", "web_url":"http://buildkite.com/organizations/buildkite/analytics/suites/my-sample-suite/tests/160988e4-836e-88ab-af45-22170a169e23", "scope":"Flaky test", "name":"passes only on the second try on BK CI", "location":"flaky.spec.js:1", 
"file_name":"flaky.spec.js" } ] ``` Required scope: `read_suites` Success response: `200 OK` --- ### Runs URL: https://buildkite.com/docs/apis/rest-api/test-engine/runs #### Runs API ##### List all runs Returns a [paginated list](/docs/apis/rest-api#pagination) of runs in a test suite. ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/runs" ``` ```json [ { "id": "64374307-12ab-4b13-a3f3-6a408f644ea2", "branch": "main", "commit_sha": "1c3214fcceb2c14579a2c3c50cd78f1442fd8936", "state": "finished", "result": "passed", "url": "https://api.buildkite.com/v2/analytics/organizations/my_great_org/suites/my_suite_slug/runs/64374307-12ab-4b13-a3f3-6a408f644ea2", "web_url": "https://buildkite.com/organizations/my_great_org/analytics/suites/my_suite_slug/runs/64374307-12ab-4b13-a3f3-6a408f644ea2", "build_id": "89c02425-7712-4ee5-a694-c94b56b4d54c", "created_at": "2023-06-25T05:32:53.228Z" } ] ``` Optional [query string parameters](/docs/api#query-string-parameters): | `build_id` | Filters the results by the given build UUID. 
_Example:_ `?build_id=018c133d-5419-7fe3-9e9e-3c51464490a2` Required scope: `read_suites` Success response: `200 OK` ##### Get a run ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/runs/{run.id}" ``` ```json { "id": "64374307-12ab-4b13-a3f3-6a408f644ea2", "branch": "main", "commit_sha": "1c3214fcceb2c14579a2c3c50cd78f1442fd8936", "state": "finished", "result": "passed", "url": "https://api.buildkite.com/v2/analytics/organizations/my_great_org/suites/my_suite_slug/runs/64374307-12ab-4b13-a3f3-6a408f644ea2", "web_url": "https://buildkite.com/organizations/my_great_org/analytics/suites/my_suite_slug/runs/64374307-12ab-4b13-a3f3-6a408f644ea2", "build_id": "89c02425-7712-4ee5-a694-c94b56b4d54c", "created_at": "2023-06-25T05:32:53.228Z" } ``` Required scope: `read_suites` Success response: `200 OK` Runs are created with a `state` of `running` and proceed to `finished` when all uploads have been processed. The run may return to `running` if additional results are uploaded. A run's `result` starts as `pending` and will proceed to `passed` or `failed` when at least one test result has been processed. The presence of a `passed` or `failed` result does not indicate that the run has finished processing. `result` may change from `passed` to `failed` if additional results are uploaded. The `result` is `failed` when there is at least one failing test in the run, and it is not possible for `result` to change from `failed` to any other state. If a run receives no results within a reasonable time period, its `result` will proceed to `stale`. ##### Get failed execution data Returns a [paginated list](/docs/apis/rest-api#pagination) of failed executions for a run. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/runs/{run.id}/failed_executions" ``` ```json [ { "execution_id": "60f0e64c-ae4b-870e-b41f-5431205caf06", "run_id": "075bbcd9-662c-86f5-9d40-adfa6549eff1", "test_id": "f6cb6c43-df94-8b60-81ed-14f9db7bbfd8", "run_name": "075bbcd9-662c-86f5-9d40-adfa6549eff1", "commit_sha": "1c3214fcceb2c14579a2c3c50cd78f1442fd8936", "created_at": "2025-02-03T05:32:53.228Z", "branch": "main", "failure_reason": "it didn't work", "duration": 3.79073, "location": "./spec/models/user.rb:23", "test_name": "Deploy should be available", "run_url": "https://buildkite.com/organizations/buildkite/analytics/suites/my-test-suite/runs/075bbcd9-662c-86f5-9d40-adfa6549eff1", "test_url": "https://buildkite.com/organizations/buildkite/analytics/suites/my-test-suite/tests/f6cb6c43-df94-8b60-81ed-14f9db7bbfd8", "test_execution_url": "https://buildkite.com/organizations/buildkite/analytics/suites/my-test-suite/tests/f6cb6c43-df94-8b60-81ed-14f9db7bbfd8?execution_id=60f0e64c-ae4b-870e-b41f-5431205caf06", "tags": { "language.name": "ruby", "language.version": "3.3.6", "custom.env": "staging" } } ] ``` The `tags` field contains user-defined metadata associated with the execution, returned as a flat map with dot-notation keys. Tags that duplicate other fields already present in the response (`result`, `scm.branch`, `scm.commit_sha`) are excluded. If no user-defined tags exist, this field returns an empty object (`{}`). Required scope: `read_suites` Success response: `200 OK` --- ### Suites URL: https://buildkite.com/docs/apis/rest-api/test-engine/suites #### Suites API ##### List all suites Returns a [paginated list](/docs/apis/rest-api#pagination) of an organization's suites. 
```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites" ``` ```json [ { "id": "3e979a94-a479-4a6e-ab8d-8b6607ffb62c", "graphql_id": "U3VpdGUtLS0zZTk3OWE5NC1hNDc5LTRhNmUtYWI4ZC04YjY2MDdmZmI2MmM=", "slug":"my_suite_slug", "name":"My suite name", "url":"https://api.buildkite.com/v2/analytics/organizations/my_great_org/suites/my_suite_slug", "web_url":"https://buildkite.com/organizations/my_great_org/analytics/suites/my_suite_slug", "default_branch":"main" } ] ``` Optional [query string parameters](/docs/api#query-string-parameters): | `show_api_token` | Return the suite's API token in the response. A 403 Forbidden error is returned if the user does not have permission to view the suite's API token. _Default value:_ `false`. _Example:_ `?show_api_token=true` Required scope: `read_suites` Success response: `200 OK` ##### Get a suite ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}" ``` ```json { "id": "3e979a94-a479-4a6e-ab8d-8b6607ffb62c", "graphql_id": "U3VpdGUtLS0zZTk3OWE5NC1hNDc5LTRhNmUtYWI4ZC04YjY2MDdmZmI2MmM=", "slug":"my_suite_slug", "name":"My suite name", "url":"https://api.buildkite.com/v2/analytics/organizations/my_great_org/suites/my_suite_slug", "web_url":"https://buildkite.com/organizations/my_great_org/analytics/suites/my_suite_slug", "default_branch":"main" } ``` Optional [query string parameters](/docs/api#query-string-parameters): | `show_api_token` | Return the suite's API token in the response. A 403 Forbidden error is returned if the user does not have permission to view the suite's API token. _Default value:_ `false`. 
_Example:_ `?show_api_token=true` Required scope: `read_suites` Success response: `200 OK` ##### Create a suite ```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites" \ -H "Content-Type: application/json" \ -d '{ "name": "Jasmine", "default_branch": "main", "application_name": "Buildkite", "color": "#FFF700", "emoji": "🍋", "show_api_token": true, "team_ids": ["3f4aa5ee-671b-41b0-9b44-b94831db6cc8"] }' ``` ```json { "id": "3e979a94-a479-4a6e-ab8d-8b6607ffb62c", "graphql_id": "U3VpdGUtLS0zZTk3OWE5NC1hNDc5LTRhNmUtYWI4ZC04YjY2MDdmZmI2MmM=", "slug": "jasmine", "name": "Jasmine", "url": "https://api.buildkite.com/v2/analytics/organizations/my_great_org/suites/jasmine", "web_url": "https://buildkite.com/organizations/my_great_org/analytics/suites/jasmine", "default_branch": "main", "application_name": "Buildkite", "color": "#FFF700", "emoji": "🍋", "api_token": "AAAAAAAAAAAAAAAAAAAAAAAA" } ``` Required [request body properties](/docs/api#request-body-properties): | `name` | Name of the new suite. _Example:_ `"Jasmine"`. | `default_branch` | Your test suite will default to showing trends for this default branch, but collect data for all test runs. _Example:_ `"main"` or `"master"`. Optional [request body properties](/docs/api#request-body-properties): | `show_api_token` | Return the suite's API token in the response. _Default value:_ `false`. | `team_ids` | An array of team UUIDs to add this suite to. You can find your team's UUID either using the [GraphQL API](/docs/apis/graphql-api), or on the Settings page for a team. This property is only available if your organization has enabled Teams, in which case it is a required field. _Example:_ `"team_ids": ["3f4aa5ee-671b-41b0-9b44-b94831db6cc8"]` | `application_name` | Application name for the suite. _Example:_ `"Buildkite"` | `color` | Color for the suite navatar. _Example:_ `"#FFF700"` | `emoji` | Emoji for the suite navatar. 
Check out our [documentation for supported emoji](https://github.com/buildkite/emojis?tab=readme-ov-file#emoji-reference). _Example:_ `"🍋"`, `"\:lemon\:"` Required scope: `write_suites` Success response: `201 Created` ##### Update a suite ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PATCH "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}" \ -H "Content-Type: application/json" \ -d '{ "name": "Jasmine", "default_branch": "main" }' ``` ```json { "id": "3e979a94-a479-4a6e-ab8d-8b6607ffb62c", "graphql_id": "U3VpdGUtLS0zZTk3OWE5NC1hNDc5LTRhNmUtYWI4ZC04YjY2MDdmZmI2MmM=", "slug": "jasmine", "name": "Jasmine", "url": "https://api.buildkite.com/v2/analytics/organizations/my_great_org/suites/jasmine", "web_url": "https://buildkite.com/organizations/my_great_org/analytics/suites/jasmine", "default_branch": "main" } ``` Optional [request body properties](/docs/api#request-body-properties): | `name` | Name of the suite. _Example:_ `"Jasmine"`. | `default_branch` | Your test suite will default to showing trends for this default branch, but collect data for all test runs. _Example:_ `"main"` or `"master"`. | `application_name` | Application name for the suite. _Example:_ `"Buildkite"` | `color` | Color for the suite navatar. _Example:_ `"#ffb7c5"` | `emoji` | Emoji for the suite navatar. Check out our [documentation for supported emoji.](https://github.com/buildkite/emojis?tab=readme-ov-file#emoji-reference) _Example:_ `"🌸"`, `"\:cherry_blossom\:"` | `show_api_token` | Return the suite's API token in the response. _Default value:_ `false`. 
Required scope: `write_suites` Success response: `200 OK` ##### Delete a suite ```bash curl -H "Authorization: Bearer $TOKEN" \ -X DELETE "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}" ``` Required scope: `write_suites` Success response: `204 No Content` --- ### Tests URL: https://buildkite.com/docs/apis/rest-api/test-engine/tests #### Tests API ##### List tests ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/tests" ``` ```json [ { "id": "01867216-8478-7fde-a55a-0300f88bb49b", "url": "https://api.buildkite.com/v2/analytics/organizations/my_great_org/suites/my_suite_name/tests/01867216-8478-7fde-a55a-0300f88bb49b", "web_url": "https://buildkite.com/organizations/my_great_org/analytics/suites/my_suite_name/tests/01867216-8478-7fde-a55a-0300f88bb49b", "scope": "User#email", "name": "is correctly formatted", "location": "./spec/models/user_spec.rb:42", "file_name": "./spec/models/user_spec.rb" } ] ``` Optional [query string parameters](/docs/api#query-string-parameters): | `label` | Filters the results by test label. 
_Example:_ `?label=flaky` Required scope: `read_suites` Success response: `200 OK` ##### Get a test ```bash curl -H "Authorization: Bearer $TOKEN" \ -X GET "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/tests/{test.id}" ``` ```json { "id": "01867216-8478-7fde-a55a-0300f88bb49b", "url": "https://api.buildkite.com/v2/analytics/organizations/my_great_org/suites/my_suite_name/tests/01867216-8478-7fde-a55a-0300f88bb49b", "web_url": "https://buildkite.com/organizations/my_great_org/analytics/suites/my_suite_name/tests/01867216-8478-7fde-a55a-0300f88bb49b", "scope": "User#email", "name": "is correctly formatted", "location": "./spec/models/user_spec.rb:42", "file_name": "./spec/models/user_spec.rb" } ``` Required scope: `read_suites` Success response: `200 OK` ##### Find a test with scope and name In some situations, you may not have a test's UUID available when calling the Test Engine API. You can locate the test record using its scope and name, then retrieve the UUID from the response. 
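A sketch of that lookup in Python (standard library only; the request/response plumbing is stubbed out, and `sample_response` is a stand-in for the JSON the endpoint returns):

```python
import json

def find_test_payload(scope, name):
    """JSON body for the find-test endpoint."""
    return json.dumps({"scope": scope, "name": name})

payload = find_test_payload("User#email", "is correctly formatted")

# Stand-in for the JSON returned by POST .../tests/find:
sample_response = {
    "id": "01867216-8478-7fde-a55a-0300f88bb49b",
    "scope": "User#email",
    "name": "is correctly formatted",
}
test_uuid = sample_response["id"]
print(test_uuid)
```

The extracted `test_uuid` can then be used with the other endpoints on this page that take `{test.id}`.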
```bash curl -H "Authorization: Bearer $TOKEN" \ -X POST "https://api.buildkite.com/v2/analytics/organizations/{org.slug}/suites/{suite.slug}/tests/find" \ -H "Content-Type: application/json" \ -d '{ "scope": "User#email", "name": "is correctly formatted" }' ``` ```json { "id": "01867216-8478-7fde-a55a-0300f88bb49b", "url": "https://api.buildkite.com/v2/analytics/organizations/my_great_org/suites/my_suite_name/tests/01867216-8478-7fde-a55a-0300f88bb49b", "web_url": "https://buildkite.com/organizations/my_great_org/analytics/suites/my_suite_name/tests/01867216-8478-7fde-a55a-0300f88bb49b", "scope": "User#email", "name": "is correctly formatted", "location": "./spec/models/user_spec.rb:42", "file_name": "./spec/models/user_spec.rb" } ``` Required scope: `read_suites` Success response: `200 OK` ##### Add or remove labels from a test ```bash curl -H "Authorization: Bearer $TOKEN" \ -X PATCH "https://api.buildkite.com/v2/organizations/{org.slug}/suites/{suite.uuid}/tests/{test.id}/labels" \ -H "Content-Type: application/json" \ -d '{ "operator": "add", "labels": ["flaky", "slow"] }' ``` ```json { "file_name": "./spec/features/cool_spec.rb", "id": "ccd837ee-d484-8864-a6ee-29cfae965bd8", "labels": [ "flaky", "slow" ], "location": "./spec/features/cool_spec.rb:232", "name": "one plus one", "scope": "A fancy feature", "url": "https://api.buildkite.com/v2/analytics/organizations/acme-inc/suites/acme-suite/tests/ccd837ee-d484-8864-a6ee-29cfae965bd8", "web_url": "https://buildkite.com/organizations/acme-inc/analytics/suites/acme-suite/tests/ccd837ee-d484-8864-a6ee-29cfae965bd8" } ``` Required [request body properties](/docs/api#request-body-properties): | `operator` | The operation to apply to the labels - `"add"` or `"remove"`. | `labels` | The labels that will be added or removed. _Example:_ `["flaky"]`. 
Required scope: `write_suites` Success response: `200 OK` --- ### Overview URL: https://buildkite.com/docs/apis/graphql-api #### GraphQL API overview The Buildkite GraphQL API provides an alternative to the [REST API](/docs/apis/rest-api). It allows for more efficient retrieval of data by enabling you to fetch multiple, nested resources in a single request. For the list of existing disparities between the GraphQL API and the REST API, see [API differences](/docs/apis/api-differences). ##### Getting started The quickest way to get started with the GraphQL API is to try the [GraphQL console](https://buildkite.com/user/graphql/console) on Buildkite. Learn more about using GraphQL queries and mutations with the GraphQL console or command line in the [Using GraphQL from the console or the command line](/docs/apis/graphql/graphql-tutorial) tutorial. > 📘 Note for contributors to public and open-source projects > You need to be a member of the Buildkite organization to be able to generate and use an API token for it. ##### Endpoint The GraphQL API endpoint is `https://graphql.buildkite.com/v1`. All requests must be HTTP `POST` requests with `application/json` encoded bodies. ##### Authentication GraphQL requests must be authenticated using an [API access token](https://buildkite.com/user/api-access-tokens) with the **Enable GraphQL API Access** permission selected. Pass the token in your GraphQL request using the `Authorization` HTTP header with a value of `Bearer $TOKEN`. 
For example: ```bash curl -H "Authorization: Bearer $TOKEN" https://graphql.buildkite.com/v1 ``` Since the [scopes](/docs/apis/managing-api-tokens#token-scopes) of these API access tokens cannot be restricted, [Buildkite organization administrators](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions) can implement [portals](/docs/apis/graphql/portals), which instead provide restricted GraphQL API access to the Buildkite platform. ##### Performing requests with curl A GraphQL request is a standard HTTPS POST request, with a JSON-encoded body containing a `"query"` key, and optionally a `"variables"` key. For example, the following `curl` command returns the `name` property of the current `viewer`: ```bash curl https://graphql.buildkite.com/v1 \ -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d '{ "query": "{ viewer { user { name } } }", "variables": "{ }" }' ``` ```json { "data": { "viewer": { "user": { "name": "Jane Doe" } } } } ``` For documentation on the full list of fields and types, refer to the [**Documentation** tab of the GraphQL console](https://buildkite.com/user/graphql/documentation). ##### GraphQL IDs All node types have an `id` property, which is a global identifier for the node. You can find the GraphQL ID for any node by querying for the `id` property, for example: ```graphql query { organization(slug: "my-org") { id } } ``` ```json { "data": { "organization": { "id": "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw" } } } ``` A GraphQL ID can be used with the global `node` query to quickly return properties of a node, without having to query through nested layers of data. To return specific properties of the object, you'll need to specify the object's type using an [Inline Fragment](https://graphql.org/learn/queries/#inline-fragments). 
For example, the following query uses an organization's `id` to find the total number of pipelines in the organization: ```graphql query { node(id: "T3JnYW5pemF0aW9uLS0tYTk4OTYxYjctYWRjMS00MWFhLTg3MjYtY2ZiMmM0NmU0MmUw") { ... on Organization { pipelines { count } } } } ``` ```json { "data": { "node": { "pipelines": { "count": 42 } } } } ``` ##### Relay compatibility The Buildkite GraphQL API adheres to the [Relay specification](https://relay.dev/docs/guides/graphql-server-specification/), which defines standards for querying [paginated collections](https://relay.dev/docs/guides/graphql-server-specification/#connections) ("Connections" and "Edges") and for [identifying objects](https://relay.dev/docs/guides/graphql-server-specification/#object-identification) directly from the root of a query (avoiding long nested queries). ##### GraphQL schema If you need the GraphQL schema, you can get it from the API using [GraphQL introspection](https://graphql.org/learn/introspection/), by running the following query against the API: ```graphql query IntrospectionQuery { __schema { queryType { name description kind} mutationType { name description kind } subscriptionType { name description kind } types { name kind description ...FullType } directives { name description locations args { ...InputValue } } } } fragment FullType on __Type { fields(includeDeprecated: true) { name description args { ...InputValue } type { ...TypeRef } isDeprecated deprecationReason } inputFields { ...InputValue } interfaces { ...TypeRef } enumValues(includeDeprecated: true) { name description isDeprecated deprecationReason } possibleTypes { ...TypeRef } } fragment InputValue on __InputValue { name description type { ...TypeRef } defaultValue } fragment TypeRef on __Type { kind name description ofType { kind name description ofType { kind name description ofType { kind name description ofType { kind name description ofType { kind name description ofType { kind name description ofType { kind name 
description } } } } } } } } ``` ##### Learning more about GraphQL Further resources for learning more about GraphQL: - The [GraphQL API cookbook](/docs/apis/graphql/graphql-cookbook) page full of common queries and mutations. - The [Portals](/docs/apis/graphql/portals) page, where you can learn more about how to provide restricted access to Buildkite's GraphQL API. - The [**Learn** section](https://graphql.org/learn/) of [the official GraphQL website](https://graphql.org). - The [Getting started with GraphQL queries and mutations](https://buildkite.com/blog/getting-started-with-graphql-queries-and-mutations) blog post. --- ### Console and CLI tutorial URL: https://buildkite.com/docs/apis/graphql/graphql-tutorial #### Using GraphQL from the console or the command line [GraphQL](http://graphql.org) is a standard for defining, querying and documenting APIs in a human-friendly way, with built-in documentation, a friendly query language and a bunch of tools to help you get started. This guide shows you how to query the GraphQL API using the GraphQL console (see the [GraphQL overview](/docs/apis/graphql-api) page > [Getting started](/docs/apis/graphql-api#getting-started) section for more information) and from the command line. You'll first need a [Buildkite](https://buildkite.com/) user account, and for the command line, an [API access token](https://buildkite.com/user/api-access-tokens/new) for this user account with the **Enable GraphQL API Access** permission selected. ##### Running your first GraphQL request in the console The following is a GraphQL query that requests the name of the current user (the account attached to the API Access Token, in other words, you!) ```graphql query { viewer { user { name } } } ``` Running that in the GraphQL console returns: ```json { "data": { "viewer": { "user": { "name": "Sam Wright" } } } } ``` Notice how the structure of the data returned is similar to the structure of the query. 
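Because the response mirrors the query, reading a value back out is just a matter of following the same path through the parsed JSON. A quick Python illustration using the response above:

```python
import json

response_text = '{ "data": { "viewer": { "user": { "name": "Sam Wright" } } } }'
response = json.loads(response_text)

# The path data -> viewer -> user -> name matches the query's nesting exactly.
print(response["data"]["viewer"]["user"]["name"])  # Sam Wright
```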
##### Running your first GraphQL request on the command line To run the same query using [cURL](https://curl.haxx.se), replace `xxxxxxx` with your API Access Token: ```sh $ curl 'https://graphql.buildkite.com/v1' \ -H 'Authorization: Bearer xxxxxxx' \ -H "Content-Type: application/json" \ -d '{ "query": "query { viewer { user { name } } }" }' ``` which returns exactly the same as the query we ran in the console: ```json { "data": { "viewer": { "user": { "name": "Sam Wright" } } } } ``` ##### Getting collections of objects Getting the name of the current user is one thing, but what about a more complex query? The `builds` [field](https://buildkite.com/user/graphql/documentation/type/User) of the `user` returns a `BuildConnection`. A connection is a collection of objects, and requires some metadata called [`edges` and `nodes`](https://graphql.org/learn/pagination/#pagination-and-edges). In this query we're asking for the current user's most recently created build (get one build, starting from the first: `first: 1`). ```graphql query { viewer { user { name builds(first: 1) { edges { node { number branch message } } } } } } ``` which returns: ```json { "data": { "viewer": { "user": { "name": "Sam Wright", "builds": { "edges": [ { "node": { "number": 136, "branch": "main", "message": "Merge pull request #796 from buildkite/docs\n\nImprove API docs" } } ] } } } } } ``` --- ### Overview URL: https://buildkite.com/docs/apis/graphql/graphql-cookbook #### GraphQL API cookbook The GraphQL cookbook is a collection of recipes detailing how to do common tasks using the GraphQL API. Use them as a starting point when trying something new.
There are recipes for a range of different topics, including: - [Agents](/docs/apis/graphql/cookbooks/agents) - [Artifacts](/docs/apis/graphql/cookbooks/artifacts) - [Builds](/docs/apis/graphql/cookbooks/builds) - [Clusters](/docs/apis/graphql/cookbooks/clusters) - [GitHub rate limits](/docs/apis/graphql/cookbooks/github-rate-limits) - [Hosted agents](/docs/apis/graphql/cookbooks/hosted-agents) - [Jobs](/docs/apis/graphql/cookbooks/jobs) - [Organizations](/docs/apis/graphql/cookbooks/organizations) - [Pipelines](/docs/apis/graphql/cookbooks/pipelines) - [Pipeline templates](/docs/apis/graphql/cookbooks/pipeline-templates) - [Registries](/docs/apis/graphql/cookbooks/registries) - [Rules](/docs/apis/graphql/cookbooks/rules) - [Teams](/docs/apis/graphql/cookbooks/teams) --- ### Agents URL: https://buildkite.com/docs/apis/graphql/cookbooks/agents #### Agents A collection of common tasks with unclustered agents using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console). This includes built-in documentation under its **Documentation** tab. ##### Get a list of unclustered agent token IDs Get the first five unclustered agent token IDs for an organization. ```graphql query token { organization(slug: "organization-slug") { id name agentTokens(first: 5) { edges { node { id description } } } } } ``` ##### Search for unclustered agents in an organization ```graphql query SearchAgent { organization(slug:"organization-slug") { agents(first:500, search:"search-string") { edges { node { name hostname version } } } } } ``` ##### Revoke an unclustered agent token Revoking an unclustered agent token means no new agents can start using the token. It does not affect any connected agents. First, retrieve a list of agent token IDs using this query to obtain the required token ID.
```graphql query GetAgentTokenID { organization(slug: "organization-slug") { agentTokens(first:50) { edges { node { id uuid description } } } } } ``` Then, using this token ID, revoke the agent token: ```graphql mutation { agentTokenRevoke(input: { id: "token-id", reason: "A reason" }) { agentToken { description revokedAt revokedReason } } } ``` ##### Stop an agent First, get the agent's ID. Search for the agent in the organization where the `search-string` matches the agent name and retrieve the agent's ID. ```graphql query SearchAgent { organization(slug:"organization-slug") { agents(first:500, search:"search-string") { edges { node { name id } } } } } ``` Then, using the agent ID, stop the agent gracefully: ```graphql mutation { agentStop(input: { id: "QWdlbnQtLS0wMThkYWUyZi02NjRjLTQxYjgtOWE4Ny1mMGY5ODhkZWRhM2Q=", graceful: true }) { agent { id, connectionState } } } ``` --- ### Artifacts URL: https://buildkite.com/docs/apis/graphql/cookbooks/artifacts #### Artifacts A collection of common tasks with artifacts using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console). This includes built-in documentation under its **Documentation** tab. ##### List download URLs for artifacts from a build Get the download URLs for artifacts from a build. If the artifact is stored on Buildkite-managed artifact storage, the download URL will be valid for only 10 minutes. ```graphql query GetDownloadURLsForArtifactsFromBuild { build(uuid: "build-uuid") { jobs(first: 500) { edges { node { ... on JobTypeCommand { artifacts { edges { node { path downloadURL } } } } } } } } } ``` --- ### Builds URL: https://buildkite.com/docs/apis/graphql/cookbooks/builds #### Builds A collection of common tasks with builds using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console).
This includes built-in documentation under its **Documentation** tab. ##### Get build info by ID Get all the available info from a build while only having its UUID. ```graphql query GetBuilds { build(uuid: "a00000a-xxxx-xxxx-xxxx-a000000000a") { id number url } } ``` ##### Get environment variables set on a build Retrieve all of a job's environment variables for a given build. This is the equivalent of what you see in the **Environment** tab of each build. ```graphql query GetEnvVarsBuild { build(slug:"organization-slug/pipeline-slug/build-number") { message jobs(first: 10, state:FINISHED) { edges { node { ... on JobTypeCommand { label env } } } } } } ``` ##### Get builds for a pipeline Retrieve all of the builds for a given pipeline, including each build's ID, UUID, number, and URL. ```graphql query GetBuilds { pipeline(slug: "organization-slug/pipeline-slug") { builds(first: 10) { edges { node { id uuid number url } } } } } ``` ##### Get the creation date of the most recent build in every pipeline Get the creation date of the most recent build in every pipeline. Use pagination to handle large responses. Buildkite sorts builds by newest first. Get the first 500: ```graphql query { organization(slug: "organization-slug") { pipelines(first: 500) { count pageInfo { endCursor hasNextPage } edges { node { name slug builds(first: 1) { edges { node { createdAt } } } } } } } } ``` Then, if there are more than 500 results, use the value of `organization.pipelines.pageInfo.endCursor` to get the next page: ```graphql query { organization(slug: "organization-slug") { pipelines(first: 500, after: "value-from-organization.pipelines.pageInfo.endCursor") { count pageInfo { endCursor hasNextPage } edges { node { name slug builds(first: 1) { edges { node { createdAt } } } } } } } } ``` > 📘 Cursor pagination > Replace `value-from-organization.pipelines.pageInfo.endCursor` with the actual `endCursor` string returned from your previous query.
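The cursor loop described above can be sketched in a few lines of Python. Here `fetch_page` is a hypothetical callable standing in for an HTTP POST to `https://graphql.buildkite.com/v1`; the point is the `pageInfo` handling, which works the same for any Relay-style connection:

```python
def collect_all_pipelines(fetch_page):
    """Page through `organization.pipelines` using Relay-style cursors.

    `fetch_page(after)` is a hypothetical callable that runs the pipelines
    query with the given `after` cursor (None for the first page) and
    returns the decoded `organization.pipelines` object.
    """
    nodes, after = [], None
    while True:
        page = fetch_page(after)
        nodes.extend(edge["node"] for edge in page["edges"])
        info = page["pageInfo"]
        if not info["hasNextPage"]:
            return nodes
        after = info["endCursor"]  # feed the cursor into the next query

# Example with a fake two-page fetcher standing in for the real API:
pages = {
    None: {"edges": [{"node": {"slug": "a"}}],
           "pageInfo": {"hasNextPage": True, "endCursor": "c1"}},
    "c1": {"edges": [{"node": {"slug": "b"}}],
           "pageInfo": {"hasNextPage": False, "endCursor": "c2"}},
}
all_nodes = collect_all_pipelines(pages.__getitem__)
print([n["slug"] for n in all_nodes])  # ['a', 'b']
```

Stopping on `hasNextPage` (rather than on an empty page) saves one round trip per full scan.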
##### Get number of builds between two dates This query helps you understand how many job minutes you've used by looking at the number of builds. While not equivalent, there's a correlation between the number of builds and job minutes. So, looking at the number of builds in different periods gives you an idea of how the job minutes would compare in those periods. ```graphql query PipelineBuildCountForPeriod { pipeline(slug: "organization-slug/pipeline-slug") { builds( createdAtFrom:"YYYY-MM-DDTHH:mm:ss", createdAtTo:"YYYY-MM-DDTHH:mm:ss" ) { count edges{ node{ createdAt finishedAt id } } } } } ``` > 📘 Date format > In this example, both the `createdAtFrom` and `createdAtTo` fields within the `builds` sub-query of the `pipeline` query must be specified in [DateTime](/docs/apis/graphql/schemas/scalar/datetime) format, which is an ISO-8601 encoded UTC date string. ##### Get all builds with a certain state between two dates This query allows you to find all builds with the same state (for example, `RUNNING`) that were started within a certain time frame. For example, you could find all builds that started at a particular point and failed or are still running. ```graphql query { organization(slug: "organization-slug") { pipelines(first: 10) { edges { node { name slug builds( first: 10, createdAtFrom: "YYYY-MM-DDTHH:mm:ss", createdAtTo: "YYYY-MM-DDTHH:mm:ss", state: RUNNING ) { edges { node { id number message state url } } } } } } } } ``` > 📘 Date format > In this example, both the `createdAtFrom` and `createdAtTo` fields within the `builds` sub-query must be specified in [DateTime](/docs/apis/graphql/schemas/scalar/datetime) format, which is an ISO-8601 encoded UTC date string. ##### Count the number of builds on a branch Count how many builds a pipeline has done for a given repository branch.
```graphql query PipelineBuildCountForBranchQuery { pipeline(slug:"organization-slug/pipeline-slug") { builds(branch:"branch-name") { count } } } ``` You can limit the results to a certain timeframe using `createdAtFrom` or `createdAtTo`. ```graphql query PipelineBuildCountForBranchQuery { pipeline(slug:"organization-slug/pipeline-slug") { builds( branch:"branch-name", createdAtTo:"YYYY-MM-DDTHH:mm:ss" ) { count } } } ``` > 📘 Date format > In this example, the `createdAtTo` field within the `builds` sub-query of the `pipeline` query must be specified in [DateTime](/docs/apis/graphql/schemas/scalar/datetime) format, which is an ISO-8601 encoded UTC date string. ##### Increase the next build number Set the number for the next build to run in this pipeline. First, get the pipeline ID: ```graphql query PipelineId { pipeline(slug: "organization-slug/pipeline-slug") { id } } ``` Then mutate the next build number. In this example, we set `nextBuildNumber` to 300: ```graphql mutation PipelineUpdate { pipelineUpdate(input: { id: "pipeline-id", nextBuildNumber: 300 }) { pipeline { name nextBuildNumber } } } ``` ##### Get the total build run time To get the total run time for a build, you can use the following query. ```graphql query GetTotalBuildRunTime { build(slug: "organization-slug/pipeline-slug/build-number") { pipeline { name } url startedAt finishedAt } } ``` ##### Create a build on a pipeline Create a build programmatically. First, get the ID for the pipeline to create a build for: ```graphql query GetPipelineID { organization(slug: "organization-slug") { pipelines(first: 50, search: "part of slug") { edges { node { slug id } } } } } ``` Then, create the build: ```graphql mutation createBuild { buildCreate( input: { commit: "commit-hash" branch: "branch-name" pipelineID: "pipeline-id" } ) { build { number } } } ``` ##### Get the webhook payload of a build This query allows you to fetch the webhook payload of a specific build using its UUID. The payload is only available for 7 days.
```graphql query GetWebhookPayLoad { build(uuid:"build-uuid") { source { ... on BuildSourceWebhook { headers payload } } } } ``` --- ### Clusters URL: https://buildkite.com/docs/apis/graphql/cookbooks/clusters #### Clusters A collection of common tasks with clusters using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console). This includes built-in documentation under its **Documentation** tab. ##### List clusters Get the first 10 clusters and their information for an organization: ```graphql query getClusters { organization(slug: "organization-slug") { clusters(first: 10) { edges { node { id uuid color description } } } } } ``` ##### List queues Get the first 10 cluster queues for a particular cluster, specifying the cluster's UUID as the `id` argument of the `cluster` query: ```graphql query getQueues { organization(slug: "organization-slug") { cluster(id: "cluster-uuid") { queues(first: 10) { edges { node { id uuid key description } } } } } } ``` ##### List agent tokens Get the first 10 agent tokens for a particular cluster, specifying the cluster's UUID as the `id` argument of the `cluster` query: ```graphql query getAgentTokens { organization(slug: "organization-slug") { cluster(id: "cluster-uuid") { agentTokens(first: 10) { edges { node { id uuid description allowedIpAddresses } } } } } } ``` > 🚧 Cluster `token` field deprecation > The `token` field of the [ClusterToken](/docs/apis/graphql/schemas/object/clustertoken) object has been deprecated to improve security. Please use the `tokenValue` field from the [ClusterAgentTokenCreatePayload](/docs/apis/graphql/schemas/object/clusteragenttokencreatepayload) object instead after creating a token. ##### Create agent token with an expiration date Create an agent token with an expiration date. The expiration date is displayed in the Buildkite interface and cannot be changed using another Buildkite API call.
```graphql mutation createToken { clusterAgentTokenCreate(input: { organizationId: "organization-id", description: "A token with an expiration date", clusterId: "cluster-id", expiresAt: "2026-01-01T00:00:00Z" }) { tokenValue } } ``` ##### Revoke an agent token First, get the agent token's ID from your [list of agent tokens](#list-agent-tokens), followed by your [Buildkite organization's ID](/docs/apis/graphql/cookbooks/organizations#get-organization-id). Then, use these ID values to revoke the agent token: ```graphql mutation revokeClusterAgentToken { clusterAgentTokenRevoke(input: { id: "agent-token-id" organizationId: "organization-id" }) { clientMutationId deletedClusterAgentTokenId } } ``` ##### Create a self-hosted queue Create a new _self-hosted queue_ in a cluster. A self-hosted queue is a queue for agents that you host yourself. ```graphql mutation { clusterQueueCreate(input: { organizationId: "organization-id", clusterId: "cluster-id", key: "default", description: "The default queue for this cluster." }) { clusterQueue { id uuid key description hosted createdBy { id uuid name } cluster { id uuid name } } } } ``` ##### Create a Buildkite hosted queue Learn more about how to create a Buildkite hosted queue in [Create a Buildkite hosted queue](/docs/apis/graphql/cookbooks/hosted-agents#create-a-buildkite-hosted-queue) of the [Hosted agents](/docs/apis/graphql/cookbooks/hosted-agents) page of this cookbook. ##### Update a queue Update an existing queue.
```graphql mutation { clusterQueueUpdate(input: { organizationId: "organization-id", id: "cluster-queue-id", description: "The default queue for this cluster, but this time with a modified description." }) { clusterQueue { id uuid key description hosted createdBy { id uuid name } cluster { id uuid name } } } } ``` Learn more about how to update a Buildkite hosted queue's instance shape in [Change the instance shape of a Buildkite hosted queue's agents](/docs/apis/graphql/cookbooks/hosted-agents#change-the-instance-shape-of-a-buildkite-hosted-queues-agents) of the [Hosted agents](/docs/apis/graphql/cookbooks/hosted-agents) page of this cookbook. ##### Delete a queue Delete an existing queue using the queue's ID. ```graphql mutation { clusterQueueDelete(input: { organizationId: "organization-id", id: "queue-id" }) { deletedClusterQueueId } } ``` ##### List jobs in a particular queue To get jobs within a particular queue of a cluster, use the `clusterQueue` argument of the `jobs` query, passing in the ID of the queue to filter jobs from: ```graphql query getQueueJobs { organization(slug: "organization-slug") { jobs(first: 10, clusterQueue: "cluster-queue-id") { edges { node { ... on JobTypeCommand { id state label url build { number } pipeline { name } } } } } } } ``` To obtain jobs in specific states within a particular queue of a cluster, specify the queue's ID with the `clusterQueue` argument and one or more [JobStates](/docs/apis/graphql/schemas/enum/jobstates) with the `state` argument in the `jobs` query: ```graphql query getQueueJobsByJobState { organization(slug: "organization-slug") { jobs( first: 10, clusterQueue: "cluster-queue-id", state: [WAITING, BLOCKED] ){ edges { node { ...
on JobTypeCommand { id state label url build { number } pipeline { name } } } } } } } ``` ##### List agents in a cluster To get the first 10 agents within a cluster, use the `cluster` argument of the `agents` query, passing in the ID of the cluster: ```graphql query getClusterAgents { organization(slug:"organization-slug") { agents(first: 10, cluster: "cluster-id") { edges { node { name hostname version clusterQueue { uuid id } } } } } } ``` ##### List agents in a queue To get the first 10 agents in a particular queue of a cluster, specify the `clusterQueue` argument of the `agents` query, passing in the ID of the cluster queue: ```graphql query getQueueAgents { organization(slug:"organization-slug") { agents(first: 10, clusterQueue: "cluster-queue-id") { edges { node { name hostname version id clusterQueue { id uuid } } } } } } ``` ##### Associate a pipeline with a cluster First, [get the cluster ID](#list-clusters) you want to associate the pipeline with. Second, [get the pipeline's ID](/docs/apis/graphql/cookbooks/pipelines#get-a-pipelines-id). Then, use the IDs to associate the pipeline with the cluster: ```graphql mutation AssociatePipelineWithCluster { pipelineUpdate(input:{id: "pipeline-id" clusterId: "cluster-id"}) { pipeline { cluster { name id } } } } ``` --- ### GitHub rate limits URL: https://buildkite.com/docs/apis/graphql/cookbooks/github-rate-limits #### GitHub rate limits A collection of common tasks with GitHub rate limits using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console). This includes built-in documentation under its **Documentation** tab. ##### List GitHub repository providers rate limits Get all repository providers and their GitHub rate limits if applicable.
These are the rate limits GitHub imposes on the [Buildkite app for GitHub](/docs/pipelines/source-control/github#connect-your-buildkite-account-to-github-using-the-github-app), based on [GitHub's rate limits for their REST API](https://docs.github.com/en/rest/using-the-rest-api/rate-limits-for-the-rest-api?apiVersion=2022-11-28). ```graphql query getLimits { organization(slug: "organization-slug") { repositoryProviders { name ... on OrganizationRepositoryProviderGitHub { id name rateLimit { mostRecent { limit used remaining resetAt } } } } } } ``` ##### Show a single repository provider's rate limits You can query a single repository provider's GitHub rate limit using the [OrganizationRepositoryProviderGitHub](/docs/apis/graphql/schemas/object/organizationrepositoryprovidergithub) [GraphQL ID](/docs/apis/graphql-api#graphql-ids) from the `getLimits` query [above](#list-github-repository-providers-rate-limits). ```graphql query getLimit { node( id: "U0NMU2VrxmljZS0tLT70NWE3Y9QyLWMzYzctQGZkZS1hmGE3LWFmIWVmMmA5ZmP4Ng==" ) { ... on OrganizationRepositoryProviderGitHub { name rateLimit { mostRecent { limit used remaining resetAt } } } } } ``` --- ### Hosted agents URL: https://buildkite.com/docs/apis/graphql/cookbooks/hosted-agents #### Hosted agents A collection of common tasks with [Hosted agents](/docs/agent/buildkite-hosted) using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console). This includes built-in documentation under its **Documentation** tab. ##### Create a Buildkite hosted queue Create a new _Buildkite hosted queue_ in a cluster, which is a queue created for Buildkite hosted agents. ```graphql mutation { clusterQueueCreate( input: { organizationId: "organization-id" clusterId: "cluster-id" key: "hosted_linux_small" description: "Small AMD64 Linux agents hosted by Buildkite."
hostedAgents: { instanceShape: LINUX_AMD64_2X4 } } ) { clusterQueue { id uuid key description dispatchPaused hosted hostedAgents { instanceShape { name size vcpu memory } } createdBy { id uuid name email avatar { url } } } } } ``` This mutation creates a small Buildkite hosted queue using AMD64-based Linux Buildkite hosted agents. The `instanceShape` value is referenced from the [InstanceShape](/docs/apis/graphql/schemas/enum/hostedagentinstanceshapename) enum, and represents the combination of machine type, architecture, CPU, and memory available to each job running on a hosted queue. The `LINUX_AMD64_2X4` value is a Linux AMD64 2 vCPU and 4 GB memory instance. Learn more about the instance shapes available for [Linux](#instance-shape-values-for-linux) and [macOS](#instance-shape-values-for-macos) Buildkite hosted agents. ##### Change the instance shape of a Buildkite hosted queue's agents ```graphql mutation { clusterQueueUpdate( input: { organizationId: "organization-id" id: "cluster-queue-id" hostedAgents: { instanceShape: LINUX_AMD64_4X16 } } ) { clusterQueue { id hostedAgents { instanceShape { name size vcpu memory } } } } } ``` To increase the size of the AMD64-based Linux agent instances for a Buildkite hosted queue, update the `instanceShape` value to one of a greater size, such as `LINUX_AMD64_4X16`, which has 4 vCPU and 16 GB of memory. This allows you to scale the resources available to each job running on this Buildkite hosted queue. Learn more about the instance shapes available for [Linux](#instance-shape-values-for-linux) and [macOS](#instance-shape-values-for-macos) Buildkite hosted agents. > 📘 > It is only possible to change the _size_ of the current instance shape assigned to this queue. It is not possible to change the current instance shape's machine type (from macOS to Linux, or vice versa), or for a Linux machine, its architecture (from AMD64 to ARM64, or vice versa).
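The enum names encode their own specs: as noted above, `LINUX_AMD64_2X4` means 2 vCPU and 4 GB of memory. A small illustrative parser — our own sketch, not part of any Buildkite tooling — makes the naming convention explicit:

```python
def parse_instance_shape(shape: str) -> dict:
    """Split an instance shape enum name like LINUX_AMD64_4X16 into parts.

    Follows the documented naming convention: OS, architecture, then
    "<vCPU>X<memory in GB>". (macOS shapes add a chip segment, e.g. M4,
    which this sketch simply skips over.)
    """
    parts = shape.split("_")
    vcpu, _, memory = parts[-1].partition("X")
    return {
        "os": parts[0],
        "arch": parts[1],
        "vcpu": int(vcpu),
        "memory_gb": int(memory),
    }

print(parse_instance_shape("LINUX_AMD64_4X16"))
# {'os': 'LINUX', 'arch': 'AMD64', 'vcpu': 4, 'memory_gb': 16}
```

Treat the enum (not this parser) as the source of truth; the tables below list the full set of valid values, including disk space, which the name does not encode.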
##### Set a custom image URL for a Buildkite hosted queue > 📘 Private preview feature > The custom image URL feature is currently in _private preview_. To enable this feature for your Buildkite organization, contact support@buildkite.com. Learn more about [custom image URLs](/docs/agent/buildkite-hosted/linux/custom-base-images#use-an-agent-image-specify-a-custom-image-for-a-queue). You can configure a Buildkite hosted queue to use a custom image URL. When set, this overrides the agent image selected through the Buildkite interface. ```graphql mutation { clusterQueueUpdate( input: { organizationId: "organization-id" id: "cluster-queue-id" hostedAgents: { agentImageRef: "my-custom-image:latest" } } ) { clusterQueue { id hostedAgents { instanceShape { name size vcpu memory } agentImageRef } } } } ``` The `agentImageRef` value is a URL or reference to a custom image. The image must be publicly available or pushed to the [internal container registry](/docs/pipelines/hosted-agents/internal-container-registry). > 📘 > Only one of `agentImageRef` or `platformSettings.linux.agentImageRef` can be provided in a single mutation. Providing both results in a validation error. ##### Instance shape values for Linux Specify the appropriate **Instance shape** for the `instanceShape` value in your GraphQL API mutation. 
| Instance shape | Size | Architecture | vCPU | Memory | Disk space |
| --- | --- | --- | --- | --- | --- |
| `LINUX_AMD64_2X4` | Small | AMD64 | 2 | 4 GB | 47 GB |
| `LINUX_AMD64_4X16` | Medium | AMD64 | 4 | 16 GB | 95 GB |
| `LINUX_AMD64_8X32` | Large | AMD64 | 8 | 32 GB | 158 GB |
| `LINUX_AMD64_16X64` | Extra Large | AMD64 | 16 | 64 GB | 284 GB |
| `LINUX_ARM64_2X4` | Small | ARM64 | 2 | 4 GB | 47 GB |
| `LINUX_ARM64_4X16` | Medium | ARM64 | 4 | 16 GB | 95 GB |
| `LINUX_ARM64_8X32` | Large | ARM64 | 8 | 32 GB | 158 GB |
| `LINUX_ARM64_16X64` | Extra Large | ARM64 | 16 | 64 GB | 284 GB |

##### Instance shape values for macOS Specify the appropriate **Instance shape** for the `instanceShape` value in your GraphQL API mutation.

| Instance shape | Size | vCPU | Memory | Disk space |
| --- | --- | --- | --- | --- |
| `MACOS_ARM64_M4_6X28` | Medium | 6 | 28 GB | 182 GB |
| `MACOS_ARM64_M4_12X56` | Large | 12 | 56 GB | 294 GB |

**Note:** Shapes `MACOS_M2_4X7`, `MACOS_M2_6X14`, `MACOS_M2_12X28`, `MACOS_M4_12X56` were deprecated and removed on July 1, 2025. --- ### Jobs URL: https://buildkite.com/docs/apis/graphql/cookbooks/jobs #### Jobs A collection of common tasks with jobs using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console). This includes built-in documentation under its **Documentation** tab. ##### Get all jobs in a given queue for a given timeframe Get all jobs in a named queue, created on or after a given date. If you want to get all jobs across your Buildkite organization, you do not need to set a queue name, and you can therefore omit the `agentQueryRules` option. ```graphql query PipelineRecentBuildLastJobQueue { organization(slug: "organization-slug") { pipelines(first: 500) { edges { node { slug builds(first: 1) { edges { node { number jobs(state: FINISHED, first: 1, agentQueryRules: "queue=queue-name") { edges { node { ...
on JobTypeCommand { uuid agentQueryRules createdAt } } } } } } } } } pageInfo { hasNextPage endCursor } } } } ``` ##### Get all jobs in a particular concurrency group To see which jobs are waiting in a particular concurrency group, you can use the following query. ```graphql query getConcurrency { organization(slug: "organization-slug") { jobs(first:100,concurrency:{group:"name"}, type:[COMMAND], state:[LIMITED,WAITING,ASSIGNED]) { edges { node { ... on JobTypeCommand { url createdAt } } } } } } ``` ###### Handling 504 errors When attempting to get all jobs in a particular concurrency group throughout your Buildkite organization, you might receive a 504 error in the response, which could result from your specific query being too resource-intensive for the Buildkite GraphQL API to resolve. In such circumstances, restrict the query to a specific pipeline, using its slug. ```graphql query getConcurrency { pipeline(slug: "organization-slug/pipeline-slug") { jobs(first:100,concurrency:{group:"name"}, type:[COMMAND], state:[LIMITED,WAITING,ASSIGNED]) { edges { node { ... on JobTypeCommand { url createdAt } } } } } } ``` ##### Get the last job of an agent Get the last job of an agent (or `null` if there is none). You will need to know the UUID of the agent. ```graphql query AgentJobs { agent(slug: "organization-slug/agent-UUID") { jobs(first: 10) { edges { node { ... on JobTypeCommand { state build { state } } } } } } } ``` ##### Get the job run time per build To get the run time of each job in a build, you can use the following query. ```graphql query GetJobRunTimeByBuild { build(slug: "organization-slug/pipeline-slug/build-number") { jobs(first: 1) { edges { node { ... on JobTypeCommand { startedAt finishedAt } } } } } } ``` ##### Get a job's UUID To get UUIDs of the jobs in a build, you can use the following query. ```graphql query GetJobsUUID { build(slug: "org-slug/pipeline-slug/build-number") { jobs(first: 1) { edges { node { ...
on JobTypeCommand { uuid } } } } } } ``` ##### Get info about a job by its UUID Get info about a job using the job's UUID only. ```graphql query GetJob { job(uuid: "a00000a-xxxx-xxxx-xxxx-a000000000a") { ... on JobTypeCommand { id uuid createdAt scheduledAt finishedAt pipeline{ name } build{ id number pipeline{ name } } } } } ``` ##### Cancel a job If you need to cancel a job, you can use the following call with the job's ID: ```graphql mutation CancelJob { jobTypeCommandCancel(input: { id: "job-id" }) { jobTypeCommand { id } } } ``` ##### Get retry information for a job Get information about how a job was retried (`retryType`), who retried the job (`retriedBy`), and which job was the source of the retry (`uuid`). `retriedBy` will be `null` if the `retryType` is `AUTOMATIC`. ```graphql query GetJobRetryInformation { job(uuid: "job-uuid") { ... on JobTypeCommand { retrySource { ... on JobInterface { uuid retried retryType retriedBy { email name } } } } } } ``` --- ### Pipelines URL: https://buildkite.com/docs/apis/graphql/cookbooks/pipelines #### Pipelines A collection of common tasks with pipelines using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console). This includes built-in documentation under its **Documentation** tab. ##### Create a pipeline Create a pipeline programmatically. First, get the organization ID, team ID, and cluster ID (`uuid`) values: ```graphql query getOrganizationTeamAndClusterIds { organization(slug: "organization-slug") { id teams(first:500) { edges { node { id slug } } } clusters(first: 10) { edges { node { name uuid color description } } } } } ``` The relevant cluster's `uuid` value is the `cluster-id` value used in the next step.
Then, create the pipeline: ```graphql mutation createPipeline { pipelineCreate(input: { organizationId: "organization-id" name: "pipeline-name" repository: {url: "repo-url"} clusterId: "cluster-id" steps: { yaml: "steps:\n - command: \"buildkite-agent pipeline upload\"" } teams: { id: "team-id" } }) { pipeline { id name teams(first: 10) { edges { node { id } } } } } } ``` > 📘 When setting pipeline steps using the API, you must pass in a string that Buildkite parses as valid YAML, escaping quotes and line breaks. > To avoid writing an entire YAML file in a single string, you can place a `pipeline.yml` file in a `.buildkite` directory at the root of your repo, and use the `pipeline upload` command in your pipeline steps to tell Buildkite where to find it. This means you only need the following: > `steps: { yaml: "steps:\n - command: \"buildkite-agent pipeline upload\"" }` ###### Deriving a pipeline slug from the pipeline's name Pipeline slugs are derived from the pipeline name you provide when the pipeline is created (unless you use the optional `slug` parameter to specify a custom slug). This derivation process involves converting all space characters (including consecutive ones) in the pipeline's name to single hyphen `-` characters, and all uppercase characters to their lowercase counterparts. Therefore, pipeline names of either `Hello there friend` or `Hello    There Friend` are converted to the slug `hello-there-friend`. The maximum permitted length for a pipeline slug is 100 characters. > 📘 > The derived slug must match the following regular expression: > `/\A[a-zA-Z0-9]+[a-zA-Z0-9\-]*\z/` Any attempt to create a new pipeline with a name that matches an existing pipeline's name results in an error. ##### Get a list of recently created pipelines Get a list of the 500 most recently created pipelines.
```graphql query RecentPipelineSlugs { organization(slug: "organization-slug") { pipelines(first: 500) { edges { node { slug } } } } } ``` ##### Get a list of pipelines and their respective repository Get a list of the first 100 most recently created pipelines along with the URL of each pipeline's configured repository. ```graphql query GetPipelinesRepositories { organization(slug: "organization-slug") { pipelines(first: 100) { edges { node { name repository { url } } } } } } ``` ##### Get a pipeline's ID Get a pipeline's ID which can be used in other queries. ```graphql query { pipeline(slug:"organization-slug/pipeline-slug") { id } } ``` ##### Get a pipeline's UUID Get a pipeline's UUID by searching for it in the API. The search term can match part of a pipeline slug. > 📘 > While you can change a pipeline's name, and therefore slug, over time, the pipeline's UUID is permanent. Use the UUID when you need a way to reference a pipeline whose name might change. ```graphql query GetPipelineUUID { organization(slug: "organization-slug") { pipelines(first: 50, search: "part of slug") { edges { node { slug uuid } } } } } ``` ##### Get a pipeline's information You can get specific pipeline information for each of your pipelines. You can retrieve information for each build, job, and any other information listed on the [Pipeline object](/docs/apis/graphql/schemas/object/pipeline) page. ```graphql query GetPipelineInfo { pipeline(uuid: "pipeline-uuid") { slug uuid builds(first:50){ edges { node { state message } } } } } ``` ##### Get pipeline metrics The **Pipelines** page in Buildkite shows speed, reliability, and builds per week for each pipeline. You can also access this information through the API. ```graphql query AllPipelineMetrics { organization(slug: "organization-slug") { name pipelines(first: 50) { edges { node { name metrics { edges { node { label value } } } } } } } } ``` ##### Delete a pipeline First, [get the ID of the pipeline](#get-a-pipelines-id) you want to delete.
Then, use the ID to delete the pipeline: ```graphql mutation PipelineDelete { pipelineDelete(input: { id: "pipeline-id" }) { deletedPipelineID } } ``` ###### Delete multiple pipelines First, [get the IDs of the pipelines](#get-a-pipelines-id) you want to delete. Then, use the IDs to delete multiple pipelines: ```graphql mutation PipelinesDelete { pipeline1: pipelineDelete(input: { id: "pipeline1-id" }) { deletedPipelineID } pipeline2: pipelineDelete(input: { id: "pipeline2-id" }) { deletedPipelineID } } ``` ##### Update pipeline schedule with multiple environment variables You can set multiple environment variables on a pipeline schedule by using the new-line value `\n` as a delimiter. ```graphql mutation UpdateSchedule { pipelineScheduleUpdate(input:{ id: "schedule-id" env: "FOO=bar\nBAR=foo" }) { pipelineSchedule { id env } } } ``` ##### Get a list of all webhook URLs Get a list of all the webhook URLs associated with the 500 most recently created pipelines. ```graphql query GetPipelineWebhooks { organization(slug: "organization-slug") { pipelines(first: 500) { edges { node { slug webhookURL } } } } } ``` ##### Archive a pipeline First, [get the ID of the pipeline](#get-a-pipelines-id) you want to archive. Then, use the ID to archive the pipeline: ```graphql mutation PipelineArchive { pipelineArchive(input: { id: "pipeline-id" }) { pipeline { id name } } } ``` ###### Archive multiple pipelines First, [get the IDs of the pipelines](#get-a-pipelines-id) you want to archive. Then, use the IDs to archive the pipelines: ```graphql mutation PipelinesArchive { pipeline1: pipelineArchive(input: { id: "pipeline1-id" }) { pipeline { id name } } pipeline2: pipelineArchive(input: { id: "pipeline2-id" }) { pipeline { id name } } } ``` ##### Unarchive a pipeline First, [get the ID of the pipeline](#get-a-pipelines-id) you want to unarchive. 
Then, use the ID to unarchive the pipeline: ```graphql mutation PipelineUnarchive { pipelineUnarchive(input: { id: "pipeline-id" }) { pipeline { id name } } } ``` ###### Unarchive multiple pipelines The process for unarchiving multiple pipelines is similar to that for [archiving multiple pipelines](#archive-a-pipeline-archive-multiple-pipelines). However, use the field `pipelineUnarchive` (in `pipeline1: pipelineUnarchive(input: { ... })`, etc.) instead of `pipelineArchive`. --- ### Pipeline templates URL: https://buildkite.com/docs/apis/graphql/cookbooks/pipeline-templates #### Pipeline templates A collection of common tasks with pipeline templates using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console). This includes built-in documentation under its **Documentation** tab. ##### List pipeline templates Get the first 10 pipeline templates and their information for an organization: ```graphql query GetPipelineTemplates { organization(slug: "organization-slug") { pipelineTemplates(first: 10) { edges { node { id uuid name description configuration available } } } } } ``` ##### Get a pipeline template Get information on a pipeline template, specifying the pipeline template's UUID as the `uuid` argument of the `pipelineTemplate` query: ```graphql query GetPipelineTemplate { pipelineTemplate(uuid: "pipeline-template-uuid") { id uuid name description configuration available } } ``` ##### Create a pipeline template Create a pipeline template for an organization using the `pipelineTemplateCreate` mutation: ```graphql mutation CreatePipelineTemplate { pipelineTemplateCreate(input: { organizationId: "organization-id", name: "template name", description: "it does a thing", configuration: "steps:\n - command: deploy.sh", available: false }) { pipelineTemplate { id uuid name description configuration available } } } ``` ##### Update a pipeline template Update a pipeline template on an
organization using the `pipelineTemplateUpdate` mutation, specifying the IDs of the organization and pipeline template: ```graphql mutation UpdatePipelineTemplate { pipelineTemplateUpdate(input: { organizationId: "organization-id", id: "pipeline-template-id", configuration: "steps:\n - command: updated_steps.sh", available: true }) { pipelineTemplate { id uuid name description configuration available } } } ``` ##### Delete a pipeline template Delete a pipeline template using the `pipelineTemplateDelete` mutation, specifying the IDs of the organization and pipeline template: ```graphql mutation DeletePipelineTemplate { pipelineTemplateDelete(input: { organizationId: "organization-id", id: "pipeline-template-id" }) { deletedPipelineTemplateId } } ``` ##### Assign a template to a pipeline Admins and users with permission to manage pipelines can assign a pipeline template to a pipeline using the `pipelineUpdate` mutation: ```graphql mutation AssignPipelineTemplate { pipelineUpdate(input: { id: "pipeline-id" pipelineTemplateId: "pipeline-template-id" }) { pipeline { id name pipelineTemplate { id } } } } ``` ##### Remove a template from a pipeline Admins and users with permission to manage pipelines can remove a template from a pipeline by specifying `pipelineTemplateId` as `null` in the mutation input: ```graphql mutation UnassignPipelineTemplate { pipelineUpdate(input: { id: "pipeline-id" pipelineTemplateId: null }) { pipeline { id name pipelineTemplate { id } } } } ``` --- ### Registries URL: https://buildkite.com/docs/apis/graphql/cookbooks/registries #### Registries A collection of common tasks with package registries using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console). This includes built-in documentation under its **Documentation** tab. ##### List organization registries List the first 50 registries in the organization.
```graphql query getOrganizationRegistries { organization(slug: "organization-slug"){ registries(first: 50){ edges{ node{ name id uuid createdAt updatedAt } } } } } ``` --- ### Rules URL: https://buildkite.com/docs/apis/graphql/cookbooks/rules #### Rules A collection of common tasks with rules using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console). This includes built-in documentation under its **Documentation** tab. ##### List rules Get the first 10 rules and their information for an organization. ```graphql query getRules { organization(slug: "organization-slug") { rules(first: 10) { edges { node { id type targetType sourceType source { ... on Pipeline { slug } } target { ... on Pipeline { slug } } effect action createdBy { id name } } } } } } ``` > 📘 Rule access for organization members > Organization members can obtain rule data using the `rules` query above, as long as the user has at least **Read Only** access to both the source _and_ target pipelines. Learn more about this in [Pipeline-level permissions](/docs/pipelines/security/permissions#manage-teams-and-permissions-pipeline-level-permissions). > A user typically gains **Read Only** permission to access pipelines if the user is associated with one or more [teams](/docs/platform/team-management/permissions#manage-teams-and-permissions) that the source and target pipelines (with at least the **Read Only** permission) are also associated with. > Learn more about associating pipelines with teams in [Team-level permissions](/docs/platform/team-management/permissions#manage-teams-and-permissions-team-level-permissions). ##### Get a rule Get the details of a specific rule by using its `id` in a `node` query.
The `id` of a rule can be obtained: - From the **Rules** section of your **Organization Settings** page, accessed by selecting **Settings** in the global navigation of your organization in Buildkite. Then, expand the existing rule and copy its **GraphQL ID** value. - By running a [List rules GraphQL API query](/docs/apis/graphql/cookbooks/rules#list-rules) to obtain the rule's `id` in the response. ```graphql query getRule { node(id: "rule-id") { id type targetType sourceType source { ... on Pipeline { slug } } target { ... on Pipeline { slug } } effect action createdBy { id name } } } ``` > 📘 Rule access for organization members > Organization members can obtain rule data using the `node` query above, as long as the user has at least **Read Only** access to both the source _and_ target pipelines. Learn more about this in [Pipeline-level permissions](/docs/pipelines/security/permissions#manage-teams-and-permissions-pipeline-level-permissions). > A user typically gains **Read Only** permission to access pipelines if the user is associated with one or more [teams](/docs/platform/team-management/permissions#manage-teams-and-permissions) that the source and target pipelines (with at least the **Read Only** permission) are also associated with. > Learn more about associating pipelines with teams in [Team-level permissions](/docs/platform/team-management/permissions#manage-teams-and-permissions-team-level-permissions). ##### Create a rule Create a rule. The value of the `value` field must be a JSON-encoded string. ```graphql mutation { ruleCreate(input: { organizationId: "organization-id", type: "pipeline.trigger_build.pipeline", description: "A short description for your rule", value: "{\"source_pipeline\":\"pipeline-uuid-or-slug\",\"target_pipeline\":\"pipeline-uuid-or-slug\",\"conditions\":[\"condition-1\",\"condition-2\"]}" }) { rule { id type description targetType sourceType source { ... on Pipeline { uuid } } target { ...
on Pipeline { uuid } } effect action createdBy { id name } } } } ``` ##### Edit a rule Edit a rule. The value of the `value` field must be a JSON-encoded string. ```graphql mutation { ruleUpdate(input: { organizationId: "organization-id", id: "rule-id", description: "An optional, new short description for your rule", value: "{\"source_pipeline\":\"pipeline-uuid-or-slug\",\"target_pipeline\":\"pipeline-uuid-or-slug\",\"conditions\":[\"condition-1\",\"condition-2\"]}" }) { rule { id type description targetType sourceType source { ... on Pipeline { uuid } } target { ... on Pipeline { uuid } } effect action createdBy { id name } } } } ``` ##### Delete a rule Delete a rule: ```graphql mutation { ruleDelete(input: { organizationId: "organization-id", id: "rule-id" }) { deletedRuleId } } ``` --- ### Organizations URL: https://buildkite.com/docs/apis/graphql/cookbooks/organizations #### Organizations A collection of common tasks with Buildkite organizations using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console). This includes built-in documentation under its **Documentation** tab. ##### Get organization ID Knowing the ID of a Buildkite organization is a prerequisite for running many other GraphQL queries. Use this query to get the ID of an organization based on the organization's slug. ```graphql query getOrganizationID { organization(slug:"organization-slug") { id } } ``` ##### List organization members List the first 100 members in the organization. ```graphql query getOrgMembers { organization(slug: "organization-slug") { members(first: 100) { edges { node { role user { name email id } } } } } } ``` ##### Get the number of organization members Get the total number of members in the organization. Regardless of the value you enter for `members` in the query, the output of the query will provide the actual number of members in the organization. 
```graphql query getOrgMembersCount { organization(slug: "org-slug") { members(first:1) { count } } } ``` ##### Search for organization members Look up organization members using their email address. ```graphql query getOrgMember { organization(slug: "organization-slug") { members(first: 1, search: "user-email") { edges { node { role user { name email id } } } } } } ``` ##### Find inactive organization members List organization members who haven't been active since a specific date. ```graphql query getInactiveOrgMembers { organization(slug: "organization-slug") { members(first: 100, inactiveSince: "2025-01-01") { count edges { node { id lastSeenAt user { name email } } } } } } ``` ##### Get the most recent SSO sign-in for all users Use this to get the last sign-in date for users in your organization, if your organization has SSO enabled. ```graphql query getRecentSignOn { organization(slug: "organization-slug") { members(first: 100) { edges { node { user { name email } sso { authorizations(first: 1) { edges { node { createdAt expiredAt } } } } } } } } } ``` ##### Update the default SSO provider session duration You can control how long a session can last before the user must re-authenticate with your SSO provider. By default, sessions last indefinitely, but you can reduce this to hours or days. ```graphql mutation UpdateSessionDuration { ssoProviderUpdate(input: { id: "ID", sessionDurationInHours: 2 }) { ssoProvider { sessionDurationInHours } } } ``` ##### Update inactive API token revocation On the [Enterprise](https://buildkite.com/pricing/) plan, you can control when inactive API tokens are revoked. By default, they are never (`NEVER`) revoked, but you can set your token revocation to either 30, 60, 90, 180, or 365 days.
```graphql mutation UpdateRevokeInactiveTokenPeriod { organizationRevokeInactiveTokensAfterUpdate(input: { organizationId: "organization-id", revokeInactiveTokensAfter: DAYS_30 }) { organization { revokeInactiveTokensAfter } } } ``` ##### Pin SSO sessions to IP addresses You can require users to re-authenticate with your SSO provider when their IP address changes with the following call, replacing `ID` with the GraphQL ID of the SSO provider: ```graphql mutation UpdateSessionIPAddressPinning { ssoProviderUpdate(input: { id: "ID", pinSessionToIpAddress: true }) { ssoProvider { pinSessionToIpAddress } } } ``` ##### Enforce two-factor authentication (2FA) for your organization Require users to have two-factor authentication enabled before they can access your organization's Buildkite dashboard. ```graphql mutation EnableEnforced2FA { organizationEnforceTwoFactorAuthenticationForMembersUpdate( input: { organizationId: "organization-id", membersRequireTwoFactorAuthentication: true } ) { organization { id membersRequireTwoFactorAuthentication uuid } } } ``` ##### Create a user, add them to a team, and set user permissions Invite a new user to the organization, add them to a team, and set their role. First, get the organization and team ID: ```graphql query getOrganizationAndTeamId { organization(slug: "organization-slug") { id teams(first:500) { edges { node { id slug } } } } } ``` Then invite the user and add them to a team, setting their role to 'maintainer': ```graphql mutation CreateUser { organizationInvitationCreate(input: { organizationID: "organization-id", emails: ["user-email"], role: MEMBER, teams: [ { id: "team-id", role: MAINTAINER } ] }) { invitationEdges { node { email createdAt } } } } ``` ##### Get the creation timestamp for an organization member Use this to find out when the user was added to the organization. 
```graphql query getOrganizationMemberCreation { organization(slug: "organization-slug") { id members(search: "organization-member-name", first: 10) { edges { node { id createdAt user { id name email } } } } } } ``` ##### Update an organization member's role This updates an organization member's role to either `USER` or `ADMIN`. First, find the organization member's ID (`organization-member-id`) using their email address, noting that this ID value is not the same as the user's ID (`user-id`). ```graphql query getOrgMemberID{ organization(slug: "organization-slug") { members(first: 1, search: "user-email") { edges { node { role user { name email id } } } } } } ``` Then, use this `organization-member-id` value (retrieved from the query above) to update the organization member's role. ```graphql mutation UpdateOrgMemberRole { organizationMemberUpdate (input: {id:"organization-member-id", role:ADMIN}) { organizationMember { id role user { name } } } } ``` ##### Delete an organization member This deletes a member from an organization. This action does not delete their Buildkite user account. First, find the organization member's ID (`organization-member-id`) using their email address, noting that this ID value is not the same as the user's ID (`user-id`). ```graphql query getOrgMemberID{ organization(slug: "organization-slug") { members(first: 1, search: "user-email") { edges { node { id role user { name email } } } } } } ``` Then, use this `organization-member-id` value (retrieved from the query above) to delete the user from the organization. ```graphql mutation deleteOrgMember { organizationMemberDelete(input: { id: "organization-member-id" }){ organization{ name } deletedOrganizationMemberID user{ name } } } ``` ##### Get organization audit events Query your Buildkite organization's audit events. Audit events are only available to [Enterprise](https://buildkite.com/pricing/) plan customers. 
```graphql query getOrganizationAuditEvents { organization(slug:"organization-slug"){ auditEvents(first: 500){ edges{ node{ type occurredAt actor{ name } subject{ name type } } } } } } ``` To get all audit events in a given period, use the `occurredAtFrom` and `occurredAtTo` filters, as in the following query: ```graphql query getTimeScopedOrganizationAuditEvents { organization(slug:"organization-slug"){ auditEvents(first: 500, occurredAtFrom: "2023-01-01T12:00:00.000", occurredAtTo: "2023-01-01T13:00:00.000"){ edges{ node{ type occurredAt actor{ name } subject{ name type } } } } } } ``` ##### Get organization audit events of a specific user Query a specific user's audit events within a Buildkite organization. Audit events are only available to [Enterprise](https://buildkite.com/pricing/) plan customers. ```graphql query getActorRefinedOrganizationAuditEvents { organization(slug:"organization-slug"){ auditEvents(first: 500, actor: "user-id"){ edges{ node{ type occurredAt actor{ name } subject{ name type } } } } } } ``` To find the actor's `user-id` for the query above, run the following query, replacing the `search` term with the user's name or email address: ```graphql query getActorID { organization(slug:"organization-slug"){ members(first:50, search: "search term"){ edges{ node{ user{ name email id } } } } } } ``` ##### Create and delete system banners Create and delete system banners using the `organizationBannerUpsert` and `organizationBannerDelete` mutations. These features are only available to [Enterprise](https://buildkite.com/pricing/) plan customers. To create a banner, call `organizationBannerUpsert` with the organization's GraphQL ID and a message. ```graphql mutation OrganizationBannerUpsert { organizationBannerUpsert(input: { organizationId: "organization-id", message: "**Change to 2FA**: On October 1st ECommerce Inc will require 2FA to be set to access all Pipelines. 
\r\n\r\n---\r\n\r\nIf you have not already set up 2FA, please go to [https://buildkite.com/user/two-factor](https://buildkite.com/user/two-factor) and set it up now. ", }) { clientMutationId banner { id message uuid } } } ``` To remove the banner, call `organizationBannerDelete` with the organization's GraphQL ID. ```graphql mutation OrganizationBannerDelete { organizationBannerDelete(input: { organizationId: "organization-id" }) { deletedBannerId } } ``` --- ### Teams URL: https://buildkite.com/docs/apis/graphql/cookbooks/teams #### Teams A collection of common tasks with teams using the GraphQL API. You can test out the Buildkite GraphQL API using the Buildkite [GraphQL console](https://buildkite.com/user/graphql/console). This includes built-in documentation under its **Documentation** tab. ##### Create a team Create a new team. First, get the organization ID: ```graphql query getOrganizationId { organization(slug: "organization-slug") { id } } ``` Then use the ID to create a new team within the organization: ```graphql mutation CreateTeam { teamCreate(input: { organizationID: "organization-id", name: "team-name", privacy: SECRET, isDefaultTeam: false, defaultMemberRole: MEMBER }) { organization { uuid teams(first: 1, order: RECENTLY_CREATED) { count edges { node { name membersCanCreatePipelines membersCanCreateSuites membersCanCreateRegistries membersCanDestroyRegistries membersCanDestroyPackages } } } } } } ``` ##### Add an existing organization user to a team Add an organization member to a team. This does not create a new user. First, get a list of teams in the organization, to get the team ID: ```graphql query getOrgTeams { organization(slug: "organization-slug") { teams(first: 500) { edges { node { name id } } } } } ``` Then, add a team member. You can get the `user-id` using the example in [Search for organization members](/docs/apis/graphql/cookbooks/organizations#search-for-organization-members).
>📘 > `clientMutationId` is null when the mutation is successful. ```graphql mutation addTeamMember { teamMemberCreate(input: {teamID: "team-id", userID: "user-id"}) { clientMutationId } } ``` ##### Remove a team member This deletes a user from a team, but not from the organization. First, get a list of teams and members, to get the team IDs and current memberships: ```graphql query TeamMembersQuery { organization(slug: "organization-slug") { teams(first: 500) { edges { node { name id members(first: 100) { edges { node { role id user { name email id } } } } } } } } } ``` Then delete a team member. Check that you have the team member ID and not the user ID: >📘 > `clientMutationId` is null when the mutation is successful. ```graphql mutation deleteTeamMember { teamMemberDelete(input: {id: "team-member-id"}) { clientMutationId } } ``` ##### Get pipelines by team To get the first 100 pipelines managed by the first 100 teams, use the following query. ```graphql query getPipelinesByTeam { organization(slug: "organization-slug") { id name teams(first: 100) { pageInfo { hasNextPage endCursor } edges { node { name pipelines(first: 100) { pageInfo { hasNextPage endCursor } edges { node { pipeline { name } } } } } } } } } ``` If you have more than 100 teams or more than 100 pipelines per team, use the pagination information in `pageInfo` to get the next results page. ##### Search for team names and retrieve the teams' members The following query retrieves members of one or more teams within a Buildkite organization, along with each team member's role, based on a partial match to the teams' _name_ specified in the query. This query finds the first 200 members of the first team containing the letters "My te" (for example, "My team"), noting that any letters specified are case insensitive. 
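As an aside to the `pageInfo` note in **Get pipelines by team** above, cursor pagination follows the same loop for any of these connection queries. A minimal sketch, assuming a hypothetical `run_query(query, variables)` helper that POSTs the query and returns the decoded JSON response, and assuming the connection accepts the standard Relay `after` argument:

```python
TEAMS_QUERY = """
query Teams($after: String) {
  organization(slug: "organization-slug") {
    teams(first: 100, after: $after) {
      pageInfo { hasNextPage endCursor }
      edges { node { name } }
    }
  }
}
"""

def collect_team_names(run_query):
    """Follow `pageInfo` cursors until every page has been fetched.

    `run_query` is a hypothetical helper (not part of Buildkite's API) that
    sends a GraphQL query with variables and returns the decoded response.
    """
    names, cursor = [], None
    while True:
        teams = run_query(TEAMS_QUERY, {"after": cursor})["data"]["organization"]["teams"]
        names.extend(edge["node"]["name"] for edge in teams["edges"])
        if not teams["pageInfo"]["hasNextPage"]:
            return names
        cursor = teams["pageInfo"]["endCursor"]
```

The same loop shape applies to the `pipelines` and `members` connections used elsewhere in this cookbook.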
```graphql query GetTeamsAndTheirMembers { organization(slug:"organization-slug") { teams(first:1, search:"My te") { edges { node { name members(first:200) { edges { node { role user { name email } } } } } } } } } ``` ##### Get members from a specific team The following query retrieves members of a team, along with each team member's roles, which requires both the Buildkite organization and team slugs, separated by a `/`. This query finds the first 10 members of the team with slug `my-team` within the Buildkite organization with slug `organization-slug`. ```graphql query GetTeamMember { team(slug: "organization-slug/my-team") { id members(first:10) { edges { node { role user { name email } } } } } } ``` ##### Get teams and members with permissions The following query retrieves all teams in an organization and their members, showing the permissions enabled for each team. Use this query to identify which members across your organization have specific permissions for pipelines, suites, registries, and packages. ```graphql query TeamPermissions { organization(slug: "organization-slug") { teams(first: 100) { edges { node { name membersCanCreatePipelines membersCanCreateSuites membersCanCreateRegistries membersCanDestroyRegistries membersCanDestroyPackages members(first: 100) { edges { node { user { email } } } } } } } } } ``` ##### Set teams' pipeline edit access to READ_ONLY or BUILD_AND_READ Remove edit access from existing teams. This is helpful when you want to centralize pipeline edit permissions to a single system user, controlled by an organization admin. First, walk through all teams: ```graphql query Teams { organization(slug: "organization-slug") { teams(first: 500) { edges { node { slug } } } } } ``` Then, get the team pipeline IDs from the team slugs. Use the `id` returned here as the `team-pipeline-id` in the next step. 
```graphql query TeamPipelineIDs { team(slug: "organization-slug/team-slug") { pipelines(first: 500) { edges { node { id } } } } } ``` Finally, update all pipelines in a team to have either READ_ONLY or BUILD_AND_READ access: ```graphql mutation UpdateTeamPipelineReadonly { teamPipelineUpdate(input: { id: "team-pipeline-id", accessLevel: BUILD_AND_READ }) { teamPipeline { permissions { teamPipelineDelete { allowed code message } teamPipelineUpdate { allowed code message } } } clientMutationId } } ``` --- ### Overview URL: https://buildkite.com/docs/apis/graphql/portals #### Portals Buildkite's GraphQL API is accessed using an [authenticated API access token](/docs/apis/graphql-api#authentication) whose [scopes](/docs/apis/managing-api-tokens#token-scopes) cannot be restricted. The Buildkite _portals_ feature therefore provides restricted GraphQL API access to the Buildkite platform by allowing [Buildkite organization administrators](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions) to define GraphQL operations, which are stored by Buildkite and made accessible through an authenticated URL endpoint. These URL endpoints behave like custom REST API endpoints, whose responses are restricted to the GraphQL queries they're based on. Portals work well for machine-to-machine operations, since they're scoped to perform only the operations described within a [GraphQL document](https://spec.graphql.org/October2021/#sec-Language) and are not tied to user-owned access tokens. ##### Creating a portal Portals can only be created by [Buildkite organization administrators](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions). This section explains how to create a new example portal that triggers a build on the main branch of a pipeline. To start creating a new portal: 1.
Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. Select **Integrations > Portals** to access your organization's [**Portals**](https://buildkite.com/organizations/~/portals) page. 1. Select the **Create a portal** button. Note that if existing portals are present, select the **New Portal** button instead. At a minimum, a portal requires a **Name**, which is used to generate a unique endpoint, and a **GraphQL query**, which forms the portal's GraphQL document. 1. Specify your portal's **Name** (for example, **Trigger main build**). 1. Specify the definition for the operation that your portal is allowed to perform in **GraphQL query**. For example, use the following GraphQL mutation: ```graphql mutation triggerBuild { buildCreate(input:{ branch: "main", commit: "HEAD", pipelineID: "pipeline-id", }) { build { url } } } ``` **Tip:** You can get the GraphQL pipeline ID (for example, a value similar to `UGlwZWxpbmUtLS0wMTkzMDkxZC1lOTIUzzRhMWEtYWQ0NS1jMWJhNTA2N2RiMzQ=`) from your pipeline settings. 1. After completing these required fields and any others for this portal, select **Save Portal** to create the portal. A new HTTP endpoint is generated, along with a corresponding _portal token_ (a type of access token known as a _long-lived service token_), which you'll need to use later for authentication. 1. Save this portal token somewhere secure, as you won't be able to access its value again through the Buildkite interface. **Note:** This long-lived service token is scoped to the GraphQL operations defined in this portal. The token allows for the execution of these operations with administrator-level permissions, but does not allow operations outside those you've explicitly defined. 1. Make a request to your new endpoint. You can access it using the following `curl` command, replacing the organization slug with your own.
For example: ```sh curl -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d "{}" \ -X POST "https://portal.buildkite.com/organizations/my-organization/portals/trigger-main-build" ``` Voila! You've just created and executed a portal. >📘 Want more examples? > To explore our entire GraphQL API, check out our [GraphQL Explorer](https://buildkite.com/user/graphql/console) or our [GraphQL Cookbook](https://buildkite.com/docs/apis/graphql/graphql-cookbook). ##### Endpoint Each portal has a unique endpoint served from `https://portal.buildkite.com` with the following URL structure: ``` https://portal.buildkite.com/organizations/{organization.slug}/portals/{portal} ``` All requests must be `HTTP POST` requests with `application/json` encoded bodies. ##### Defining multiple operations Multiple GraphQL operations can be defined within a single portal [GraphQL document](https://spec.graphql.org/October2021/#sec-Language). This enables grouping related queries and mutations, such as those used in CLI tools or custom workflows, under a single portal token for more streamlined usage. The following example defines two operations in the same document—one to fetch recent builds, and another to trigger a new build: ```graphql query GetBuilds($pipelineSlug: ID!) { pipeline(slug: $pipelineSlug) { builds(last: 10, branch: "main") { edges { node { url } } } } } mutation triggerBuild($pipelineID: ID!) { buildCreate(input:{ branch: "main", commit: "HEAD", pipelineID: $pipelineID, }) { build { url } } } ``` >📘 > While multiple operations can exist in a portal document, only one can be executed per request. To run a specific operation, include its `operation_name` as a query parameter along with the relevant variables.
An example request for running the `GetBuilds` operation: ```sh curl -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d '{ "pipelineSlug": "organization-slug/pipeline-slug" }' \ -X POST "https://portal.buildkite.com/organizations/my-organization/portals/portal-slug?operation_name=GetBuilds" ``` ##### Authentication Similar to the Buildkite REST and GraphQL APIs, portals are authenticated with the associated portal token generated for a given portal. For example: ```sh curl -H "Authorization: Bearer $TOKEN" https://portal.buildkite.com/organizations/my-org/portals/my-portal ``` >📘 > If you need to generate a new long-lived service token (to replace an older or suspected compromised one), you can do this through the portal's **Security** page, by selecting the **New token** button next to the **Long lived service tokens** section, and then removing the existing or initial long-lived service token (created when the portal was created). ##### Passing arguments GraphQL operations may include arguments, which can be provided as part of the JSON request body. For example, given a portal that uses the following GraphQL query: ```graphql query GetTotalBuildRunTime($build_slug: ID!) { build(slug: $build_slug) { pipeline { name } url startedAt finishedAt } } ``` Calling this specific portal then requires `build_slug` to be included as part of the HTTP request. For example: ```sh curl -H "Authorization: Bearer $TOKEN" \ -H "Content-Type: application/json" \ -d '{ "build_slug": "organization-slug/pipeline-slug/build-number" }' \ -X POST "https://portal.buildkite.com/organizations/my-organization/portals/get-total-build-run-time" ``` --- ### Limits URL: https://buildkite.com/docs/apis/graphql/portals/limits #### Portal API rate limits To ensure stability and prevent excessive or abusive calls to the server, Buildkite imposes a limit on the number of portal API requests that can be made within a minute.
These limits apply to all portal API endpoints within an organization. ##### Rate limits Buildkite imposes a rate limit of 200 requests per minute for each Buildkite organization. This is the cumulative limit of all API requests made using portal tokens as well as user-scoped portal tokens in an organization. ##### Checking rate limit details The rate limit status is available in the following response headers of each API call. - `RateLimit-Remaining` - The remaining requests that can be made within the current time window. - `RateLimit-Limit` - The current rate limit imposed on your organization. - `RateLimit-Reset` - The number of seconds remaining until a new time window is started and limits are reset. - `Ratelimit-Scope` - This will be set as `portal` for all portal requests and helps identify different types of rate limits. For example, the following headers show a situation where 180 requests can still be made in the current window, with a limit of 200 requests per minute imposed on the organization, and 42 seconds before a new time window begins. ```js RateLimit-Remaining: 180 RateLimit-Limit: 200 RateLimit-Reset: 42 Ratelimit-Scope: 'portal' ``` ##### Exceeding the rate limit Once the rate limit is exceeded, subsequent API requests will return a 429 HTTP status code, and the `RateLimit-Remaining` header will be 0. You should not make any further requests until `RateLimit-Reset` indicates that a new time window has started. --- ### Ephemeral portal tokens URL: https://buildkite.com/docs/apis/graphql/portals/ephemeral-portal-tokens #### Ephemeral portal tokens When a [Buildkite portal is created](/docs/apis/graphql/portals#creating-a-portal), it's assigned a long-lived service token. However, in scenarios where security is a priority, it's advisable to use _ephemeral portal tokens_ instead. These tokens enhance security, since they are only valid for a short duration.
Ephemeral portal tokens have the same admin-level permissions as long-lived service tokens, but because they are only valid for a short time, they provide a more secure way to work with portals. ##### Generating a secret Before obtaining an ephemeral portal token, a Buildkite organization administrator must generate a _portal secret_ via the Buildkite interface. This secret is essential for requesting ephemeral portal tokens. Each portal can have up to two secrets, enabling safe rotation practices. To generate a portal secret for an [existing portal](/docs/apis/graphql/portals#creating-a-portal): 1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. Select **Integrations > Portals** to access your organization's [**Portals**](https://buildkite.com/organizations/~/portals) page. 1. Select the portal for which a portal secret will be generated, followed by the portal's **Security** tab. 1. Select the **New Secret** button to generate a new secret. 1. Save this portal secret somewhere secure, as you won't be able to access its value again through the Buildkite interface. ##### Requesting an ephemeral portal token With the portal secret generated, users can request an ephemeral portal token by making a POST request to the portal's token endpoint. This request should include the following parameters: - `grant_type`: must be set to `client_credentials`. - `client_id`: the portal's UUID, which is available on the portal's page. - `secret`: the previously generated portal secret.
An example `curl` command for this request is: ```bash curl -H "Content-Type: application/json" \ -d "{ \"grant_type\": \"client_credentials\", \"client_id\": \"$CLIENT_ID\", \"secret\": \"$SECRET\" }" \ -X POST "https://portal.buildkite.com/organizations/{org.slug}/portals/{portal.slug}/tokens" ``` The response will contain the ephemeral portal token and its expiration timestamp: ```json { "token": "bkpat_************************", "expires_at": "2025-10-12T12:16:44Z" } ``` ##### Token validity and custom expiration By default, ephemeral portal tokens are valid for up to an hour. Optionally, users can request tokens with a shorter duration by specifying the `expires_in` parameter (in minutes) in the token request: ```bash curl -H "Content-Type: application/json" \ -d "{ \"grant_type\": \"client_credentials\", \"client_id\": \"$CLIENT_ID\", \"secret\": \"$SECRET\", \"expires_in\": $MINUTES }" \ -X POST "https://portal.buildkite.com/organizations/{org.slug}/portals/{portal.slug}/tokens" ``` ##### Authentication using the ephemeral token Once obtained, the ephemeral portal token can be used to authenticate portal APIs by including it in the authorization header as a bearer token: ```bash curl -H "Authorization: Bearer $EPHEMERAL_PORTAL_TOKEN" \ -H "Content-Type: application/json" \ -X POST "https://portal.buildkite.com/organizations/{org.slug}/portals/{portal.slug}" ``` --- ### User-invoked portals URL: https://buildkite.com/docs/apis/graphql/portals/user-invoked-portals #### User-invoked portals User-invoked portals allow users within a Buildkite organization (also known as _Buildkite organization members_) to: - Execute GraphQL operations from a portal, ensuring that such operations run under the user's own permissions and identity. This approach is suitable when the user conducting such portal operations needs to be identified, or when user-specific permissions for such operations must be enforced.
- Authorize and generate short-lived tokens, providing a secure mechanism to execute API actions through these portals, without requiring API tokens to be stored on a developer's machine. ##### Short-lived portal token To use a user-invoked portal, Buildkite organization administrators must explicitly configure portals to be _user-invokable_. This gives administrators control over which portals allow user-invoked operations while preventing others from being user-invokable. Once a portal is marked as user-invokable, users can request a _token code_ and authorize it to retrieve a _user-specific portal token_ for executing portal operations. Unlike a [portal](/docs/apis/graphql/portals#creating-a-portal)'s long-lived service tokens, these portal tokens are _user-specific_: they only grant the privileges that the requesting user has within the Buildkite organization. ###### Generating token codes Users can generate a token code by making a `POST` request to the portal's code endpoint: ```bash curl -X POST "https://portal.buildkite.com/organizations/{org.slug}/portals/{portal.slug}/codes" ``` ```json { "code": "{code}", "secret": "{secret}", "authorization_url": "{authorization_url}", "expires_at": "2025-03-12T08:21:22Z" } ``` Token codes expire after 5 minutes. Users must authorize the token code before it expires in order to generate a portal token. ###### Authorizing using web interface To complete authorization, users must navigate to the authorization URL (provided by the `authorization_url` value when [generating token codes](#short-lived-portal-token-generating-token-codes)) and approve the token code in the request. Once authorized, the user may close the browser tab.
For this authorization process to succeed, the user must be both: - a member of the Buildkite organization - authenticated to Buildkite ###### Generating a portal token Once the token code is authorized, users can obtain a portal token by making a `POST` request to the portal's token endpoint. The request body must contain `grant_type` as `device_code` along with the `code` and `secret` obtained from [generating token codes](#short-lived-portal-token-generating-token-codes): ```bash curl -H "Content-Type: application/json" \ -d "{ \"grant_type\": \"device_code\", \"code\": \"$CODE\", \"secret\": \"$SECRET\" }" \ -X POST "https://portal.buildkite.com/organizations/{org.slug}/portals/{portal.slug}/tokens" ``` The response contains the generated user-specific portal token and its expiration timestamp: ```json { "token": "bkpat_************************", "expires_at": "2025-10-12T12:16:44Z" } ``` Token usage and expiration: - Each set of token codes can generate only a single user-specific portal token. - Portal tokens are valid for 12 hours by default. - Users can request their own portal tokens with a shorter duration if needed. - The portal token generated can be used to execute operations with the portal that was authorized by the user. ###### Custom expiration duration Optionally, an expiration duration can be specified (in minutes) if a shorter expiration is needed: ```bash curl -H "Content-Type: application/json" \ -d "{ \"grant_type\": \"device_code\", \"code\": \"$CODE\", \"secret\": \"$SECRET\", \"expires_in\": $MINUTES }" \ -X POST "https://portal.buildkite.com/organizations/{org.slug}/portals/{portal.slug}/tokens" ``` By leveraging user-invoked portals, administrators of Buildkite organizations can provide a flexible and secure mechanism for user-scoped GraphQL operations while maintaining strict access control.
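The steps above can be strung together in a small script. The following bash sketch wraps each step of the user-invoked flow in a function; the function names and slug values are illustrative assumptions, and only the endpoints and request bodies come from the documentation above:

```bash
# Sketch of the user-invoked portal token flow. Helper names and slugs are
# illustrative; only the endpoints and payloads come from the docs above.

# Build a portal's base endpoint URL from organization and portal slugs.
portal_url() {
  printf 'https://portal.buildkite.com/organizations/%s/portals/%s' "$1" "$2"
}

# Step 1: request a token code (response JSON includes code, secret,
# authorization_url, and expires_at).
request_code() {
  curl -s -X POST "$(portal_url "$1" "$2")/codes"
}

# Step 3 (after the user authorizes the code in a browser): exchange the
# code and secret for a user-specific portal token.
request_token() {
  curl -s -H "Content-Type: application/json" \
    -d "{ \"grant_type\": \"device_code\", \"code\": \"$3\", \"secret\": \"$4\" }" \
    -X POST "$(portal_url "$1" "$2")/tokens"
}

# Step 4: invoke the portal's operation using the bearer token.
invoke_portal() {
  curl -s -H "Authorization: Bearer $3" \
    -H "Content-Type: application/json" \
    -X POST "$(portal_url "$1" "$2")"
}
```

A caller could run `request_code`, extract `code`, `secret`, and `authorization_url` from the JSON response (for example, with `jq`), direct the user to authorize in a browser, then call `request_token` and pass the resulting token to `invoke_portal`.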
--- ### Limits URL: https://buildkite.com/docs/apis/graphql/graphql-resource-limits #### GraphQL resource limits To ensure that Buildkite stays stable for everyone, there are limits on how you can use the GraphQL API. These limits prevent excessive or abusive calls to the servers while still allowing you to use GraphQL endpoints in a wide range of ways. The limits are based on query complexity, which is calculated from the requested resources. We recommend using techniques for limiting calls, pagination, caching, and retrying requests to lower the complexity of queries. ##### Query complexity Every field type in the schema has an integer cost assigned to it. The cost of the query is the sum of the cost of each field. Usually, running the query is the best way to know the true cost of the query. A field's cost is based on its return type, using the following values. | Field type | Complexity value | |------------|------------------| | Scalar | 0 | | Enum | 0 | | Object | 1 | | Interface | 1 | | Union | 1 | Although these default costs are in place, Buildkite reserves the right to set different costs for specific fields. ##### Complexity calculation Buildkite calculates the cost of the query before and after the query execution. ###### Requested complexity The requested complexity is calculated based on the number of fields and objects requested. Usually, requesting a deeply nested query or excluding pagination details from connections results in high requested complexity. A simple query like the following would incur more than 500 requested complexity points, as it asks for up to 503 possible resources. ```graphql query RecentPipelineSlugs { organization(slug: "organization-slug") { # 1 point pipelines(first: 500) { # 1 point edges { # 1 point node { # 500 points slug # 0 points } } } } } ``` ###### Actual complexity The actual complexity is based on the results returned after the query execution, since the connection fields can return fewer nodes than requested.
Lowering the requested complexity usually lowers the actual complexity of queries. Taking the same query used earlier, if the organization has only 10 pipelines, the actual complexity will be around 13. ```graphql query RecentPipelineSlugs { organization(slug: "organization-slug") { # 1 point pipelines(first: 500) { # 1 point edges { # 1 point node { # 10 points slug # 0 points } } } } } ``` ##### Rate limits Buildkite enforces limits on the GraphQL endpoints to keep the platform operating smoothly and efficiently, and to manage and allocate resources fairly. The GraphQL API enforces two rate limits, both measured in actual complexity points. A request is rejected if either is exceeded: - An [organization-level limit](#rate-limits-organization-time-based-rate-limit) shared across all users in the organization. - A [per-user limit](#rate-limits-per-user-rate-limit). The default per-user limit is 5,000 complexity points per five minutes. There is also a [single query limit](#rate-limits-single-query-limit) that caps the maximum complexity of any individual query. ###### Single query limit Buildkite's API has a requested complexity limit of 50,000 for each individual query. This limit is enforced prior to query execution. The intention of this limit is to prevent users from requesting an excessive number of resources in a single query. As a best practice, we recommend breaking up queries into smaller, more manageable chunks and utilizing pagination to navigate through the resulting list rather than relying on a single large query. If a query exceeds the limit, the response returns an HTTP 200 status code with the following error.
```json { "errors": [ { "message": "Query has complexity of 251503, which exceeds max complexity of 50000" } ] } ``` ###### Organization-level time-based rate limit To ensure optimal performance, a Buildkite organization can use up to 20,000 actual complexity points within a 5-minute period. By allowing a set number of actual complexity points, you have the flexibility to run queries of different sizes within a 5-minute window. As a best practice, we recommend utilizing client-side strategies like the following to manage time-based rate limits: - Caching to lower the number of API calls. - Queues to schedule API calls. - Pagination to only request the necessary data. If an organization exceeds the 20,000 point limit, the response returns HTTP 429 status code with the following error. ```json { "errors": [ { "message": "Your organization has exceeded the limit of 20000 complexity points. Please try again in 187 seconds." } ] } ``` ###### Per-user rate limit In addition to the organization-level limit, the GraphQL API enforces a per-user complexity limit on requests. This limit prevents a single user from consuming the entire organization's GraphQL quota. The per-user limit is evaluated for the authenticated user associated with the API access token. The default per-user limit is 5,000 complexity points per five minutes. A request's complexity counts towards both the per-user limit and the [organization-level limit](#rate-limits-organization-time-based-rate-limit). The request is rejected with a `429` status code if either limit is exceeded. Check the `RateLimit-User-Remaining` response header to monitor your per-user quota. If a user exceeds their per-user complexity limit, the response returns HTTP 429 status code with the following error. ```json { "errors": [ { "message": "You have exceeded your per-user limit of 5000 complexity points. Please try again in 187 seconds." 
} ] } ``` Organization administrators can view the per-user limits that apply to their organization on the [**Service Quotas**](https://buildkite.com/organizations/~/quotas) page, accessible from **Settings** > **Quotas** in the Buildkite interface. ##### Accessing limit details You can access both time-based limits and query complexity information through the API. Accessing limit details will not incur any additional complexity points. ###### Check time-based limits Every GraphQL API response includes two independent sets of rate limit headers: - one for the [organization-level limit](#rate-limits-organization-time-based-rate-limit) - one for the [per-user limit](#rate-limits-per-user-rate-limit). You can monitor both limits independently and determine which one your application is closer to reaching. The `RateLimit-*` headers track the organization's shared complexity quota, while the `RateLimit-User-*` headers track the quota for the authenticated user making the request. A `429` response is returned if either limit is exceeded. Organization-level headers: | Header | Description | |--------|-------------| | `RateLimit-Remaining` | The remaining complexity points within the current organization time window. | | `RateLimit-Limit` | The organization complexity limit for the time window. | | `RateLimit-Reset` | The number of seconds remaining until the organization time window resets. | Per-user headers: | Header | Description | |--------|-------------| | `RateLimit-User-Remaining` | The remaining complexity points for the authenticated user within the current time window. | | `RateLimit-User-Limit` | The per-user complexity limit for the time window. | | `RateLimit-User-Reset` | The number of seconds remaining until the per-user time window resets. | For example, the following response headers show an authenticated user with 3,500 complexity points remaining against their per-user limit of 5,000. 
The organization has 15,000 points remaining against its limit of 20,000: ```js RateLimit-Remaining: 15000 RateLimit-Limit: 20000 RateLimit-Reset: 300 RateLimit-User-Remaining: 3500 RateLimit-User-Limit: 5000 RateLimit-User-Reset: 300 ``` ###### View query complexity The query complexity status is available in the following response headers of each GraphQL call: | Header | Description | |--------|-------------| | `RateLimit-Complexity-Requested` | The requested complexity of the query, based on the maximum possible data that the query could return. | | `RateLimit-Complexity-Actual` | The actual complexity based on the query response. | If reading response headers is not possible, you can include the complexity data in the response body by setting the `Buildkite-Include-Query-Stats` request header to `true`. This returns the complexity data in the response like the following: ```json { "data" : { "organization": { "name": "Buildkite" } }, "stats" : { "requestedComplexity": 1910, "actualComplexity": 550 } } ``` ##### Best practices to avoid rate limit errors Designing your client application with best practices in mind is the simplest way to avoid throttling errors. For example, you can stagger API requests in a queue and do other processing tasks while waiting for the next queued job to run. Consider the following best practices when designing your API usage: - Optimize the request by only requesting the data you require. We recommend using specific queries rather than a single all-purpose query. - Always use appropriate `first` or `last` values when requesting connections. Omitting these values may cause them to default to 500, which can dramatically increase the requested complexity. Some connections support a higher maximum — for example, `Build.metaData` accepts `first` values up to 10,000. - Use strategies like caching for data you use often that is unlikely to be updated instead of constantly calling APIs. - Regulate the rate of your requests for smoother distribution.
You can do this using queues or by scheduling API calls at appropriate intervals. - Use metadata about your API usage, including rate limit status, to manage behavior dynamically. - Consider rate limits when designing your client application. Be mindful of retries, errors, loops, and the frequency of API calls. --- ### Overview URL: https://buildkite.com/docs/apis/mcp-server #### Buildkite MCP server overview The [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) is an open standard for connecting artificial intelligence (AI) tools, agents, and models to a variety of other systems and data sources. Buildkite provides its own [open-source MCP server](https://github.com/buildkite/buildkite-mcp-server) to expose Buildkite product data (for example, data from pipelines, builds, and jobs for Pipelines, including test data for Test Engine) for AI tools, editors, agents, and other products to interact with. Buildkite's MCP server is built on and interacts with the [Buildkite REST API](/docs/apis/rest-api). Learn more about what the MCP server is capable of in the [MCP tools overview](/docs/apis/mcp-server/tools). To start using Buildkite's MCP server, first determine which [type of Buildkite MCP server](#types-of-mcp-servers) to work with. The next section outlines the differences between these MCP server types and how each needs to be configured. Once you have established which Buildkite MCP server to use (remote or local) and, if local, have [installed the MCP server](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally) and [configured its API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), you can then proceed to configure your AI tools to work with the [remote](/docs/apis/mcp-server/remote/configuring-ai-tools) (recommended) or [local](/docs/apis/mcp-server/local/configuring-ai-tools) MCP server.
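For instance, many AI tools register MCP servers through a JSON configuration file using the common `mcpServers` convention. The exact file name, location, and schema vary by tool, so treat this snippet as an illustrative assumption rather than a Buildkite-prescribed format; only the server URL is Buildkite's:

```json
{
  "mcpServers": {
    "buildkite": {
      "url": "https://mcp.buildkite.com/mcp"
    }
  }
}
```

Consult your AI tool's own documentation for where this configuration lives and how it handles the OAuth authorization step.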
##### Types of MCP servers Buildkite provides both a [remote](#types-of-mcp-servers-remote-mcp-server) and [local](#types-of-mcp-servers-local-mcp-server) MCP server, both of which provide access to its [MCP server tools](/docs/apis/mcp-server/tools#available-mcp-tools). ###### Remote MCP server The _remote_ MCP server is hosted by Buildkite, and is available for all users to access at the following URL: ```url https://mcp.buildkite.com/mcp ``` This type of MCP server is typically used by AI tools that you interact with directly from a prompt, and it's the recommended MCP server type to use. ###### What it's suitable for and advantages The remote MCP server is suitable for personal usage with an AI tool, as it has the following advantages. - You don't need to configure an API access token, which poses a potential security risk if leaked. Instead, you only require your Buildkite user account. The Buildkite platform issues a short-lived OAuth access token representing your user account for authentication, with both _read_ and _write_ access permission scopes pre-set by the Buildkite platform for authorization. This OAuth token auth process takes place after [configuring your AI tool with the remote MCP server](/docs/apis/mcp-server/remote/configuring-ai-tools) and connecting to it. **Notes:** * OAuth access tokens are valid for 12 hours, and the refresh tokens are valid for seven days. * These OAuth access tokens provide both read and write access to the remote MCP server. If you'd prefer to restrict your access to read-only, a [read-only version of the MCP server](#read-only-remote-mcp-server) is also available. - There is no need to install or upgrade any software. Since the remote MCP server undergoes frequent updates, you get access to new features and fixes automatically. - The remote MCP server has a rate limit quota separate from your [Buildkite organization's REST API](/docs/apis/rest-api/rate-limits) quota.
See [Remote MCP server rate limits](/docs/apis/mcp-server/remote/rate-limits). ###### What it's not suitable for The remote MCP server is not suitable for use in automated workflows, where running a specific version of the MCP server is important for generating consistent results. ###### Read-only remote MCP server Buildkite also provides a version of the remote MCP server with read-only access to the Buildkite platform. This version is available for all users to access at the following URL: ```url https://mcp.buildkite.com/mcp/readonly ``` This remote MCP server version issues a short-lived OAuth access token for your Buildkite user account, along with _read-only_ access permission scopes pre-set by the Buildkite platform. Hence, when using this remote MCP server, only [MCP tools](/docs/apis/mcp-server/tools#available-mcp-tools) whose required [token scope](/docs/apis/managing-api-tokens#token-scopes) begins with `read_` are available, as well as tools with no required scope specified. > 📘 > Read-only access can also be configured in a similar manner for [toolsets](/docs/apis/mcp-server/tools/toolsets). Learn more about this in [Configuring AI tools with the remote MCP server](/docs/apis/mcp-server/remote/configuring-ai-tools) and [Remote MCP server configuration for toolsets](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server). ###### Local MCP server The _local_ MCP server is one that you install yourself, directly on your own machine or in a containerized environment. This type of MCP server is typically used by AI tools acting as _AI agents_, which an automated system or workflow (such as a Buildkite pipeline) interacts with. Such AI agent interactions are usually shell-based. ###### What it's suitable for The local MCP server enables automated workflows (for example, using [Buildkite Pipelines](/docs/pipelines)), where running a specific version of the MCP server is important for generating consistent results.
Also, if you want to contribute to the [Buildkite MCP server project](https://github.com/buildkite/buildkite-mcp-server), the local MCP server allows you to run and test your changes locally. ###### What it's not suitable for and disadvantages The local MCP server is not suitable for personal usage with an AI tool, as it has the following disadvantages. - Since your Buildkite API access token is used for authentication and authorization to the MCP server, you'll need to manage the security of this token (for example, preventing leaks), including its storage in plain text. - You'll also need to manage upgrades to the MCP server yourself, especially if you choose to install the binary version of the local MCP server, which means you may miss out on new and updated features offered automatically through the [remote MCP server](#types-of-mcp-servers-remote-mcp-server). - Unlike the [remote MCP server](#types-of-mcp-servers-remote-mcp-server), requests made through the local MCP server count towards your Buildkite organization's standard [REST API rate limit](/docs/apis/rest-api/limits). If you intend to use the local Buildkite MCP server, learn more about how to set up and install it in [Installing the Buildkite MCP server](/docs/apis/mcp-server/local/installing). --- ### Overview URL: https://buildkite.com/docs/apis/mcp-server/tools #### MCP tools overview _MCP tools_ form the fundamental components of an _MCP server_, and provide the mechanisms through which AI tools and agents can access a system's APIs. Learn more about MCP tools in the [Core Server Features](https://modelcontextprotocol.io/docs/learn/server-concepts#core-server-features) and [Tools](https://modelcontextprotocol.io/docs/learn/server-concepts#tools) sections of the [Understanding MCP servers](https://modelcontextprotocol.io/docs/learn/server-concepts) page in the [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) docs.
##### Available MCP tools The Buildkite MCP server exposes the following categories of MCP tools. The names of these tools (for example, `list_pipelines`) typically do not need to be used in direct prompts to AI tools or agents. However, each MCP tool name is designed to be understandable, so that it can be used directly in a prompt when you want your AI tool or agent to explicitly use that MCP tool to query the Buildkite platform. As part of configuring your AI tool or agent with the [remote or local Buildkite MCP server](/docs/apis/mcp-server#types-of-mcp-servers), you can restrict its access to specific categories of tools using [toolsets](/docs/apis/mcp-server/tools/toolsets). Additionally, Buildkite recommends [configuring your project's `AGENTS.md` file with a hint](#the-agents-dot-md-file) to help guide your AI tool or agent to use the Buildkite MCP server and its tools with your project. > 📘 > While Buildkite's MCP server makes calls to the Buildkite REST API, note that in some cases, only a subset of the resulting fields are returned in the response to your AI tool or agent. This is done to reduce noise for your AI tool / agent, as well as reduce costs associated with text tokenization of the response (also known as token usage). ###### User, authentication and Buildkite organization These MCP tools are associated with [authentication](/docs/apis#authentication) and relate to querying details about the access token's user and Buildkite organization they belong to. | Tool | Description | `` | Required [token scope](/docs/apis/managing-api-tokens#token-scopes): ``. ###### Buildkite clusters These MCP tools are used to retrieve details about the [clusters](/docs/pipelines/security/clusters/manage) and their [queues](/docs/agent/queues/managing) configured in your Buildkite organization. Learn more about clusters in [Clusters overview](/docs/pipelines/security/clusters). 
| Tool | Description | `` | Required [token scope](/docs/apis/managing-api-tokens#token-scopes): ``. ###### Pipelines These MCP tools are used to retrieve details about existing [pipelines](/docs/apis/rest-api/pipelines) in [your Buildkite organization](/docs/apis/rest-api/organizations), as well as create new pipelines, and update existing ones. | Tool | Description | `` | Required [token scope](/docs/apis/managing-api-tokens#token-scopes): ``. ###### Builds These MCP tools are used to retrieve details about existing [builds](/docs/apis/rest-api/builds) of a [pipeline](#available-mcp-tools-pipelines), as well as create new builds, and wait for a specific build to finish. | Tool | Description | `` | Required [token scope](/docs/apis/managing-api-tokens#token-scopes): ``. ###### Jobs These MCP tools are used to retrieve the logs of [jobs](/docs/apis/rest-api/jobs) from a pipeline [build](#available-mcp-tools-builds), as well as unblock jobs in a pipeline build. A job's logs can then be processed by the [logs](#available-mcp-tools-logs) tools of the MCP server, for the benefit of your AI tool or agent. | Tool | Description | `` | Required [token scope](/docs/apis/managing-api-tokens#token-scopes): ``. ###### Logs These MCP tools are used to process the logs of [jobs](#available-mcp-tools-jobs), for the benefit of your AI tool or agent. These MCP tools leverage the [Buildkite Logs Search & Query Library](https://github.com/buildkite/buildkite-logs?tab=readme-ov-file#buildkite-logs-search--query-library) (used by the Buildkite MCP server), which converts the complex Buildkite logs returned by the Buildkite platform into [Parquet file](https://parquet.apache.org/docs/file-format/) versions of these log files, making the logs more consumable for AI tools, agents and large language models (LLMs). For improved performance, these Parquet log files are also cached and stored. Learn more about this in [Smart caching and storage](#smart-caching-and-storage). 
| Tool | Description | `` | ###### Artifacts These MCP tools are used to retrieve details about artifacts from a pipeline [build](#available-mcp-tools-builds), as well as obtain the artifacts themselves. | Tool | Description | `` | Required [token scope](/docs/apis/managing-api-tokens#token-scopes): ``. ###### Annotations These MCP tools are used to retrieve details about the annotations resulting from a pipeline [build](#available-mcp-tools-builds). | Tool | Description | `` | Required [token scope](/docs/apis/managing-api-tokens#token-scopes): ``. ###### Test Engine These MCP tools are used to retrieve details about Test Engine [tests](/docs/test-engine/glossary#test) and their [runs](/docs/test-engine/glossary#run) from a [test suite](/docs/test-engine/test-suites), along with other Test Engine-related data. | Tool | Description | `` | Required [token scope](/docs/apis/managing-api-tokens#token-scopes): ``. ##### Smart caching and storage To improve performance in accessing log data from the Buildkite platform, the Buildkite MCP server downloads and stores the [logs of jobs](/docs/apis/rest-api/jobs#get-a-jobs-log-output) in [Parquet file format](https://parquet.apache.org/docs/file-format/) to either of the following areas. - For the [local MCP server](/docs/apis/mcp-server#types-of-mcp-servers-local-mcp-server), on the file system of the machine running the MCP server. - For the [remote MCP server](/docs/apis/mcp-server#types-of-mcp-servers-remote-mcp-server), in a dedicated area of the Buildkite platform. These Parquet log files are stored and managed by the MCP server and all interactions with these files are managed by the [MCP server's log tools](#available-mcp-tools-logs). If the job is in a terminal state (for example, the job was completed successfully, had failed, or was canceled), then the job's Parquet format logs are downloaded and stored indefinitely. 
If the job is in a non-terminal state (for example, the job is still running or is blocked), then the job's Parquet logs are retained for 30 seconds. ###### Storage locations If you are running the [local MCP server](/docs/apis/mcp-server/local/installing), the following table indicates the default locations for these Parquet log files. | Environment | Default Parquet log file location | | You can override these default Parquet log file locations through the `$BKLOG_CACHE_URL` environment variable, which can be used with either a local file system path or an `s3://` path, where the latter may be better suited for pipeline usage, for example: ```bash # Local development with persistent cache export BKLOG_CACHE_URL="file:///Users/me/bklog-cache" # Shared cache across build agents export BKLOG_CACHE_URL="s3://ci-logs-cache/buildkite/" ``` ##### The AGENTS.md file The [`AGENTS.md` file](https://agents.md/) is used to help guide your AI tool or agent to work on a project. Depending on which AI tool or agent you use, this file might use a different name, such as `CLAUDE.md` for Claude Code. Buildkite recommends configuring your project's `AGENTS.md` file by adding a hint like the following to help your AI tool or agent use the Buildkite MCP server and its tools with your project: ```markdown - **CI/CD**: `my-buildkite-organization` Buildkite organization, `my-pipeline` pipeline slug for build and test (`.buildkite/pipeline.yml`), `my-pipeline-release` pipeline slug for releases (`.buildkite/pipeline.release.yml`) ``` You should replace your Buildkite organization, pipeline slugs, and pipeline file names with those applicable to your project. Add this hint to an appropriate section within your `AGENTS.md` file. For example, for a typical development project, you might add this hint to a series of existing ones in a section about architecture. 
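For instance, such an architecture section might look like the following hypothetical sketch, where the surrounding entries (language, testing) and their values are placeholders to adapt to your own project:

```markdown
## Architecture

- **Language**: TypeScript monorepo managed with pnpm
- **Testing**: unit tests run with `vitest`, integration tests under `tests/integration`
- **CI/CD**: `my-buildkite-organization` Buildkite organization, `my-pipeline` pipeline slug for build and test (`.buildkite/pipeline.yml`), `my-pipeline-release` pipeline slug for releases (`.buildkite/pipeline.release.yml`)
```

Keeping the CI/CD hint alongside other project facts gives your AI tool or agent the organization and pipeline slugs it needs when calling the MCP server's tools.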
--- ### Toolsets URL: https://buildkite.com/docs/apis/mcp-server/tools/toolsets #### Toolsets The [Buildkite MCP server](/docs/apis/mcp-server) organizes its [MCP tools](/docs/apis/mcp-server/tools#available-mcp-tools) into logical groups called _toolsets_, each of which can be selectively enabled on the MCP server, based on your requirements. ##### Available toolsets Each toolset groups related [MCP tools](/docs/apis/mcp-server/tools#available-mcp-tools), which interact with specific areas of the Buildkite platform. You can enable these individual toolsets by configuring them for the [remote](#configuring-the-remote-mcp-server) or [local](#configuring-the-local-mcp-server) Buildkite MCP server. Doing so effectively restricts your AI tool's or agent's access to the Buildkite API, based on each set of MCP tools made available through the MCP server's configured toolsets. Also, see [Recommended toolset configurations](#recommended-toolset-configurations) for details on how to configure different combinations of toolsets for different use cases. | Toolset (name) | Description | Tools | `` | | ##### Configuring the remote MCP server You can configure toolset availability for the [remote MCP server](/docs/apis/mcp-server#types-of-mcp-servers-remote-mcp-server) by adding the required [toolset names](#available-toolsets) as part of an [extension to the remote MCP server's URL](#configuring-the-remote-mcp-server-using-a-url-extension) (for a single toolset only), or alternatively, and for multiple toolsets, as part of the [header of requests](#configuring-the-remote-mcp-server-using-headers) sent to the Buildkite platform from the remote MCP server. You can also configure [read-only access](/docs/apis/mcp-server#read-only-remote-mcp-server) to the remote MCP server as part of this process, and when configuring multiple toolsets, be [selective over which ones have read-only access](#configuring-the-remote-mcp-server-selective-read-only-access-to-toolsets). 
###### Using a URL extension When [configuring your AI tool with the remote MCP server](/docs/apis/mcp-server/remote/configuring-ai-tools), you can enable a single toolset by appending `/x/{toolset.name}` to the end of the remote MCP server URL (`https://mcp.buildkite.com/mcp`), where `{toolset.name}` is the name of the [toolset](#available-toolsets) you want to enable. To enforce read-only access, append `/readonly` to the end of `/x/{toolset.name}`. ###### Examples To enable the `builds` toolset for the remote MCP server, configure your AI tool with the following URL: ```url https://mcp.buildkite.com/mcp/x/builds ``` To enforce read-only access to this remote MCP server toolset, configure your AI tool with this URL instead: ```url https://mcp.buildkite.com/mcp/x/builds/readonly ``` > 📘 > The remote MCP server URL `https://mcp.buildkite.com/mcp` without any further extension provides unrestricted access to the Buildkite API, limited only by all applicable [token scopes](/docs/apis/managing-api-tokens#token-scopes) available to your Buildkite user account's access token, and what you can access on the Buildkite platform. ###### Using headers When [configuring your AI tool with the remote MCP server](/docs/apis/mcp-server/remote/configuring-ai-tools), you can enable one or more toolsets by specifying their [toolset names](#available-toolsets) as a single-line comma-separated list value for the `X-Buildkite-Toolsets` header of requests sent to the Buildkite platform from the remote MCP server. To enforce read-only access, use the [read-only remote MCP server URL](/docs/apis/mcp-server#read-only-remote-mcp-server). 
###### Examples To enable the `builds` toolset for the remote MCP server, configure your AI tool with the [standard remote MCP server URL](/docs/apis/mcp-server#types-of-mcp-servers-remote-mcp-server): ```url https://mcp.buildkite.com/mcp ``` along with the required toolset value specified in the request header `X-Buildkite-Toolsets`, that is: ```url X-Buildkite-Toolsets: builds ``` To enable the `user`, `pipelines`, and `builds` toolsets for the remote MCP server, configure your AI tool with this [standard remote MCP server URL](/docs/apis/mcp-server#types-of-mcp-servers-remote-mcp-server), along with these toolset values specified in the `X-Buildkite-Toolsets` header: ```url X-Buildkite-Toolsets: user,pipelines,builds ``` To enforce read-only access across all of these toolsets, use the [read-only remote MCP server URL](/docs/apis/mcp-server#read-only-remote-mcp-server): ```url https://mcp.buildkite.com/mcp/readonly ``` along with the following request header: ```url X-Buildkite-Toolsets: user,pipelines,builds ``` You can also be [selective with read-only access to toolsets](#configuring-the-remote-mcp-server-selective-read-only-access-to-toolsets). > 📘 > Learn more about how to configure different AI tools with these header configurations in [Configuring AI tools with the remote MCP server](/docs/apis/mcp-server/remote/configuring-ai-tools). > Instead of using the [read-only remote MCP server URL](/docs/apis/mcp-server#read-only-remote-mcp-server), you could use the [standard remote MCP server URL](/docs/apis/mcp-server#types-of-mcp-servers-remote-mcp-server) (`https://mcp.buildkite.com/mcp`), along with the additional header of `X-Buildkite-Readonly: true`. However, for simplicity, using the read-only remote MCP server URL is preferred. 
> Using the [standard remote MCP server URL](/docs/apis/mcp-server#types-of-mcp-servers-remote-mcp-server) and omitting the `X-Buildkite-Toolsets` and `X-Buildkite-Readonly` headers from these configurations provides unrestricted access to the Buildkite API, limited only by all applicable [token scopes](/docs/apis/managing-api-tokens#token-scopes) available to your Buildkite user account's access token, and what you can access on the Buildkite platform. ###### Selective read-only access to toolsets If you want to enable multiple [toolsets](#available-toolsets), but be selective over which of these have read-only access, you'll need to create two remote MCP server configurations ([using headers](#configuring-the-remote-mcp-server-using-headers)) for your AI tool—one for toolsets with both read and write access, and the other for toolsets with read-only access. ###### Examples To enable the `builds` toolset with both read and write access, and the `user` and `pipelines` toolsets with read-only access, create two remote MCP server configurations—one with the [standard URL](/docs/apis/mcp-server#types-of-mcp-servers-remote-mcp-server): ```url https://mcp.buildkite.com/mcp ``` and the other with the [read-only URL](/docs/apis/mcp-server#read-only-remote-mcp-server): ```url https://mcp.buildkite.com/mcp/readonly ``` For the `builds` toolset with read and write access to the remote MCP server, implement the request header for your remote MCP server configuration which uses the _standard URL_: ```url X-Buildkite-Toolsets: builds ``` And for the `user` and `pipelines` toolsets with read-only access to the remote MCP server, implement the request header for your MCP server configuration which uses the _read-only URL_: ```url X-Buildkite-Toolsets: user,pipelines ``` You could also [use the URL extension](#configuring-the-remote-mcp-server-using-a-url-extension) approach to do this by implementing three separate remote MCP server configurations, each of whose URLs are 
respectively: ```url https://mcp.buildkite.com/mcp/x/builds https://mcp.buildkite.com/mcp/x/user/readonly https://mcp.buildkite.com/mcp/x/pipelines/readonly ``` > 📘 > Ensure you provide an appropriate name for each MCP server configuration to make it easier to identify which toolsets and level of access each server has to the Buildkite API. > For example, instead of `buildkite` as an MCP server configuration name, use more descriptive names, for example: `buildkite-read-only-user-pipelines-toolsets` and `buildkite-builds-toolset`. ##### Configuring the local MCP server You can configure toolset availability for the [local MCP server](/docs/apis/mcp-server#types-of-mcp-servers-local-mcp-server) by adding the required [toolset names](#available-toolsets) as part of an environment variable or command-line flag when either the [Docker](#configuring-the-local-mcp-server-using-docker) or [binary](#configuring-the-local-mcp-server-using-the-binary) version of the local MCP server is started. You can also configure read-only access to the local MCP server as part of this process, and when configuring multiple toolsets, be [selective over which ones have read-only access](#configuring-the-local-mcp-server-selective-read-only-access-to-toolsets). ###### Using Docker When [configuring your AI tool with the local MCP server](/docs/apis/mcp-server/local/configuring-ai-tools) running in [Docker](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-docker), you can enable one or more toolsets by adding the `BUILDKITE_TOOLSETS` environment variable to the `docker run` command, and specifying the [toolset names](#available-toolsets) as a comma-separated list value for this variable. To enforce read-only access, also add the `BUILDKITE_READ_ONLY` environment variable with a value of `true` to this command. 
###### Examples To enable the `builds` toolset for the local MCP server, configure the `docker run` command with: ```bash docker run --rm -e BUILDKITE_API_TOKEN=bkua_xxxxx -e BUILDKITE_TOOLSETS="builds" buildkite/mcp-server stdio ``` To enable the `user`, `pipelines`, and `builds` toolsets for the local MCP server, and enforce read-only access across all of these toolsets, configure the `docker run` command with: ```bash docker run --rm -e BUILDKITE_API_TOKEN=bkua_xxxxx -e BUILDKITE_TOOLSETS="user,pipelines,builds" -e BUILDKITE_READ_ONLY="true" buildkite/mcp-server stdio ``` You can also be [selective with read-only access to toolsets](#configuring-the-local-mcp-server-selective-read-only-access-to-toolsets). Most [AI tool or agent configurations for the local MCP server](/docs/apis/mcp-server/local/configuring-ai-tools) pass the `docker run` command's environment variables using both an `args` array and an `env` object in their JSON configuration file. Hence, the example above would be configured in these JSON files as: ```json { ... "buildkite-read-only-toolsets": { "command": "docker", "args": [ "run", "--pull=always", "-q", "-i", "--rm", "-e", "BUILDKITE_API_TOKEN", "-e", "BUILDKITE_TOOLSETS", "-e", "BUILDKITE_READ_ONLY", "buildkite/mcp-server", "stdio" ], "env": { "BUILDKITE_API_TOKEN": "bkua_xxxxx", "BUILDKITE_TOOLSETS": "user,pipelines,builds", "BUILDKITE_READ_ONLY": "true" } } ... } ``` > 📘 > Specifying `BUILDKITE_TOOLSETS` with a value of `all` enables all available toolsets; `all` is also the default when this environment variable is omitted. > Omitting the `BUILDKITE_TOOLSETS` and `BUILDKITE_READ_ONLY` environment variables from these `docker run` commands provides unrestricted access to the Buildkite API, limited only by all applicable [token scopes](/docs/apis/managing-api-tokens#token-scopes) available to the Buildkite user account's API access token, and what it can access on the Buildkite platform. 
###### Using the binary When [configuring your AI tool with the local MCP server](/docs/apis/mcp-server/local/configuring-ai-tools) running as a [pre-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-a-pre-built-binary) or [source-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-building-from-source) binary, you can enable one or more toolsets by adding the `--enabled-toolsets` flag to the `buildkite-mcp-server` command, and specifying the [toolset names](#available-toolsets) as a comma-separated list value for this flag. To enforce read-only access, also add the `--read-only` flag. ###### Examples To enable the `builds` toolset for the local MCP server, configure the `buildkite-mcp-server` command with: ```bash buildkite-mcp-server stdio --api-token=bkua_xxxxx --enabled-toolsets="builds" ``` To enable the `user`, `pipelines`, and `builds` toolsets for the local MCP server, and enforce read-only access across all of these toolsets, configure the `buildkite-mcp-server` command with: ```bash buildkite-mcp-server stdio --api-token=bkua_xxxxx --enabled-toolsets="user,pipelines,builds" --read-only ``` You can also be [selective with read-only access to toolsets](#configuring-the-local-mcp-server-selective-read-only-access-to-toolsets). Most [AI tool or agent configurations for the local MCP server](/docs/apis/mcp-server/local/configuring-ai-tools) specify these settings through an `env` object in their JSON configuration file, using the equivalent environment variables instead of command-line flags. Hence, the example above would be configured in these JSON files as: ```json { ... "buildkite-read-only-toolsets": { "command": "buildkite-mcp-server", "args": ["stdio"], "env": { "BUILDKITE_API_TOKEN": "bkua_xxxxx", "BUILDKITE_TOOLSETS": "user,pipelines,builds", "BUILDKITE_READ_ONLY": "true" } } ... 
} ``` > 📘 > Specifying the `BUILDKITE_TOOLSETS` environment variable or the `--enabled-toolsets` flag with a value of `all` enables all available toolsets; `all` is also the default when this environment variable or flag is omitted. > Omitting the `BUILDKITE_TOOLSETS` and `BUILDKITE_READ_ONLY` environment variables (or `--enabled-toolsets` and `--read-only` flags) from these `buildkite-mcp-server` commands provides unrestricted access to the Buildkite API, limited only by all applicable [token scopes](/docs/apis/managing-api-tokens#token-scopes) available to the Buildkite user account's API access token, and what it can access on the Buildkite platform. ###### Selective read-only access to toolsets If you want to enable multiple [toolsets](#available-toolsets), but be selective over which of these have read-only access, you'll need to create two local MCP servers for your AI tool or agent, each with different configurations—one for toolsets with both read and write access, and the other for toolsets with read-only access. ###### Examples To enable the `user` and `pipelines` toolsets with read-only access, and the `builds` toolset with both read and write access, create two local MCP servers (in this case, running in Docker) each with these different configurations. 
For the `user` and `pipelines` toolsets with read-only access, configure the local MCP server's `docker run` command with: ```bash docker run --rm -e BUILDKITE_API_TOKEN=bkua_xxxxx -e BUILDKITE_TOOLSETS="user,pipelines" -e BUILDKITE_READ_ONLY="true" buildkite/mcp-server stdio ``` And for the `builds` toolset with read and write access, configure the local MCP server's `docker run` command with: ```bash docker run --rm -e BUILDKITE_API_TOKEN=bkua_xxxxx -e BUILDKITE_TOOLSETS="builds" buildkite/mcp-server stdio ``` > 📘 > When [configuring your AI tool or agent with these local MCP servers](/docs/apis/mcp-server/local/configuring-ai-tools), ensure you provide an appropriate name for each MCP server configuration to make it easier to identify which toolsets and level of access each server has to the Buildkite API. > For example, instead of: > `"buildkite": { ... }` > Use more descriptive names, for example: > `"buildkite-read-only-user-pipelines-toolsets": { ... }` > and > `"buildkite-builds-toolset": { ... }` ##### Recommended toolset configurations Once you've learned how to configure the [remote](#configuring-the-remote-mcp-server) or [local](#configuring-the-local-mcp-server) MCP server for toolsets, you can configure different combinations of [toolsets](#available-toolsets) for different use cases. ###### Recommended minimum baseline As a recommended minimum baseline, always include the `user` toolset as its tools provide essential user and organization information that many AI workflows depend on. 
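As a sketch of this baseline, a local MCP server entry (running in Docker, mirroring the JSON configuration shape shown earlier) that enables only the `user` toolset might look like the following, where the server name and API token are placeholder values:

```json
{
  "buildkite-user-toolset": {
    "command": "docker",
    "args": [
      "run", "--pull=always", "-q", "-i", "--rm",
      "-e", "BUILDKITE_API_TOKEN",
      "-e", "BUILDKITE_TOOLSETS",
      "buildkite/mcp-server", "stdio"
    ],
    "env": {
      "BUILDKITE_API_TOKEN": "bkua_xxxxx",
      "BUILDKITE_TOOLSETS": "user"
    }
  }
}
```

You can then layer the toolsets from the use cases below on top of this baseline by extending the `BUILDKITE_TOOLSETS` value.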
###### CI/CD management For CI/CD management, set the following MCP server toolsets: - `user` - `pipelines` - `builds` ###### Debugging and analysis For debugging and analysis of pipeline builds, set the following MCP server toolsets: - `user` - `builds` - `logs` - `tests` - `annotations` ###### Full access For full access to the Buildkite MCP server's toolsets: - If you are using the [remote MCP server](#configuring-the-remote-mcp-server), don't configure any toolsets, and instead, only configure the remote MCP server URL: `https://mcp.buildkite.com/mcp`. See [Configuring AI tools with the remote MCP server](/docs/apis/mcp-server/remote/configuring-ai-tools). - If you are using the [local MCP server](#configuring-the-local-mcp-server), also don't configure any toolsets (see [Configuring AI tools with the local MCP server](/docs/apis/mcp-server/local/configuring-ai-tools)), or, if you want to be explicit about this in your configuration, set the `BUILDKITE_TOOLSETS` environment variable or the `--enabled-toolsets` flag to a value of `all`. --- ### Configuring AI tools URL: https://buildkite.com/docs/apis/mcp-server/remote/configuring-ai-tools #### Configuring AI tools with the remote MCP server This page explains how to configure your AI tool to work with the [_remote_ Buildkite MCP server](/docs/apis/mcp-server#types-of-mcp-servers-remote-mcp-server). > 📘 > The Buildkite MCP server is available both [locally and remotely](/docs/apis/mcp-server#types-of-mcp-servers). This page is about configuring AI tools with the remote MCP server. If you are using an AI tool or agent and would prefer it to work with the _local_ MCP server, ensure you have followed the required instructions on [Installing the Buildkite MCP server](/docs/apis/mcp-server/local/installing) locally first, before proceeding with the relevant instructions on its [Configuring AI tools](/docs/apis/mcp-server/local/configuring-ai-tools) page. 
##### Organization IP allowlist considerations If your Buildkite organization has an [API IP allowlist](/docs/apis/managing-api-tokens#restricting-api-access-by-ip-address) configured, you must add Buildkite's egress IP addresses to this allowlist for the remote MCP server to function. The remote MCP server makes API calls from Buildkite's infrastructure, and these requests are subject to your organization's API IP allowlist. Buildkite's current egress IP addresses are available from the [meta API endpoint](/docs/apis/rest-api/meta). ##### Amp You can configure [Amp](https://ampcode.com/) with the remote Buildkite MCP server by adding the following JSON configuration to your [Amp `settings.json` file](https://ampcode.com/manual#configuration), which uses the `mcp-remote` command to support OAuth authorization. Learn more about this type of configuration in the [Custom Tools (MCP)](https://ampcode.com/manual#mcp) section of the Amp docs. ```json { "amp.mcpServers": { "buildkite": { "command": "npx", "args": [ "mcp-remote", "https://mcp.buildkite.com/mcp" ] } } } ``` Once connected to the remote MCP server, if you need a new OAuth token, the **Authorize Application** for the **Buildkite MCP Server** page appears. If so, scroll down and select your Buildkite organization in **Authorize for organization**, followed by **Authorize**. You're now ready to use Buildkite's remote MCP server through Amp for this Buildkite organization. 
###### Toolsets and read-only access To enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or [configure read-only access](/docs/apis/mcp-server#read-only-remote-mcp-server), or both, for the remote MCP server with [Amp](#amp), you can add the following headers to this configuration, for example: ```json { "amp.mcpServers": { "buildkite-read-only-toolsets": { "command": "npx", "args": [ "mcp-remote", "https://mcp.buildkite.com/mcp/readonly", "--header", "X-Buildkite-Toolsets: user,pipelines,builds" ] } } } ``` This example applies read-only access to all toolsets specified in the `X-Buildkite-Toolsets` header. However, you can be selective over which toolsets have read-only access. Learn more about this in [Selective read-only access to toolsets](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-selective-read-only-access-to-toolsets). Alternatively, instead of using headers, you could also [use the URL extension](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-using-a-url-extension) approach by creating multiple MCP server configurations—one for each toolset. Ensure you provide an appropriate name for each MCP server configuration to make it easier to identify which toolsets and level of access each server has to the Buildkite API. ##### Claude Code You can configure [Claude Code](https://www.anthropic.com/claude-code) with the remote Buildkite MCP server by running the relevant Claude Code command, after [installing Claude Code](https://docs.anthropic.com/en/docs/claude-code/overview). ```bash claude mcp add --transport http buildkite https://mcp.buildkite.com/mcp ``` Once connected to the remote MCP server, if you need a new OAuth token, the **Authorize Application** for the **Buildkite MCP Server** page appears. If so, scroll down and select your Buildkite organization in **Authorize for organization**, followed by **Authorize**. 
You're now ready to use Buildkite's remote MCP server through Claude Code for this Buildkite organization. ###### Toolsets and read-only access To enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or [configure read-only access](/docs/apis/mcp-server#read-only-remote-mcp-server), or both, for the remote MCP server with [Claude Code](#claude-code), you can add the following header configurations to this command, for example: ```bash claude mcp add --transport http buildkite-read-only-toolsets https://mcp.buildkite.com/mcp/readonly --header "X-Buildkite-Toolsets: user,pipelines,builds" ``` This example applies read-only access to all toolsets specified in the `X-Buildkite-Toolsets` header. However, you can be selective over which toolsets have read-only access. Learn more about this in [Selective read-only access to toolsets](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-selective-read-only-access-to-toolsets). Alternatively, instead of using headers, you could also [use the URL extension](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-using-a-url-extension) approach by creating multiple MCP server configurations—one for each toolset. Ensure you provide an appropriate name for each MCP server configuration to make it easier to identify which toolsets and level of access each server has to the Buildkite API. ##### Claude Desktop You can configure [Claude Desktop](https://claude.ai/download) with the remote Buildkite MCP server, by creating a custom connector for this MCP server in Claude Desktop. > 📘 > This process assumes you are on an Enterprise or Team plan (with either the Owner or Primary Owner role), or a Pro or Max plan for Claude Desktop. 1. Select **Settings** > **Connectors**. **Note:** If you're on an Enterprise or Team plan, select **Admin settings** > **Connectors** instead. 1. Towards the end of the **Connectors** page, select the **Add custom connector** button. 1. 
In the **Add custom connector** dialog, for the **Name** field, specify **Buildkite**. 1. For the **Remote MCP server URL** field, specify `https://mcp.buildkite.com/mcp`. 1. Select **Add** to complete the configuration. 1. On the **Settings** > **Connectors** page, select the **Connect** button for **Buildkite** to connect to the remote MCP server. **Note:** If you are on the Enterprise or Team plan, to access this **Connect** button, you may need to select the **Your connectors** tab first. If you need a new OAuth token, the **Authorize Application** for the **Buildkite MCP Server** page appears. If so, scroll down and select your Buildkite organization in **Authorize for organization**, followed by **Authorize**. You're now ready to use Buildkite's remote MCP server through Claude Desktop for this Buildkite organization. If you need more assistance with this process, follow Anthropic's guidelines for [Getting Started with Custom Connectors](https://support.anthropic.com/en/articles/11175166-getting-started-with-custom-connectors-using-remote-mcp#h_3d1a65aded). ###### Toolsets and read-only access To enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or [configure read-only access](/docs/apis/mcp-server#read-only-remote-mcp-server), or both, for the remote MCP server with [Claude Desktop](#claude-desktop), follow this [create custom connector procedure](#claude-desktop) by applying the [URL extension approach](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-using-a-url-extension) when enabling the toolset, with the following updates: - For the **Name** field, specify a name that better describes the custom connector. For example, **Buildkite - pipelines toolset** for the `pipelines` toolset. - For the **Remote MCP server URL** field, specify the enabled toolset for the remote MCP server. For example, `https://mcp.buildkite.com/mcp/x/pipelines`. 
**Note:** If you also want to enforce read-only access for the tools in this toolset, append `/readonly` to this URL, for example, `https://mcp.buildkite.com/mcp/x/pipelines/readonly`. Repeat this process for each toolset you want to enable. You'll end up with multiple custom connectors for the Buildkite MCP server, and to use them together, you'll need to connect to each one you want to use during your Claude Desktop sessions. ##### Cursor You can configure [Cursor](https://cursor.com/) with the remote Buildkite MCP server by adding the relevant configuration to your [Cursor's `mcp.json` file](https://docs.cursor.com/en/context/mcp#using-mcpjson), which is usually located in your home directory's `.cursor` sub-directory. 1. From your **Cursor Settings**, select **MCP & Integrations**. 1. Under **MCP Tools**, select **Add Custom MCP** to open the `mcp.json` file. 1. Make the following update to this file; if you already have other MCP servers configured in Cursor, just add the `"buildkite": { ... }` object to this JSON file. ```json { "mcpServers": { "buildkite": { "url": "https://mcp.buildkite.com/mcp" } } } ``` Once connected to the remote MCP server, if you need a new OAuth token, the **Authorize Application** for the **Buildkite MCP Server** page appears. If so, scroll down and select your Buildkite organization in **Authorize for organization**, followed by **Authorize**. You're now ready to use Buildkite's remote MCP server through Cursor for this Buildkite organization. 
###### Toolsets and read-only access To enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or [configure read-only access](/docs/apis/mcp-server#read-only-remote-mcp-server), or both, for the remote MCP server with [Cursor](#cursor), you can add the following headers to this configuration, for example: ```json { "mcpServers": { "buildkite-read-only-toolsets": { "url": "https://mcp.buildkite.com/mcp/readonly", "headers": { "X-Buildkite-Toolsets": "user,pipelines,builds" } } } } ``` This example applies read-only access to all toolsets specified in the `X-Buildkite-Toolsets` header. However, you can be selective over which toolsets have read-only access. Learn more about this in [Selective read-only access to toolsets](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-selective-read-only-access-to-toolsets). Alternatively, instead of using headers, you could also [use the URL extension](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-using-a-url-extension) approach by creating multiple MCP server configurations—one for each toolset. Ensure you provide an appropriate name for each MCP server configuration to make it easier to identify which toolsets and level of access each server has to the Buildkite API. ##### Goose [Goose](https://github.com/aaif-goose/goose) is a local AI tool and agent that can be configured with different [LLM (AI model) providers](https://goose-docs.ai/docs/getting-started/providers). You can configure Goose with the remote Buildkite MCP server by adding the relevant configuration to the `extensions:` section of your [Goose `config.yaml` file](https://goose-docs.ai/docs/getting-started/using-extensions#config-entry). 
```yaml extensions: buildkite: enabled: true type: streamable_http name: buildkite uri: https://mcp.buildkite.com/mcp envs: {} env_keys: [] headers: {} description: '' timeout: 300 bundled: null available_tools: [] ``` Once connected to the remote MCP server, if you need a new OAuth token, the **Authorize Application** for the **Buildkite MCP Server** page appears. If so, scroll down and select your Buildkite organization in **Authorize for organization**, followed by **Authorize**. You're now ready to use Buildkite's remote MCP server through Goose for this Buildkite organization. ###### Toolsets and read-only access To enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or [configure read-only access](/docs/apis/mcp-server#read-only-remote-mcp-server), or both, for the remote MCP server with [Goose](#goose), you can add the following headers to this configuration, for example: ```yaml extensions: buildkitereadonlytoolsets: enabled: true type: streamable_http name: buildkitereadonlytoolsets uri: https://mcp.buildkite.com/mcp/readonly envs: {} env_keys: [] headers: X-Buildkite-Toolsets: user,pipelines,builds description: '' timeout: 300 bundled: null available_tools: [] ``` This example applies read-only access to all toolsets specified in the `X-Buildkite-Toolsets` header. However, you can be selective over which toolsets have read-only access. Learn more about this in [Selective read-only access to toolsets](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-selective-read-only-access-to-toolsets). Alternatively, instead of using headers, you could also [use the URL extension](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-using-a-url-extension) approach by creating multiple MCP server configurations—one for each toolset. Ensure you provide an appropriate name for each MCP server configuration to make it easier to identify which toolsets and level of access each server has to the Buildkite API. 
##### Visual Studio Code You can configure [Visual Studio Code](https://code.visualstudio.com/) with the remote Buildkite MCP server by adding the relevant configuration to your [Visual Studio Code's `mcp.json` file](https://code.visualstudio.com/docs/copilot/customization/mcp-servers#_add-an-mcp-server). ```json { "servers": { "buildkite": { "url": "https://mcp.buildkite.com/mcp", "type": "http" } } } ``` Alternatively, you can initiate this process through the Visual Studio Code interface. To do this: 1. In the [Command Palette](https://code.visualstudio.com/docs/getstarted/getting-started#_access-commands-with-the-command-palette), find and select the **MCP: Add Server** command. 1. Select **HTTP (HTTP or Server-Sent Events)** to start configuring a remote MCP server. 1. For **Enter Server URL**, specify `https://mcp.buildkite.com/mcp`. 1. For **Enter Server ID**, specify `buildkite`. Follow the remaining prompts to complete this configuration process. Once connected to the remote MCP server, if you need a new OAuth token, the **Authorize Application** page for the **Buildkite MCP Server** appears. If so, scroll down and select your Buildkite organization in **Authorize for organization**, followed by **Authorize**. You're now ready to use Buildkite's remote MCP server through Visual Studio Code for this Buildkite organization. 
###### Toolsets and read-only access To enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or [configure read-only access](/docs/apis/mcp-server#read-only-remote-mcp-server), or both, for the remote MCP server with [Visual Studio Code](#visual-studio-code), you can add the following headers to your `mcp.json` configuration file. For example: ```json { "servers": { "buildkite-read-only-toolsets": { "url": "https://mcp.buildkite.com/mcp/readonly", "type": "http", "headers": { "X-Buildkite-Toolsets": "user,pipelines,builds" } } } } ``` This example applies read-only access to all toolsets specified in the `X-Buildkite-Toolsets` header. However, you can be selective about which toolsets have read-only access. Learn more about this in [Selective read-only access to toolsets](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-selective-read-only-access-to-toolsets). Alternatively, instead of using headers, you can take the [URL extension](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-using-a-url-extension) approach by creating multiple MCP server configurations—one for each toolset. Ensure you provide an appropriate name for each MCP server configuration to make it easier to identify which toolsets and level of access each server has to the Buildkite API. ##### Windsurf You can configure [Windsurf](https://windsurf.com/) with the remote Buildkite MCP server by adding the relevant configuration to your [Windsurf's `mcp_config.json` file](https://docs.windsurf.com/windsurf/cascade/mcp#mcp-config-json). ```json { "mcpServers": { "buildkite": { "url": "https://mcp.buildkite.com/mcp" } } } ``` Once connected to the remote MCP server, if you need a new OAuth token, the **Authorize Application** page for the **Buildkite MCP Server** appears. If so, scroll down and select your Buildkite organization in **Authorize for organization**, followed by **Authorize**. 
You're now ready to use Buildkite's remote MCP server through Windsurf for this Buildkite organization. ###### Toolsets and read-only access To enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or [configure read-only access](/docs/apis/mcp-server#read-only-remote-mcp-server), or both, for the remote MCP server with [Windsurf](#windsurf), you can add the following headers to your `mcp_config.json` configuration file. For example: ```json { "mcpServers": { "buildkite-read-only-toolsets": { "url": "https://mcp.buildkite.com/mcp/readonly", "headers": { "X-Buildkite-Toolsets": "user,pipelines,builds" } } } } ``` This example applies read-only access to all toolsets specified in the `X-Buildkite-Toolsets` header. However, you can be selective about which toolsets have read-only access. Learn more about this in [Selective read-only access to toolsets](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-selective-read-only-access-to-toolsets). Alternatively, instead of using headers, you can take the [URL extension](/docs/apis/mcp-server/tools/toolsets#configuring-the-remote-mcp-server-using-a-url-extension) approach by creating multiple MCP server configurations—one for each toolset. Ensure you provide an appropriate name for each MCP server configuration to make it easier to identify which toolsets and level of access each server has to the Buildkite API. --- ### Rate limits URL: https://buildkite.com/docs/apis/mcp-server/remote/rate-limits #### Remote MCP server rate limits Requests to the [Buildkite REST API](/docs/apis/rest-api) made through the [remote Buildkite MCP server](/docs/apis/mcp-server#types-of-mcp-servers-remote-mcp-server) are tracked under a separate rate limit of _50 requests per minute per user_. Unlike your Buildkite organization [REST API rate limit](/docs/apis/rest-api/limits#rate-limits), this remote MCP server limit is scoped to each individual user. 
Requests made through the remote Buildkite MCP server do not count against your Buildkite organization's standard [REST API rate limit](/docs/apis/rest-api/limits#rate-limits) quota. ##### Checking rate limit details The rate limit status is available in the following response headers of each API call. - `RateLimit-Remaining`: The remaining requests that can be made within the current time window. - `RateLimit-Limit`: The current rate limit. - `RateLimit-Reset`: The number of seconds remaining until a new time window starts and limits are reset. - `RateLimit-Scope`: Set to `mcp` for all MCP server requests, identifying the type of rate limit applied. For example, the following response headers from an MCP server request show that 35 of the 50 per-user requests remain in the current window, with 28 seconds before a new time window begins. ```text RateLimit-Remaining: 35 RateLimit-Limit: 50 RateLimit-Reset: 28 RateLimit-Scope: mcp ``` ##### Exceeding the rate limit Once your Buildkite MCP server rate limit is exceeded, API requests made by the MCP server on your behalf fail with a `429` HTTP status code until the rate limit window resets. The window resets every 60 seconds, after which requests succeed again. The `429` response body includes additional context: ```json { "message": "You have exceeded your API rate limit. Please wait 28 seconds before making more requests.", "scope": "mcp", "limit": 50, "current": 55, "reset": 28 } ``` --- ### Installing the server URL: https://buildkite.com/docs/apis/mcp-server/local/installing #### Installing the Buildkite MCP server The Buildkite MCP server is available both [locally and remotely](/docs/apis/mcp-server#types-of-mcp-servers). This page is about installing and configuring the _local_ MCP server, beginning with [Before you start](#before-you-start). 
Once you have installed your local Buildkite MCP server using the relevant instructions on this page, you can proceed to [configure your AI tools or agents](/docs/apis/mcp-server/local/configuring-ai-tools) to work with this MCP server. > 📘 > Buildkite's _remote_ MCP server requires no installation and is available publicly, with authentication and authorization fully managed by OAuth. If you're working directly with an AI tool as opposed to using an AI agent in a workflow (see [Types of MCP servers](/docs/apis/mcp-server#types-of-mcp-servers) for more information), and you'd prefer to use the remote MCP server instead, proceed directly to its [Configuring AI tools](/docs/apis/mcp-server/remote/configuring-ai-tools) page. ##### Before you start To use Buildkite's MCP server locally, you'll need the following: - A Buildkite user account that you can use to sign in to your Buildkite organization. - A [Buildkite API access token](https://buildkite.com/user/api-access-tokens) for this Buildkite user account. Learn more about the scopes required for this token in [Configure an API access token](#configure-an-api-access-token). Specific requirements for each local installation method for the Buildkite MCP server are covered in the relevant [installation sections](#install-and-run-the-server-locally). ##### Configure an API access token This section explains which [scopes](/docs/apis/managing-api-tokens#token-scopes) your local Buildkite MCP server's API access token requires within your Buildkite organization, depending on your particular use case. 
These scopes typically fit into the following categories: - [Minimum access](#configure-an-api-access-token-minimum-access) - [All read-only access](#configure-an-api-access-token-all-read-only-access) - [All read and write access](#configure-an-api-access-token-all-read-and-write-access) ###### Minimum access For minimum access, select the following [scopes](/docs/apis/managing-api-tokens#token-scopes) for your local MCP server's API access token. These scopes provide your token with the minimum required access permissions on the Buildkite MCP server, and prevent access to more sensitive information within your Buildkite organization. These scopes are `read_builds`, `read_pipelines`, and `read_user`. You can also [create a new Buildkite API access token rapidly with these pre-selected scopes](https://buildkite.com/user/api-access-tokens/new?scopes%5B%5D=read_builds&scopes%5B%5D=read_pipelines&scopes%5B%5D=read_user). ###### All read-only access For all read-only access, select the [minimum access permissions](#configure-an-api-access-token-minimum-access) as well as the following additional [scopes](/docs/apis/managing-api-tokens#token-scopes) for your local MCP server's API access token. These scopes provide your token with all read-only access permissions available through the Buildkite MCP server. These additional scopes are `read_clusters`, `read_build_logs`, `read_organizations`, `read_artifacts`, and `read_suites`. They include permission to access more information about your Buildkite organization, including clusters, more pipeline build details (that is, log information), and Test Engine test suite data. You can also [create a new Buildkite API access token rapidly with these pre-selected scopes](https://buildkite.com/user/api-access-tokens/new?scopes%5B%5D=read_clusters&scopes%5B%5D=read_pipelines&scopes%5B%5D=read_builds&scopes%5B%5D=read_build_logs&scopes%5B%5D=read_user&scopes%5B%5D=read_organizations&scopes%5B%5D=read_artifacts&scopes%5B%5D=read_suites). 
###### All read and write access For all read and write access, select both the [minimum access permissions](#configure-an-api-access-token-minimum-access) and [all read-only access permissions](#configure-an-api-access-token-all-read-only-access), as well as the following additional [scopes](/docs/apis/managing-api-tokens#token-scopes) for your local MCP server's API access token. These scopes provide your token with all read _and_ write access permissions available through the Buildkite MCP server. These additional scopes, `write_builds` and `write_pipelines`, include permission to edit pipelines and their builds within your Buildkite organization. You can also [create a new Buildkite API access token rapidly with these pre-selected scopes](https://buildkite.com/user/api-access-tokens/new?scopes%5B%5D=read_clusters&scopes%5B%5D=read_pipelines&scopes%5B%5D=read_builds&scopes%5B%5D=read_build_logs&scopes%5B%5D=read_user&scopes%5B%5D=read_organizations&scopes%5B%5D=read_artifacts&scopes%5B%5D=read_suites&scopes%5B%5D=write_builds&scopes%5B%5D=write_pipelines). ##### Install and run the server locally You can install and run the Buildkite MCP server locally using [Docker](#install-and-run-the-server-locally-using-docker) (recommended), natively as a [pre-built binary](#install-and-run-the-server-locally-using-a-pre-built-binary), or by [building it from source](#install-and-run-the-server-locally-building-from-source). ###### Using Docker To run the Buildkite MCP server locally in Docker: 1. Ensure you have installed and are running [Docker](https://www.docker.com/) version 20.x or later. **Note:** * You can also confirm the minimum required Docker version from the [buildkite-mcp-server's README](https://github.com/buildkite/buildkite-mcp-server/tree/main?tab=readme-ov-file#%EF%B8%8F-prerequisites). 
* These remaining steps are for running the MCP server in Docker from the command line, or if you have installed the [Docker Engine](https://docs.docker.com/engine/install/) only. If you've installed Docker through Docker Desktop, you can follow the more convenient [Docker Desktop instructions](#using-docker-desktop) instead. 1. Open a terminal or command prompt, and run this command to obtain the Buildkite MCP server Docker image. ```bash docker pull buildkite/mcp-server ``` 1. Run the following command to spin up the Buildkite MCP server image in Docker. ```bash docker run --pull=always -q -it --rm -e BUILDKITE_API_TOKEN=<api-access-token> buildkite/mcp-server stdio ``` where `<api-access-token>` is the value of your Buildkite API access token, set with [your required scopes](#configure-an-api-access-token). This token usually begins with the value `bkua_`. ###### Using Docker Desktop If you are using [Docker Desktop](https://www.docker.com/products/docker-desktop/), you can add the Buildkite MCP server to the **MCP Toolkit** area of Docker Desktop. To do so, visit the [Buildkite MCP server](https://hub.docker.com/mcp/server/buildkite/overview) page on [Docker's MCP hub site](https://hub.docker.com/mcp) for MCP servers. This page provides details on which Docker Desktop versions are supported, and a button from which you can add the MCP server directly to your Docker Desktop installation. ###### Using a pre-built binary To run the Buildkite MCP server locally using a pre-built binary, follow these steps, bearing in mind that macOS users can also use the convenient [Homebrew method](#homebrew-method) as an alternative to this procedure: 1. Visit the [buildkite-mcp-server Releases](https://github.com/buildkite/buildkite-mcp-server/releases) page in GitHub. 1. Download the appropriate pre-built binary file for your particular operating system and its architecture. For macOS, choose the appropriate **Darwin** binary for your machine's architecture. 1. 
Extract the binary and execute it to install the Buildkite MCP server locally to your computer. > 📘 > The binary is fully static, and no prerequisite libraries are required. ###### Homebrew method Instead of installing the relevant **Darwin** binary from the [buildkite-mcp-server Releases](https://github.com/buildkite/buildkite-mcp-server/releases) page, you can run this [Homebrew](https://brew.sh/) command to install the Buildkite MCP server locally on macOS: ```bash brew install buildkite/buildkite/buildkite-mcp-server ``` ###### Building from source To build the Buildkite MCP server locally from source, follow these steps: 1. Ensure you have installed [Go](https://go.dev/dl/) version 1.24 or later. **Note:** You can also confirm the minimum required Go version from the [buildkite-mcp-server's README](https://github.com/buildkite/buildkite-mcp-server/tree/main?tab=readme-ov-file#%EF%B8%8F-prerequisites). 1. Run the following command to build the MCP server locally from source. ```bash go install github.com/buildkite/buildkite-mcp-server/cmd/buildkite-mcp-server@latest ``` > 📘 > If you're interested in contributing to the development of the Buildkite MCP server, see the [Contributing section of the README](https://github.com/buildkite/buildkite-mcp-server/tree/main?tab=readme-ov-file#-contributing) and [Development](https://github.com/buildkite/buildkite-mcp-server/blob/main/DEVELOPMENT.md) guide for more information. ##### Using 1Password For enhanced security, you can store your [Buildkite API access token](#configure-an-api-access-token) in [1Password](https://1password.com/) and reference this token using the [1Password command-line interface (CLI)](https://developer.1password.com/docs/cli) instead of exposing it as a plain environment variable. ###### Before you start Ensure you have met the following requirements before continuing with any 1Password configuration. 
- You have [installed the 1Password CLI](https://developer.1password.com/docs/cli/get-started/) and authenticated with it. - Your [API access token](#configure-an-api-access-token) has been stored as an item in 1Password. ###### Accessing the API access token through 1Password Instead of using the `BUILDKITE_API_TOKEN` environment variable or `--api-token` flag, use the `BUILDKITE_API_TOKEN_FROM_1PASSWORD` environment variable or `--api-token-from-1password` flag, respectively, with a 1Password item reference. ###### Example environment variable usage ```bash export BUILDKITE_API_TOKEN_FROM_1PASSWORD="op://Private/Buildkite API Token/credential" buildkite-mcp-server stdio ``` ###### Example CLI flag usage ```bash buildkite-mcp-server stdio --api-token-from-1password="op://Private/Buildkite API Token/credential" ``` > 📘 > The local MCP server will call `op read -n <item-reference>` to fetch the API access token. Ensure your 1Password CLI has been successfully authenticated before starting the server. ##### Self-hosting the MCP server You can [install the Buildkite MCP server](#install-and-run-the-server-locally) as your own self-hosted server, which behaves much like Buildkite's remote MCP server, but operates in your own environment. To do this, use the following command, which runs the MCP server with streamable HTTP transport, and makes the server available through `http://localhost:3000/mcp`: ```bash buildkite-mcp-server http --api-token=${BUILDKITE_API_TOKEN} ``` where `${BUILDKITE_API_TOKEN}` is the value of your [configured Buildkite API access token](#configure-an-api-access-token), set with your required scopes. To run the MCP server with legacy server-sent events (SSE), use this command with the `--use-sse` option. For example: ```bash buildkite-mcp-server http --use-sse --api-token=${BUILDKITE_API_TOKEN} ``` To change the listening address or port on which the MCP server runs, use the `HTTP_LISTEN_ADDR` environment variable. 
For example, to set this port to `4321`: ```bash HTTP_LISTEN_ADDR="localhost:4321" buildkite-mcp-server http --api-token=... ``` To run the MCP server using Docker with streamable HTTP transport and expose the server through port `3000`: ```bash docker run --pull=always -q --rm -e BUILDKITE_API_TOKEN -e HTTP_LISTEN_ADDR=":3000" -p 127.0.0.1:3000:3000 buildkite/mcp-server http ``` With your self-hosted MCP server up and running, you can now [configure your AI tools](/docs/apis/mcp-server/remote/configuring-ai-tools) as you would for Buildkite's remote MCP server, but replacing its URL (`https://mcp.buildkite.com/mcp`) with the URL of your self-hosted MCP server (for example, `http://127.0.0.1:3000/mcp`). Note that the OAuth authentication flow won't be triggered in this case, as your server will be configured to use your own API access token. > 📘 > If you'd like to customize your self-hosted MCP server further, note that the [Buildkite MCP server](https://github.com/buildkite/buildkite-mcp-server) is built on the [mcp-go](https://github.com/mark3labs/mcp-go) library. Consult this library's README and associated documentation for more customization details. --- ### Configuring AI tools URL: https://buildkite.com/docs/apis/mcp-server/local/configuring-ai-tools #### Configuring AI tools with the local MCP server Once you have followed the instructions on [Installing the Buildkite MCP server](/docs/apis/mcp-server/local/installing) to install the MCP server locally for your AI tool or agent, you can then use the instructions on this page to configure your AI tool or agent to work with this [_local_ Buildkite MCP server](/docs/apis/mcp-server#types-of-mcp-servers-local-mcp-server). > 📘 > The Buildkite MCP server is available both [locally and remotely](/docs/apis/mcp-server#types-of-mcp-servers). This page is about configuring AI tools with the local MCP server. 
If you are working directly with an AI tool and would prefer it to use the _remote_ MCP server, proceed with the relevant instructions on its [Configuring AI tools](/docs/apis/mcp-server/remote/configuring-ai-tools) page. All the Docker instructions on this page use the `--pull=always` option to ensure that the latest MCP server version is obtained when the container is started. If you are installing the Buildkite MCP server locally as a binary, then you are responsible for upgrading it manually. For all configuration processes covered on this page, you can alternatively store your [Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token) in [1Password](https://1password.com/), and configure your local MCP server to access this token from 1Password. Learn more about this process in [Using 1Password](/docs/apis/mcp-server/local/installing#using-1password) and [Accessing the API access token through 1Password](#accessing-the-api-access-token-through-1password), towards the end of this page. ##### Amp You can configure your [Amp](https://ampcode.com/) AI tool or agent to work with your local Buildkite MCP server, running [in Docker](#amp-docker) or [as a binary](#amp-binary). To do this, add the relevant configuration to your [Amp `settings.json` file](https://ampcode.com/manual#configuration). ###### Docker When using [Docker](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-docker) to run the MCP server, add the following JSON configuration to your [Amp `settings.json` file](https://ampcode.com/manual#configuration). 
```json { "amp.mcpServers": { "buildkite": { "command": "docker", "args": [ "run", "--pull=always", "-q", "-i", "--rm", "-e", "BUILDKITE_API_TOKEN", "buildkite/mcp-server", "stdio" ], "env": { "BUILDKITE_API_TOKEN": "bkua_xxxxx" } } } } ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the Docker version of the local MCP server. Learn more about how to do this in the [Using Docker](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-docker) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. ###### Binary When using a [pre-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-a-pre-built-binary) or [source-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-building-from-source) binary to run the MCP server, add the following JSON configuration to your [Amp `settings.json` file](https://ampcode.com/manual#configuration). ```json { "amp.mcpServers": { "buildkite": { "command": "buildkite-mcp-server", "args": ["stdio"], "env": { "BUILDKITE_API_TOKEN": "bkua_xxxxx" } } } } ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the binary version of the local MCP server. 
Learn more about how to do this in the [Using the binary](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-the-binary) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. ##### Claude Code You can configure your [Claude Code](https://www.anthropic.com/claude-code) AI tool or agent to work with your local Buildkite MCP server, running [in Docker](#claude-code-docker) or [as a binary](#claude-code-binary). To do this, run the relevant Claude Code command, after [installing Claude Code](https://docs.anthropic.com/en/docs/claude-code/overview). ###### Docker When using [Docker](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-docker) to run the MCP server, run the following Claude Code command. ```bash claude mcp add buildkite -- docker run --pull=always -q --rm -i -e BUILDKITE_API_TOKEN=bkua_xxxxx buildkite/mcp-server stdio ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the Docker version of the local MCP server. Learn more about how to do this in the [Using Docker](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-docker) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. 
###### Binary When using a [pre-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-a-pre-built-binary) or [source-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-building-from-source) binary to run the MCP server, run the following Claude Code command. ```bash claude mcp add buildkite --env BUILDKITE_API_TOKEN=bkua_xxxxx -- buildkite-mcp-server stdio ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the binary version of the local MCP server. Learn more about how to do this in the [Using the binary](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-the-binary) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. ##### Claude Desktop You can configure [Claude Desktop](https://claude.ai/download) to work with your local Buildkite MCP server, running [in Docker](#claude-desktop-docker) or [as a binary](#claude-desktop-binary). To do this, add the relevant configuration to your [Claude Desktop's `claude_desktop_config.json` file](https://modelcontextprotocol.io/quickstart/server#testing-your-server-with-claude-for-desktop). ###### Docker When using [Docker](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-docker) to run the MCP server, add the following configuration to your [Claude Desktop's `claude_desktop_config.json` file](https://modelcontextprotocol.io/quickstart/server#testing-your-server-with-claude-for-desktop), which you can access from Claude Desktop's **Settings** > **Developer** > **Edit Config** button on the **Local MCP servers** page. 
```json { "mcpServers": { "buildkite": { "command": "docker", "args": [ "run", "--pull=always", "-q", "-i", "--rm", "-e", "BUILDKITE_API_TOKEN", "buildkite/mcp-server", "stdio" ], "env": { "BUILDKITE_API_TOKEN": "bkua_xxxxx" } } } } ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the Docker version of the local MCP server. Learn more about how to do this in the [Using Docker](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-docker) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. ###### Binary When using a [pre-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-a-pre-built-binary) or [source-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-building-from-source) binary to run the MCP server, add the following configuration to your [Claude Desktop's `claude_desktop_config.json` file](https://modelcontextprotocol.io/quickstart/server#testing-your-server-with-claude-for-desktop), which you can access from Claude Desktop's **Settings** > **Developer** > **Edit Config** button on the **Local MCP servers** page. ```json { "mcpServers": { "buildkite": { "command": "buildkite-mcp-server", "args": ["stdio"], "env": { "BUILDKITE_API_TOKEN": "bkua_xxxxx" } } } } ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. 
You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the binary version of the local MCP server. Learn more about how to do this in the [Using the binary](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-the-binary) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. ##### Cursor You can configure [Cursor](https://cursor.com/) to work with your local Buildkite MCP server, running [in Docker](#cursor-docker) or [as a binary](#cursor-binary). To do this, add the relevant configuration to your [Cursor's `mcp.json` file](https://docs.cursor.com/en/context/mcp#using-mcpjson), which is usually located in your home directory's `.cursor` sub-directory. To access the `mcp.json` file through the Cursor app to implement this configuration: 1. From your **Cursor Settings**, select **MCP & Integrations**. 1. Under **MCP Tools**, select **Add Custom MCP** to open the `mcp.json` file. 1. Make one of the following updates to this file. If you already have other MCP servers configured in Cursor, just add the `"buildkite": { ... }` object to this JSON file. ###### Docker When using [Docker](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-docker) to run the MCP server, add the following JSON configuration to your [Cursor `mcp.json` file](https://docs.cursor.com/en/context/mcp#using-mcpjson). ```json { "mcpServers": { "buildkite": { "command": "docker", "args": [ "run", "--pull=always", "-q", "-i", "--rm", "-e", "BUILDKITE_API_TOKEN", "buildkite/mcp-server", "stdio" ], "env": { "BUILDKITE_API_TOKEN": "bkua_xxxxx" } } } } ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. 
You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the Docker version of the local MCP server. Learn more about how to do this in the [Using Docker](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-docker) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. ###### Binary When using a [pre-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-a-pre-built-binary) or [source-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-building-from-source) binary to run the MCP server, add the following JSON configuration to your [Cursor `mcp.json` file](https://docs.cursor.com/en/context/mcp#using-mcpjson). ```json { "mcpServers": { "buildkite": { "command": "buildkite-mcp-server", "args": ["stdio"], "env": { "BUILDKITE_API_TOKEN": "bkua_xxxxx" } } } } ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the binary version of the local MCP server. Learn more about how to do this in the [Using the binary](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-the-binary) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. ##### Goose You can configure your [Goose](https://github.com/aaif-goose/goose) AI tool or agent to work with your local Buildkite MCP server, running [in Docker](#goose-docker) or [as a binary](#goose-binary). 
To do this, add the relevant configuration to the `extensions:` section of your [Goose `config.yaml` file](https://block.github.io/goose/docs/getting-started/using-extensions#config-entry). ###### Docker When using [Docker](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-docker) to run the MCP server, add the following YAML configuration to the `extensions:` section of your [Goose `config.yaml` file](https://block.github.io/goose/docs/getting-started/using-extensions#config-entry). ```yaml extensions: fetch: name: Buildkite cmd: docker args: ["run", "--pull=always", "-q", "-i", "--rm", "-e", "BUILDKITE_API_TOKEN", "buildkite/mcp-server", "stdio"] enabled: true envs: { "BUILDKITE_API_TOKEN": "bkua_xxxxx" } type: stdio timeout: 300 ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the Docker version of the local MCP server. Learn more about how to do this in the [Using Docker](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-docker) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. ###### Binary When using a [pre-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-a-pre-built-binary) or [source-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-building-from-source) binary to run the MCP server, add the following YAML configuration to the `extensions:` section of your [Goose `config.yaml` file](https://block.github.io/goose/docs/getting-started/using-extensions#config-entry). 
```yaml extensions: fetch: name: Buildkite cmd: buildkite-mcp-server args: [stdio] enabled: true envs: { "BUILDKITE_API_TOKEN": "bkua_xxxxx" } type: stdio timeout: 300 ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the binary version of the local MCP server. Learn more about how to do this in the [Using the binary](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-the-binary) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. ##### Visual Studio Code You can configure [Visual Studio Code](https://code.visualstudio.com/) to work with your local Buildkite MCP server, running [in Docker](#visual-studio-code-docker) or [as a binary](#visual-studio-code-binary). To do this, add the relevant configuration to your [Visual Studio Code's `mcp.json` file](https://code.visualstudio.com/docs/copilot/customization/mcp-servers#_add-an-mcp-server). ###### Docker When using [Docker](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-docker) to run the MCP server, add the following JSON configuration to your [Visual Studio Code's `mcp.json` file](https://code.visualstudio.com/docs/copilot/customization/mcp-servers#_add-an-mcp-server). 
```json { "inputs": [ { "id": "BUILDKITE_API_TOKEN", "type": "promptString", "description": "Enter your Buildkite API access token", "password": true } ], "servers": { "buildkite": { "command": "docker", "args": [ "run", "--pull=always", "-q", "-i", "--rm", "-e", "BUILDKITE_API_TOKEN", "buildkite/mcp-server", "stdio" ], "env": { "BUILDKITE_API_TOKEN": "${input:BUILDKITE_API_TOKEN}" } } } } ``` where Visual Studio Code prompts you for the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes, when the server starts. You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the Docker version of the local MCP server. Learn more about how to do this in the [Using Docker](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-docker) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. Alternatively, you can initiate this process through the Visual Studio Code interface. To do this: 1. In the [Command Palette](https://code.visualstudio.com/docs/getstarted/getting-started#_access-commands-with-the-command-palette), find and select the **MCP: Add Server** command. 1. Select **Docker image** to start configuring your local MCP server running in Docker. 1. For **Enter Docker Image Name**, specify `buildkite/mcp-server`, and **Allow** it to be installed. 1. For **Enter your Buildkite API Access Token**, enter your configured Buildkite API access token. 1. For **Enter Server ID**, specify `buildkite`. Follow the remaining prompts to complete this configuration process. 
###### Binary When using a [pre-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-a-pre-built-binary) or [source-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-building-from-source) binary to run the MCP server, add the following JSON configuration to your [Visual Studio Code's `mcp.json` file](https://code.visualstudio.com/docs/copilot/customization/mcp-servers#_add-an-mcp-server). ```json { "inputs": [ { "id": "BUILDKITE_API_TOKEN", "type": "promptString", "description": "Enter your Buildkite API access token", "password": true } ], "servers": { "buildkite": { "command": "buildkite-mcp-server", "args": ["stdio"], "env": { "BUILDKITE_API_TOKEN": "${input:BUILDKITE_API_TOKEN}" } } } } ``` where Visual Studio Code prompts you for the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes, when the server starts. You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the binary version of the local MCP server. Learn more about how to do this in the [Using the binary](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-the-binary) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. ##### Windsurf You can configure [Windsurf](https://windsurf.com/) to work with your local Buildkite MCP server, running [in Docker](#windsurf-docker) or [as a binary](#windsurf-binary). To do this, add the relevant configuration to your [Windsurf's `mcp_config.json` file](https://docs.windsurf.com/windsurf/cascade/mcp#mcp-config-json). 
###### Docker When using [Docker](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-docker) to run the MCP server, add the following JSON configuration to your [Windsurf's `mcp_config.json` file](https://docs.windsurf.com/windsurf/cascade/mcp#mcp-config-json). ```json { "mcpServers": { "buildkite": { "command": "docker", "args": [ "run", "--pull=always", "-q", "-i", "--rm", "-e", "BUILDKITE_API_TOKEN", "buildkite/mcp-server", "stdio" ], "env": { "BUILDKITE_API_TOKEN": "bkua_xxxxx" } } } } ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the Docker version of the local MCP server. Learn more about how to do this in the [Using Docker](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-docker) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. ###### Binary When using a [pre-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-a-pre-built-binary) or [source-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-building-from-source) binary to run the MCP server, add the following JSON configuration to your [Windsurf's `mcp_config.json` file](https://docs.windsurf.com/windsurf/cascade/mcp#mcp-config-json). ```json { "mcpServers": { "buildkite": { "command": "buildkite-mcp-server", "args": ["stdio"], "env": { "BUILDKITE_API_TOKEN": "bkua_xxxxx" } } } } ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. 
You can amend this configuration to enable [toolsets](/docs/apis/mcp-server/tools/toolsets) or configure read-only access, or both, for the binary version of the local MCP server. Learn more about how to do this in the [Using the binary](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server-using-the-binary) section of [Configuring the local MCP server](/docs/apis/mcp-server/tools/toolsets#configuring-the-local-mcp-server) on the [Toolsets](/docs/apis/mcp-server/tools/toolsets) page. ##### Zed You can configure the [Zed](https://zed.dev/) code editor with the Buildkite MCP server as a locally running binary using the Zed Buildkite MCP extension. To add the Buildkite MCP server extension to Zed: 1. Visit Zed's [Buildkite MCP server extension](https://zed.dev/extensions/mcp-server-buildkite) page. 1. Select the **Install MCP Server in Zed** button on this web page to open the **Extensions** window in Zed. 1. In the **Extensions** window, ensure the **Buildkite MCP** extension is shown and select its **Install** button. 1. In the **Configure mcp-server-buildkite** dialog, copy your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token) and paste this over the `BUILDKITE_API_TOKEN` value. 1. Select **Configure Server** to save the changes. Your configuration should be saved to [Zed's main `settings.json` file](https://zed.dev/docs/reference/all-settings), which is usually located within your home directory's `.config/zed/` folder. Alternatively, you can copy and paste the following configuration as a new entry to [Zed's main `settings.json` file](https://zed.dev/docs/reference/all-settings). If you had previously configured an MCP server in Zed, add just the `"mcp-server-buildkite"` object within the existing `"context_servers"` object of this file. 
```json { "context_servers": { "mcp-server-buildkite": { "settings": { "buildkite_api_token": "bkua_xxxxx" } } } } ``` where `bkua_xxxxx` is the value of your [configured Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token), set with your required scopes. ##### ToolHive [ToolHive](https://toolhive.dev/) is a tool that keeps the API access token handling processes for your local Buildkite MCP server separate from your other AI tool infrastructure and the Buildkite platform. You can configure ToolHive to run your local Buildkite MCP server from its registry using ToolHive's command line interface (CLI) tool. To do this, ensure you have installed ToolHive's [CLI tool](https://toolhive.dev/download) and do the following: 1. Use ToolHive's `thv secret set` command to store your [Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token) as a secret. ```bash cat ~/path/to/your/buildkite-api-token.txt | thv secret set buildkite-api-key ``` where `buildkite-api-token.txt` contains the value of your Buildkite API access token. 1. Run the Buildkite MCP server. ```bash thv run --secret buildkite-api-key,target=BUILDKITE_API_TOKEN buildkite ``` You can also configure ToolHive to run your local Buildkite MCP server from its registry using the ToolHive interface. To do this, ensure you have installed ToolHive's [Desktop app](https://toolhive.dev/download) and do the following: 1. Access [ToolHive's **Secrets** page](https://docs.stacklok.com/toolhive/guides-ui/secrets-management#manage-secrets). 1. Add a new secret with the following values: * **Secret name**: `buildkite-api-key` * **Secret value**: Your [Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token)'s value. 1. Access [ToolHive's **Registry** page](https://docs.stacklok.com/toolhive/guides-ui/run-mcp-servers). 1. 
Search for `buildkite` and then select the filtered **buildkite** registry option. 1. Select **Install server** and on the **Configure buildkite** dialog's **Configuration** tab, specify the following values: * **Secrets**: Select `buildkite-api-key`. * **Environment variables** (_optional_): Specify the threshold for logging tokens. Omitting this field sets its value to 0, which means that no tokens are logged. ##### Accessing the API access token through 1Password For enhanced security, you can store your [Buildkite API access token](/docs/apis/mcp-server/local/installing#configure-an-api-access-token) in [1Password](https://1password.com/) and reference this token using the [1Password command-line interface (CLI)](https://developer.1password.com/docs/cli) instead of exposing it as a plain environment variable. Learn more about setting up this process in [Using 1Password](/docs/apis/mcp-server/local/installing#using-1password). If you are using [Docker](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-docker) to run the local MCP server, configure the `docker run` command's environment variable for 1Password with both an `args` array and `env` object in the local MCP server's JSON configuration file. For example: ```json { ... "buildkite-1password-stored-token": { "command": "docker", "args": [ "run", "--pull=always", "-q", "-i", "--rm", "-e", "BUILDKITE_API_TOKEN_FROM_1PASSWORD", "buildkite/mcp-server", "stdio" ], "env": { "BUILDKITE_API_TOKEN_FROM_1PASSWORD": "op://Private/Buildkite API Token/credential" } } ... 
} ``` If you are using a [pre-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-using-a-pre-built-binary) or [source-built](/docs/apis/mcp-server/local/installing#install-and-run-the-server-locally-building-from-source) binary to run the local MCP server, configure the `buildkite-mcp-server` command's environment variable for 1Password with an `env` object in the local MCP server's JSON configuration file. For example: ```json { ... "buildkite-1password-stored-token": { "command": "buildkite-mcp-server", "args": ["stdio"], "env": { "BUILDKITE_API_TOKEN_FROM_1PASSWORD": "op://Private/Buildkite API Token/credential" } } ... } ``` --- ### Overview URL: https://buildkite.com/docs/apis/webhooks #### Webhooks overview Buildkite webhooks send JSON payloads through HTTP requests to specific URL endpoints of third-party applications, which let these applications react to activities on the Buildkite platform as they happen. Common use cases for implementing Buildkite webhooks include: - Generating chat alerts in Slack that aren't covered by the [Slack Workspace](/docs/platform/integrations/slack-workspace) and [Slack](/docs/pipelines/integrations/notifications/slack) notification service integrations, as well as in other chat applications like Microsoft Teams. - Automating infrastructure, such as scaling agents. - Allowing your third-party applications to: * Ingest analytics or data on specific activities from the Buildkite platform. * Display custom dashboards on data from the Buildkite platform. Buildkite provides webhook support for [Pipelines](/docs/apis/webhooks/pipelines), [Test Engine](/docs/apis/webhooks/test-engine), and [Package Registries](/docs/apis/webhooks/package-registries). Buildkite also provides support for _incoming webhooks_ from third-party applications that send HTTP requests and their own JSON payloads to the Buildkite platform. 
Learn more about this preview feature in [Pipeline triggers](/docs/apis/webhooks/incoming/pipeline-triggers). ##### Creating webhooks Learn more about how to add Buildkite webhooks from the **Add a webhook** procedures for [Pipelines](/docs/apis/webhooks/pipelines#add-a-webhook) and [Package Registries](/docs/apis/webhooks/package-registries#add-a-webhook). Request headers for [Pipelines](/docs/apis/webhooks/pipelines#http-headers) and [Package Registries](/docs/apis/webhooks/package-registries#http-headers) webhooks are also provided to allow the authenticity of these webhook events to be verified. For Test Engine, see [Test Engine webhooks](/docs/apis/webhooks/test-engine) for details on how its webhooks are created. Also learn more about how to create pipeline triggers (a type of incoming webhook) from the [Create a new pipeline trigger](/docs/apis/webhooks/incoming/pipeline-triggers#create-a-new-pipeline-trigger) procedure. ##### Event families ###### Pipelines Buildkite Pipelines supports the following categories of webhook events, which are summarized in the [Events](/docs/apis/webhooks/pipelines#events) section of the [Pipelines webhooks](/docs/apis/webhooks/pipelines) overview page. | Event family | Description | |--------------|-------------| | [Build events](/docs/apis/webhooks/pipelines/build-events) | A pipeline build starts, fails, finishes, is scheduled, or is skipped. | | [Job events](/docs/apis/webhooks/pipelines/job-events) | A pipeline's job runs, finishes, is in a scheduled state, or is activated. | | [Agent events](/docs/apis/webhooks/pipelines/agent-events) | A Buildkite agent connects, disconnects, stops, is lost, or gets blocked. | | [Ping events](/docs/apis/webhooks/pipelines/ping-events) | A webhook's notification configuration has changed. | | [Agent-token events](/docs/apis/webhooks/pipelines/agent-token-events) | An agent token's registration has failed. 
| | [Integrations](/docs/apis/webhooks/pipelines/integrations) | Buildkite Pipeline events related to third-party application integrations. | | [Pipeline triggers](/docs/apis/webhooks/incoming/pipeline-triggers) | Create configurable incoming webhooks from third-party applications that trigger a Buildkite pipeline build. | ###### Test Engine Buildkite Test Engine supports webhook events relating to a [monitor on a test suite's workflow triggering an alarm or recover action](/docs/apis/webhooks/test-engine). ###### Package Registries Buildkite Package Registries supports webhook events when [packages are published](/docs/apis/webhooks/package-registries). ##### Security best practices When configuring your third-party applications to receive Buildkite webhook events, ensure the following security measures are implemented: - Serve your endpoint over TLS. - Restrict accepted IP ranges for Buildkite webhooks to Buildkite's outgoing addresses. - Ensure your endpoints only accept JSON payloads from Buildkite webhooks. ##### See also - [REST API overview](/docs/apis/rest-api) - [GraphQL API overview](/docs/apis/graphql-api) - [Amazon EventBridge integration](/docs/pipelines/integrations/observability/amazon-eventbridge) --- ### Overview URL: https://buildkite.com/docs/apis/webhooks/pipelines #### Pipelines webhooks Pipelines webhooks allow you to monitor and respond to events within your Buildkite organization, providing a real time view of activity and allowing you to extend and integrate Buildkite into your systems. > 📘 > This page is about configuring _outgoing webhooks_ for Buildkite Pipelines. To learn more about the Buildkite platform's _incoming webhooks_ feature, see [Pipeline triggers](/docs/apis/webhooks/incoming/pipeline-triggers). ##### Add a webhook To add a webhook for your pipeline event: 1. Select **Settings** in the global navigation > **Notification Services** to access the [**Notification Services** page](https://buildkite.com/organizations/-/services). 
1. Select the **Add** button on **Webhook**. 1. Specify your webhook's **Description** and **Webhook URL**. 1. If you are using self-signed certificates for your webhooks, clear the **Verify TLS Certificates** checkbox. 1. To allow the authenticity of your Pipeline webhook events to be verified, configure your webhook's **Token** value to be sent either as a plain text [`X-Buildkite-Token`](#webhook-token) value or an HMAC-signed [`X-Buildkite-Signature`](#webhook-signature) in the request [header](#http-headers), bearing in mind that the latter provides the more secure verification method. 1. Select one or more of the listed [**Events**](#events) that will trigger this webhook, which include the following categories of webhooks: * [Build events](/docs/apis/webhooks/pipelines/build-events) * [Job events](/docs/apis/webhooks/pipelines/job-events) * [Agent events](/docs/apis/webhooks/pipelines/agent-events) * [Ping](/docs/apis/webhooks/pipelines/ping-events) and [agent token](/docs/apis/webhooks/pipelines/agent-token-events) events * Other events associated with [third-party application integrations](/docs/apis/webhooks/pipelines/integrations). 1. Select the **Pipelines** that this webhook will trigger: * **All Pipelines**. * **Only Some pipelines**, where you can select specific pipelines in your Buildkite organization. * **Pipelines in Teams**, where you can select pipelines accessible to specific teams configured in your Buildkite organization. * **Pipelines in Clusters**, where you can select pipelines associated with specific Buildkite clusters. 1. In the **Branch filtering** field, specify the branches that will trigger this webhook. 
You can leave this field empty to allow all branches to trigger the webhook, or select a subset of branches you would like to trigger it, based on [branch configuration](/docs/pipelines/configure/workflows/branch-configuration) and [pattern examples](/docs/pipelines/configure/workflows/branch-configuration#branch-pattern-examples). 1. Select the **Add Webhook Notification** button to save these changes and add the webhook. ##### Events You can subscribe to one or more of the following events: | Event | Description | `ping` | Webhook notification settings have changed | `build.scheduled` | A build has been scheduled | `build.running` | A build has started running | `build.failing` | A build is failing | `build.finished` | A build has finished | `build.skipped` | A build has been skipped | `job.scheduled` | A job has been scheduled | `job.started` | A command step job has started running on an agent | `job.finished` | A job has finished | `job.activated` | A block step job has been unblocked using the web or API | `agent.connected` | An agent has connected to the API | `agent.lost` | An agent has been marked as lost. This happens when Buildkite stops receiving pings from the agent | `agent.disconnected` | An agent has disconnected. This happens when the agent shuts down and disconnects from the API | `agent.stopping` | An agent is stopping. This happens when an agent is instructed to stop from the API. It first transitions to stopping and finishes any current jobs | `agent.stopped` | An agent has stopped. This happens when an agent is instructed to stop from the API. It can be graceful or forceful | `agent.blocked` | An agent has been blocked. 
This happens when an agent's IP address is no longer included in the agent token's [allowed IP addresses](/docs/pipelines/security/clusters/manage#restrict-an-agent-tokens-access-by-ip-address) | `cluster_token.registration_blocked` | An attempted agent registration has been blocked because the request IP address is not included in the agent token's [allowed IP addresses](/docs/pipelines/security/clusters/manage#restrict-an-agent-tokens-access-by-ip-address) ##### HTTP headers The following HTTP headers are present in every webhook request, which allow you to identify the event that took place, and to verify the authenticity of the request: | `X-Buildkite-Event` | The type of event. _Example:_ `build.scheduled` One of either the [token](#webhook-token) or [signature](#webhook-signature) headers will be present in every webhook request. The token value and header setting can be found under **Token** in your **Webhook Notification** service. Your selection in the **Webhook Notification** service will determine which is sent: | `X-Buildkite-Token` | The webhook's [token](#webhook-token). _Example:_ `309c9c842g8565adecpd7469x6005989` | `X-Buildkite-Signature` | The [signature](#webhook-signature) created from your webhook payload, webhook token, and the SHA-256 hash function. _Example:_ `timestamp=1619071700,signature=30222eb518dc3fb61ec9e64dd78d163f62cb134a6ldb768f1d40e0edbn6e43f0` ##### HTTP request body Each event's data is sent JSON encoded in the request body. See each event's documentation ([agent](/docs/apis/webhooks/pipelines/agent-events), [build](/docs/apis/webhooks/pipelines/build-events#request-body-data), [job](/docs/apis/webhooks/pipelines/job-events), [ping](/docs/apis/webhooks/pipelines/ping-events)) to see which keys are available in the payload. 
For example: ```json { "event": "build.running", "build": { "keys": "vals" }, "sender": { "keys": "vals" } } ``` > 🚧 Fast transitions and webhooks > Note that if a build transitions between states very quickly, for example from blocked (`finished`) to unblocked (`running`), the webhook may be in a different state from the actual build. This is a known limitation of webhooks, in that they may represent a later version of the object than the one that triggered the event. ##### Webhook token By default, Buildkite will send a token with each webhook in the `X-Buildkite-Token` header. The token value and header setting can be found under **Token** in your **Webhook Notification** service. The token is passed in clear text. ##### Webhook signature Buildkite can optionally send an HMAC signature in place of a webhook token. The `X-Buildkite-Signature` header contains a timestamp and an HMAC signature. The timestamp is prefixed by `timestamp=` and the signature is prefixed by `signature=`. Buildkite generates the signature using HMAC-SHA256; a hash-based message authentication code [HMAC](https://en.wikipedia.org/wiki/HMAC) used with the [SHA-256](https://en.wikipedia.org/wiki/SHA-2) hash function and a secret key. The webhook token value is used as the secret key. The timestamp is an integer representation of a UTC timestamp. The raw request body is the signed message. > 📘 What the timestamp represents > The timestamp in the `X-Buildkite-Signature` header indicates the time when the webhook HTTP request was dispatched, not the time when the underlying event occurred. Its purpose is replay-attack prevention. The timestamp is included in the HMAC so stale webhooks can be rejected. > For accurate event timing, use the timestamps in the webhook payload instead, such as `build.created_at`, `build.started_at`, or `build.finished_at` for builds, and `job.started_at` or `job.finished_at` for jobs. 
To measure end-to-end delivery latency, compare the relevant payload timestamp (for example, `build.finished_at`) against your own receipt time. The token value and header setting can be found under **Token** in your **Webhook Notification** service. ###### Verifying HMAC signatures When using HMAC signatures, you'll want to verify that the signature is legitimate. Using the token as the secret along with the timestamp from the webhook, compute the expected signature based on the raw request body. There should be a library available in the programming language you are using that can perform this operation. Compare the computed signature to the signature received in the webhook. If they do not match, your payload has been altered. The below example in Ruby verifies the signature and timestamp using the OpenSSL gem's HMAC: ```ruby require 'openssl' require 'rack' class BuildkiteWebhook def self.valid?(webhook_request_body, header, secret) timestamp, signature = get_timestamp_and_signatures(header) expected = OpenSSL::HMAC.hexdigest("sha256", secret, "#{timestamp}.#{webhook_request_body}") Rack::Utils.secure_compare(expected, signature) end def self.get_timestamp_and_signatures(header) parts = header.split(",").map { |kv| kv.split("=", 2).map(&:strip) }.to_h [parts["timestamp"], parts["signature"]] end end BuildkiteWebhook.valid?( request.body.read, request.headers["X-Buildkite-Signature"], ENV["BUILDKITE_WEBHOOK_SECRET"] ) ``` ###### Defending against replay attacks A [replay attack](https://en.wikipedia.org/wiki/Replay_attack) is when an attacker intercepts a valid payload and its signature, then re-transmits them. One way to help mitigate such attacks is to send a timestamp with your payload and only accept them within a short window (for example, 5 minutes). Buildkite sends a timestamp in the `X-Buildkite-Signature` header. The timestamp is part of the signed payload so that it is verified by the signature. 
An attacker will not be able to change the timestamp without invalidating the signature. To help protect against a replay attack, upon receipt of a webhook: 1. Verify the signature 1. Check the timestamp against the current time If the webhook's timestamp is within your chosen window of the current time, it can reasonably be assumed to be the original webhook. ##### Example implementations The following example repositories show how to receive a webhook event and trigger a LIFX powered build light. You can browse their source, fork them, and deploy them to Heroku directly from their GitHub readmes, or use them as an example to implement webhooks in your tool of choice. [:node: Node webhook example application github.com/buildkite/lifx-buildkite-build-light-node](https://github.com/buildkite/lifx-buildkite-build-light-node) [:ruby: Ruby webhook example application github.com/buildkite/lifx-buildkite-build-light-ruby](https://github.com/buildkite/lifx-buildkite-build-light-ruby) [:php: PHP webhook example application github.com/buildkite/lifx-buildkite-build-light-php](https://github.com/buildkite/lifx-buildkite-build-light-php) [:node: Webtask.io webhook example application github.com/buildkite/lifx-buildkite-build-light-webtask](https://github.com/buildkite/lifx-buildkite-build-light-webtask) ##### Request logs The last 20 webhook requests and responses are saved, so you can debug and inspect your webhook. Each webhook's request logs are available at the bottom of their settings page. 
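The signature verification and replay-window steps described above can be combined into a single guard. The following Ruby sketch is illustrative only: the `fresh_and_valid?` helper name and the 300-second window are assumptions, not part of Buildkite's API, and it uses `OpenSSL.secure_compare` instead of `Rack::Utils.secure_compare` to avoid a Rack dependency.

```ruby
require "openssl"

# Sketch only: verify an X-Buildkite-Signature header, then reject
# deliveries whose timestamp falls outside an acceptance window.
REPLAY_WINDOW_SECONDS = 300 # illustrative 5-minute window

def fresh_and_valid?(request_body, signature_header, secret, now: Time.now.to_i)
  # Parse "timestamp=...,signature=..." into a hash.
  parts = signature_header.split(",").map { |kv| kv.split("=", 2).map(&:strip) }.to_h
  timestamp, signature = parts["timestamp"], parts["signature"]
  return false unless timestamp && signature

  # The signed message is "<timestamp>.<raw request body>".
  expected = OpenSSL::HMAC.hexdigest("sha256", secret, "#{timestamp}.#{request_body}")
  return false unless OpenSSL.secure_compare(expected, signature)

  # Only accept webhooks dispatched within the replay window.
  (now - timestamp.to_i).abs <= REPLAY_WINDOW_SECONDS
end
```

Because the timestamp is part of the HMAC input, a stale-but-valid delivery fails only the window check, while a tampered body or header fails the signature check.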
--- ### Build events URL: https://buildkite.com/docs/apis/webhooks/pipelines/build-events #### Build webhook events ##### Events | Event | Description | `build.scheduled` | A build has been scheduled | `build.running` | A build has started running | `build.failing` | A build is failing | `build.finished` | A build has finished | `build.skipped` | A build has been skipped ##### Request body data | Property | Type | Description | `build` | [Build](/docs/apis/rest-api/builds) | The build this notification relates to | `pipeline` | [Pipeline](/docs/apis/rest-api/pipelines) | The pipeline this notification relates to | `sender` | Object | The user who created the webhook Example request body: ```json { "event": "build.scheduled", "build": { "...": "..." }, "pipeline": { "...": "..." }, "sender": { "id": "8a7693f8-dbae-4783-9137-84090fce9045", "name": "Some Person" } } ``` > 📘 Job data not included > When using webhooks, the build object does not contain job data (as returned by calls to the [Build API](/docs/apis/rest-api/builds) of Buildkite's REST API). Learn more about obtaining job data from Buildkite Pipelines using webhooks in [Job events](/docs/apis/webhooks/pipelines/job-events). ##### Finding out if a build is blocked If a build is blocked, look for `blocked: true` in the `build.finished` event. Example request body for a blocked build: ```json { "event": "build.finished", "build": { "...": "...", "blocked": true, "...": "..." }, "pipeline": { "...": "..." }, "sender": { "id": "0adfbc27-5f72-4a91-bf61-5693da0dd9c5", "name": "Some Person" } } ``` > 📘 To determine if an EventBridge notification is blocked > To determine if an EventBridge notification is blocked, look for `"state": "blocked"`, like in this [sample EventBridge request](/docs/pipelines/integrations/observability/amazon-eventbridge#events-build-blocked). 
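As a sketch of the blocked-build check above (the `classify_build_event` handler name is illustrative, not part of Buildkite's API), a receiver can treat a `build.finished` payload carrying `"blocked": true` as a paused build rather than a completed one:

```ruby
require "json"

# Illustrative handler: classify a webhook request body. A build.finished
# event with "blocked": true is waiting on a block step, not genuinely done.
def classify_build_event(request_body)
  payload = JSON.parse(request_body)
  return :not_build_finished unless payload["event"] == "build.finished"

  payload.dig("build", "blocked") ? :blocked : :finished
end
```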
##### Trigger steps in build events When a build contains trigger steps, the `build.finished` webhook will include the `async` field in the step configuration. Example `build.finished` request body with trigger step: ```json { "event": "build.finished", "build": { "steps": [ { "type": "trigger", "async": true, "...": "..." } ], "...": "..." }, "pipeline": { "...": "..." }, "sender": { "id": "8a7693f8-dbae-4783-9137-84090fce9045", "name": "Some Person" } } ``` The `async` field indicates: - `true`: The trigger step continues immediately, regardless of the triggered build's success. - `false`: The trigger step waits for the triggered build to complete before continuing. --- ### Job events URL: https://buildkite.com/docs/apis/webhooks/pipelines/job-events #### Job webhook events ##### Events | Event | Description | `job.scheduled` | A command step job is in a scheduled state and is waiting to run on an agent | `job.started` | A command step job has started running on an agent | `job.finished` | A job has finished | `job.activated` | A block step job has been unblocked using the web or API ##### Request body data | Property | Type | Description | `job` | [Job](/docs/apis/rest-api/jobs) | The job this notification relates to | `build` | [Build](/docs/apis/rest-api/builds) | The build this notification relates to | `pipeline` | [Pipeline](/docs/apis/rest-api/pipelines) | The pipeline this notification relates to | `sender` | Object | The user who created the webhook Example request body: ```json { "event": "job.started", "job": { "...": "..." }, "build": { "...": "..." }, "pipeline": { "...": "..." }, "sender": { "id": "8a7693f8-dbae-4783-9137-84090fce9045", "name": "Some Person" } } ``` ##### Trigger job events When a trigger step in the parent pipeline finishes, the `job.finished` webhook will include an `async` field that shows whether the step runs asynchronously.
Example `job.finished` request body for a trigger job: ```json { "event": "job.finished", "job": { "id": "...", "type": "trigger", "name": "...", "state": "...", "async": true, "...": "..." }, "build": { "...": "..." }, "pipeline": { "...": "..." }, "sender": { "id": "8a7693f8-dbae-4783-9137-84090fce9045", "name": "Some Person" } } ``` The `async` field indicates: - `true`: The trigger step continues immediately, regardless of the triggered build's success. - `false`: The trigger step waits for the triggered build to complete before continuing. --- ### Agent events URL: https://buildkite.com/docs/apis/webhooks/pipelines/agent-events #### Agent webhook events ##### Events | Event | Description | `agent.connected` | An agent has connected to the API | `agent.lost` | An agent has been marked as lost. This happens when Buildkite stops receiving pings from the agent | `agent.disconnected` | An agent has disconnected. This happens when the agent shuts down and disconnects from the API | `agent.stopping` | An agent is stopping. This happens when an agent is instructed to stop from the API. It first transitions to stopping and finishes any current jobs | `agent.stopped` | An agent has stopped. This happens when an agent is instructed to stop from the API. It can be graceful or forceful | `agent.blocked` | An agent has been blocked. This happens when an agent's IP address is no longer included in the agent token's [allowed IP addresses](/docs/pipelines/security/clusters/manage#restrict-an-agent-tokens-access-by-ip-address) ##### Common event data The following properties are sent by all events. | Property | Type | Description | `agent` | [Agent](/docs/apis/rest-api/agents) | The agent this notification relates to | `sender` | Object | The user who created the webhook Example request body: ```json { "event": "agent.connected", "agent": { "...": "..."
}, "sender": { "id": "8a7693f8-dbae-4783-9137-84090fce9045", "name": "Some Person" } } ``` ##### Agent blocked event data The following properties are sent by the `agent.blocked` event. | Property | Type | Description | `blocked_ip` | String | The blocked request IP address | `agent` | [Agent](/docs/apis/rest-api/agents) | The agent this notification relates to | `cluster_token` | [Agent token](/docs/apis/rest-api/clusters/agent-tokens#token-data-model) | The agent token used in the registration attempt | `sender` | Object | The user who created the webhook Example request body: ```json { "event": "agent.blocked", "blocked_ip": "202.188.43.20", "agent": { "...": "..." }, "cluster_token": { "...": "..." }, "sender": { "id": "8a7693f8-dbae-4783-9137-84090fce9045", "name": "Some Person" } } ``` --- ### Ping events URL: https://buildkite.com/docs/apis/webhooks/pipelines/ping-events #### Ping events ##### Events | Event | Description | `ping` | Webhook notification settings have changed ##### Request body data | Property | Description | `service` | The notification service that sent this webhook | `organization` | The [Organization](/docs/apis/rest-api/organizations) this notification belongs to | `sender` | The user who created the webhook Example request body: ```json { "event": "ping", "service": { "id": "49801950-1df0-474f-bb56-ad6a930c5cb9", "provider": "webhook", "settings": { "url": "https://server.com/webhook" } }, "organization": { "...": "..."
}, "sender": { "id": "8a7693f8-dbae-4783-9137-84090fce9045", "name": "Some Person" } } ``` --- ### Agent token events URL: https://buildkite.com/docs/apis/webhooks/pipelines/agent-token-events #### Agent token events ##### Events | Event | Description | `cluster_token.registration_blocked` | An attempted agent registration has been blocked because the request IP address is not included in the agent token's [allowed IP addresses](/docs/pipelines/security/clusters/manage#restrict-an-agent-tokens-access-by-ip-address) ##### Request body data | Property | Type | Description | `blocked_ip` | String | The IP address of the blocked registration request | `cluster_token` | [Agent token](/docs/apis/rest-api/clusters/agent-tokens) | The agent token used in the registration attempt | `sender` | Object | The user who created the webhook Example request body: ```json { "event": "cluster_token.registration_blocked", "blocked_ip": "202.188.43.20", "cluster_token": { "...": "..." }, "sender": { "id": "8a7693f8-dbae-4783-9137-84090fce9045", "name": "Some Person" } } ``` --- ### Integrations URL: https://buildkite.com/docs/apis/webhooks/pipelines/integrations #### Webhook integrations There are a number of third-party services you can use with Buildkite webhooks. Some services (such as RequestBin and Zapier) are designed specifically with webhooks in mind, and others (such as AWS Lambda and Google Cloud Functions) are general-purpose programming platforms which can be triggered with webhook HTTP requests. ##### AWS Lambda [AWS Lambda](https://aws.amazon.com/lambda/) is a service for running functions, and when combined with [AWS API Gateway](https://aws.amazon.com/api-gateway/), can be used to process your Buildkite webhooks. There are many ways to integrate webhooks with AWS Lambda.
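As an illustration, a minimal Lambda function (Ruby runtime) behind API Gateway might reject deliveries whose `X-Buildkite-Token` header doesn't match a configured secret before processing the event. This is a sketch: the environment variable name and the processing step are assumptions, and the response shape follows the API Gateway proxy integration format:

```ruby
require "json"

# Sketch of an AWS Lambda handler (Ruby runtime) fronted by API Gateway.
# It checks the X-Buildkite-Token header against a secret configured in
# an environment variable (name is an assumption) before accepting the
# webhook. The processing step is illustrative.
def handler(event:, context:)
  token = event.dig("headers", "X-Buildkite-Token")
  unless token && token == ENV["BUILDKITE_WEBHOOK_TOKEN"]
    return { statusCode: 401, body: JSON.dump({ message: "bad token" }) }
  end

  payload = JSON.parse(event["body"])
  # ... act on payload["event"] here, e.g. publish it to an SNS topic ...
  { statusCode: 200, body: JSON.dump({ ok: true, event: payload["event"] }) }
end
```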
The following repositories demonstrate two ways to process Buildkite webhooks using AWS Lambda: - Rivet's [buildkite-webhook-aws-terraform](https://github.com/rivethealth/buildkite-webhook-aws-terraform) uses [AWS Lambda](https://aws.amazon.com/lambda/) and [AWS API Gateway](https://aws.amazon.com/api-gateway/) to publish Buildkite webhook events to an [AWS SNS](https://aws.amazon.com/sns/) topic. - Rivet's [buildkite-bitbucket-aws-terraform](https://github.com/rivethealth/buildkite-bitbucket-aws-terraform) demonstrates using [AWS Lambda](https://aws.amazon.com/lambda/), [AWS API Gateway](https://aws.amazon.com/api-gateway/) and [AWS SNS](https://aws.amazon.com/sns/) to send build statuses to an Atlassian Bitbucket Server. ##### Google Cloud Run functions [Google Cloud Run functions](https://cloud.google.com/functions) is a Google Cloud service for hosted code execution, and also supports exposing functions using URLs. See Google Cloud Run's [When should I deploy a function to Cloud Run?](https://docs.cloud.google.com/run/docs/functions-with-run) documentation for more information about its use cases. To get started, see the quickstart guides on deploying a web app to Cloud Run (for example, [Node.js](https://docs.cloud.google.com/run/docs/quickstarts/build-and-deploy/deploy-nodejs-service)), and on deploying a Cloud Run function using the [Google Cloud console](https://docs.cloud.google.com/run/docs/quickstarts/functions/deploy-functions-console) or the [gcloud CLI](https://docs.cloud.google.com/run/docs/quickstarts/functions/deploy-functions-gcloud). ##### Zapier [Zapier](https://zapier.com/) is a system for connecting APIs together, and has built-in support for hundreds of services. For example, you could use Zapier to send an email when a build has finished, save a build artifact into a Dropbox folder, or post to a Slack channel, substituting values such as the build URL and number into the message body.
To use Buildkite webhooks with Zapier, create a new Zap and select **Webhook**. --- ### Overview URL: https://buildkite.com/docs/apis/webhooks/test-engine #### Test Engine webhooks Test Engine webhooks are configured as part of a test suite's [workflow](/docs/test-engine/workflows). These types of webhooks are triggered when the workflow's [monitor](/docs/test-engine/workflows/monitors) triggers an [alarm or recover action](/docs/test-engine/workflows/actions) event that [sends a webhook notification](/docs/test-engine/workflows/actions#send-webhook-notification). Webhooks are delivered to an HTTP POST endpoint of your choosing with a `Content-Type: application/json` header and a JSON encoded request body. To learn more about Test Engine webhooks and to see examples of their different payload types, see the [Send webhook notification](/docs/test-engine/workflows/actions#send-webhook-notification) section of [Workflows > Actions](/docs/test-engine/workflows/actions) in the [Test Engine documentation](/docs/test-engine). --- ### Overview URL: https://buildkite.com/docs/apis/webhooks/package-registries #### Package Registries webhooks You can configure webhooks to be triggered in Package Registries when a package, image, chart, model, module, or file is created. Webhooks are delivered to an HTTP POST endpoint of your choosing with a `Content-Type: application/json` header and a JSON encoded request body. ##### Add a webhook To add a webhook for your package creation event: 1. Select **Package Registries** in the global navigation > your registry to configure webhooks on. 1. Select **Settings** tab > **Notification Services** page. 1. Select the **Add** button on **Webhook**. 1. Specify your webhook's **Description** and **Webhook URL**. 1. If you are using self-signed certificates for your webhooks, clear the **Verify TLS Certificates** checkbox. 1. 
To allow the authenticity of your Package Registries webhook events to be verified, configure your webhook's **Token** value to be sent either as a plain text [`X-Buildkite-Token`](#webhook-token) value or an HMAC-based [`X-Buildkite-Signature`](#webhook-signature) in the request [header](#http-headers), bearing in mind that the latter provides the more secure verification method. 1. In the **Events** section, ensure the **package.created** event has been selected. 1. Select the **Save Webhook Settings** button to save these changes and add the webhook. ###### Package created The webhook is triggered when a package is created and published through an ecosystem-native CLI or using the [REST API](/docs/apis/rest-api/package-registries/packages). Example payload: ```json { "event": "package.created", "package": { "id": "0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "url": "https://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry/packages/0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "web_url": "https://buildkite.com/organizations/my_great_org/packages/registries/my-registry/packages/0191e23a-4bc8-7683-bfa4-5f73bc9b7c44", "name": "banana", "organization": { "id": "0190e784-eeb7-4ce4-9d2d-87f7aba85433", "slug": "my_great_org", "url": "https://api.buildkite.com/v2/organizations/my_great_org", "web_url": "https://buildkite.com/my_great_org" }, "registry": { "id": "0191e238-e0a3-7b0b-bb34-beea0035a39d", "graphql_id": "UmVnaXN0cnktLS0wMTkxZTIzOC1lMGEzLTdiMGItYmIzNC1iZWVhMDAzNWEzOWQ=", "slug": "my-registry", "url": "https://api.buildkite.com/v2/packages/organizations/my_great_org/registries/my-registry", "web_url": "https://buildkite.com/organizations/my_great_org/packages/registries/my-registry" } }, "sender": { "id": "01989b9e-f7e2-4577-92e7-dcdf141598aa", "name": "Developer" } } ``` ##### HTTP headers The following HTTP headers are present in every webhook request, which allow you to identify the event that took place, and to verify the authenticity of
the request: | Header | Description | `X-Buildkite-Event` | The type of event. _Example:_ `package.created` One of either the [token](#webhook-token) or [signature](#webhook-signature) headers will be present in every webhook request. The token value and header setting can be found under **Token** in your **Webhook Notification** service. Your selection in the **Webhook Notification** service will determine which is sent: | Header | Description | `X-Buildkite-Token` | The webhook's [token](#webhook-token). _Example:_ `309c9c842g8565adecpd7469x6005989` | `X-Buildkite-Signature` | The [signature](#webhook-signature) created from your webhook payload, webhook token, and the SHA-256 hash function. _Example:_ `timestamp=1619071700,signature=30222eb518dc3fb61ec9e64dd78d163f62cb134a6ldb768f1d40e0edbn6e43f0` ##### Webhook token By default, Buildkite will send a token with each webhook in the `X-Buildkite-Token` header. The token value and header setting can be found under **Token** in your **Webhook Notification** service. The token is passed in clear text. ##### Webhook signature Buildkite can optionally send an HMAC signature in place of a webhook token. The `X-Buildkite-Signature` header contains a timestamp and an HMAC signature. The timestamp is prefixed by `timestamp=` and the signature is prefixed by `signature=`. Buildkite generates the signature using HMAC-SHA256: a hash-based message authentication code ([HMAC](https://en.wikipedia.org/wiki/HMAC)) used with the [SHA-256](https://en.wikipedia.org/wiki/SHA-2) hash function and a secret key. The webhook token value is used as the secret key. The timestamp is an integer representation of a UTC timestamp. The raw request body is the signed message. > 📘 What the timestamp represents > The timestamp in the `X-Buildkite-Signature` header indicates the time when the webhook HTTP request was dispatched, not the time when the underlying event occurred. Its purpose is replay-attack prevention. The timestamp is included in the HMAC so stale webhooks can be rejected.
> For accurate event timing, use the timestamps in the webhook payload instead, such as `build.created_at`, `build.started_at`, or `build.finished_at` for builds, and `job.started_at` or `job.finished_at` for jobs. To measure end-to-end delivery latency, compare the relevant payload timestamp (for example, `build.finished_at`) against your own receipt time. The token value and header setting can be found under **Token** in your **Webhook Notification** service. ###### Verifying HMAC signatures When using HMAC signatures, you'll want to verify that the signature is legitimate. Using the token as the secret along with the timestamp from the webhook, compute the expected signature based on the raw request body. There should be a library available in the programming language you are using that can perform this operation. Compare the computed signature to the signature received in the webhook. If they do not match, your payload has been altered. The example below, in Ruby, verifies the signature and timestamp using the OpenSSL gem's HMAC (and the Rack gem's constant-time comparison): ```ruby require 'openssl' require 'rack' class BuildkiteWebhook def self.valid?(webhook_request_body, header, secret) timestamp, signature = get_timestamp_and_signatures(header) expected = OpenSSL::HMAC.hexdigest("sha256", secret, "#{timestamp}.#{webhook_request_body}") Rack::Utils.secure_compare(expected, signature) end def self.get_timestamp_and_signatures(header) parts = header.split(",").map { |kv| kv.split("=", 2).map(&:strip) }.to_h [parts["timestamp"], parts["signature"]] end end BuildkiteWebhook.valid?( request.body.read, request.headers["X-Buildkite-Signature"], ENV["BUILDKITE_WEBHOOK_SECRET"] ) ``` ###### Defending against replay attacks A [replay attack](https://en.wikipedia.org/wiki/Replay_attack) is when an attacker intercepts a valid payload and its signature, then re-transmits them. One way to help mitigate such attacks is to send a timestamp with your payload and only accept it within a short window (for example, 5 minutes).
Buildkite sends a timestamp in the `X-Buildkite-Signature` header. The timestamp is part of the signed payload so that it is verified by the signature. An attacker will not be able to change the timestamp without invalidating the signature. To help protect against a replay attack, upon receipt of a webhook: 1. Verify the signature. 1. Check the timestamp against the current time. If the webhook's timestamp is within your chosen window of the current time, it can reasonably be assumed to be the original webhook. ##### Edit, disable, re-enable or delete a webhook To do any of these actions on a webhook: 1. Select **Package Registries** in the global navigation > your registry with your configured webhooks. 1. Select **Settings** tab > **Notification Services** to open its page. 1. Select the webhook to open its page, and then: * To edit the webhook, alter the **Description**, **Webhook URL**, **Verify TLS Certificates** and **Token** fields as required (see [Add a webhook](#add-a-webhook) for details), then select the **Save Webhook Settings** button. * To disable the webhook, select its **Disable** button and confirm the action. Disabled webhooks have a note at the top to indicate this state. - To re-enable the disabled webhook, select its **Enable** button. * To delete the webhook, select its **Delete** button and confirm the action. The webhook is removed from the **Notification Services** page. ##### Request logs The last 20 webhook requests and responses are saved, so you can debug and inspect your webhook. Each webhook's request logs are available at the bottom of its settings page. --- ### Pipeline triggers URL: https://buildkite.com/docs/apis/webhooks/incoming/pipeline-triggers #### Pipeline triggers A _pipeline trigger_ is a type of incoming webhook that creates new builds of a Buildkite pipeline, based on events from external systems.
To trigger pipelines from source control events, see [Source control](/docs/pipelines/source-control) for a list of source control systems that Buildkite supports and integrates with. Pipeline triggers are HTTP endpoints that create builds when they receive POST requests. Each pipeline trigger has a unique URL that accepts JSON payloads, making them ideal for integrating Buildkite with the other tools you use. A pipeline trigger is scoped to a specific Buildkite pipeline, and can be used to trigger builds from monitoring alerts, deployment systems, or any service that can send outbound webhooks. > 📘 Public preview feature > The pipeline triggers feature is currently in public preview. To provide feedback, please contact Buildkite's Support team at [support@buildkite.com](mailto:support@buildkite.com). ##### Supported incoming webhooks Buildkite's pipeline triggers feature supports the following types of incoming webhooks: - **Webhook**: A generic webhook from any service that can send HTTP POST requests. - **GitHub**: A [GitHub webhook](https://docs.github.com/en/webhooks) trigger with [signature verification support](https://docs.github.com/en/webhooks/using-webhooks/validating-webhook-deliveries). This is supplementary to Buildkite's [GitHub repository provider](/docs/pipelines/source-control/github) integration. - **Linear**: A [Linear webhook](https://linear.app/developers/webhooks) trigger with [signature verification support](https://linear.app/developers/webhooks#securing-webhooks). ##### Create a new pipeline trigger To create a new pipeline trigger using the Buildkite interface: 1. From your [Buildkite dashboard](https://buildkite.com/~/), ensure that **Pipelines** is selected in the global navigation, and then select your pipeline. 1. Select your pipeline's **Settings** button > **Triggers**. 1. On the **Triggers** page, select the **New Trigger** button to create a new pipeline trigger. 1. 
Select the **Add** button next to one of the [supported types of incoming webhooks](#supported-incoming-webhooks). 1. Configure your pipeline trigger by completing its fields, noting that the **Description**, **Branch**, and **Commit** fields are required to generate a unique endpoint. | Description | The description for the pipeline trigger, which is its name in the list of existing triggers on the **Triggers** page. | Enabled | If this checkbox is selected, then the pipeline trigger will be active and accept incoming webhook events as soon as this pipeline trigger is created. Clear this checkbox if you don't want the pipeline trigger to be active immediately after its creation. | Build message | The message for your triggered build, which appears on the [pipeline page](/docs/pipelines/dashboard-walkthrough#pipeline-page) as part of its build history. If none is specified, this value defaults to **Triggered build**. | Commit | The commit ref the triggered build will run against. If none is specified, this value defaults to `HEAD`. | Branch | The branch the triggered build will run against. If none is specified, this value defaults to `main`. | Environment variables | Optional environment variables to set for the build. Each new environment variable should be entered on a new line. _Example:_ `FOO=bar BAZ=quux` 1. If you chose either **GitHub** or **Linear** as your incoming webhook for this pipeline trigger, you can optionally choose to validate the authenticity of these webhook payloads. Learn more about this feature in [Webhook verification](#webhook-verification). To do this: 1. Expand the **Security** section and select **Validate/Verify webhook deliveries**. 1. In the **Secret/Signing secret** field, enter the webhook secret/token that you configured in your GitHub or Linear webhook settings. 1. After completing these fields, select **Create Trigger** to create the pipeline trigger. 1. 
On the next page, follow the instructions in the **Webhook URL** (or equivalent) field to copy and save your webhook trigger's URL to a secure location, as you won't be able to see its full value again through the Buildkite interface. **Important:** If you created a pipeline trigger for a **GitHub** or **Linear** incoming webhook, then before leaving this page, follow any additional linked instructions to register this URL for your pipeline trigger (webhook) as part of your incoming GitHub or Linear webhook. That's it! You've completed creating your pipeline trigger, and the new pipeline trigger appears in the list of existing triggers on the **Triggers** page. See the following section on [Endpoint](#create-a-new-pipeline-trigger-endpoint) to learn more about the pipeline trigger and how it works, and you're now ready to [invoke your trigger](#invoke-a-pipeline-trigger). ###### Endpoint Each pipeline trigger has a unique endpoint with the following URL structure: ``` https://webhook.buildkite.com/deliver/bktr_************ ``` All requests sent to this endpoint must be HTTP `POST` requests with `application/json` encoded bodies. ###### Response A successful trigger request returns a `201 Created` response with an identifier for the webhook delivery: ```json { "id": "f62a1b4d-10f9-4790-bc1c-e2c3a0c80983" } ``` ###### Error responses | `400 Bad Request` | `{ "message": "Invalid pipeline trigger token" }` | `403 Forbidden` | `{ "message": "Pipeline trigger is disabled" }` | `404 Not Found` | `{ "message": "Pipeline trigger not found" }` ##### Webhook verification When [creating](#create-a-new-pipeline-trigger) or editing your Buildkite pipeline trigger based on either the **GitHub** or **Linear** [incoming webhook types](#supported-incoming-webhooks), you can optionally validate the authenticity of these webhook payloads. This mitigates the risk of unauthorized parties tampering with webhook payloads from these services.
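For reference, GitHub's scheme signs the raw request body with HMAC-SHA256 and delivers the hex digest as `sha256=<hex>` in the `X-Hub-Signature-256` header. The check that Buildkite performs on your behalf when verification is enabled looks roughly like this sketch (the helper name is illustrative):

```ruby
require "openssl"

# Sketch of GitHub's X-Hub-Signature-256 verification: HMAC-SHA256 over
# the raw request body, hex-encoded and prefixed with "sha256=". When
# verification is enabled, Buildkite performs this check for you.
def github_signature_valid?(body, signature_header, secret)
  expected = "sha256=" + OpenSSL::HMAC.hexdigest("sha256", secret, body)
  OpenSSL.secure_compare(expected, signature_header.to_s)
end
```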
If you want to validate the authenticity of these incoming webhook types, ensure you have configured their respective secret/token, which you'll need for your Buildkite pipeline trigger configuration. Learn more about how to configure these secrets/tokens in the following relevant documentation: - [GitHub webhook signature verification](https://docs.github.com/en/webhooks/using-webhooks/validating-webhook-deliveries) - [Linear webhook security](https://linear.app/developers/webhooks#securing-webhooks) Buildkite pipeline triggers with verification enabled will ensure that all their incoming webhooks match the signature types in their request headers, before these webhooks and their payloads are accepted: - **GitHub**: HMAC-SHA256 signatures in the `X-Hub-Signature-256` header. - **Linear**: HMAC-SHA256 signatures in the `Linear-Signature` header. Be aware that this verification feature is not available for generic incoming webhooks (that is, the **Webhook** pipeline trigger option). ##### Invoke a pipeline trigger To create a build using a webhook pipeline trigger, send an HTTP POST request to the trigger URL. Each trigger accepts a JSON payload, which is accessible to all build steps (see [Accessing pipeline trigger data](#invoke-a-pipeline-trigger-accessing-pipeline-trigger-data) for details). Here's an example using `curl`: ```bash curl -H "Content-Type: application/json" \ -X POST "https://webhook.buildkite.com/deliver/bktr_************" \ -d '{ "id": "P2LA89X", "message": "A fix for this incident is being developed", "trimmed": false, "type": "incident_status_update", "incident": { "html_url": "https://acme.pagerduty.com/incidents/PGR0VU2", "id": "PGR0VU2", "self": "https://api.pagerduty.com/incidents/PGR0VU2", "summary": "A little bump in the road", "type": "incident_reference" } }' ``` You've just created your first build using a pipeline trigger.
> 📘 > Be aware that the presence of a `"message": "Any value"` field in the JSON payload does not override the value of the **Build message** set when [creating the pipeline trigger](#create-a-new-pipeline-trigger). All such values in the payload form part of the [pipeline trigger's data](#invoke-a-pipeline-trigger-accessing-pipeline-trigger-data). ###### Accessing pipeline trigger data JSON payloads sent to a pipeline trigger URL are accessible in all steps of the triggered build. You can retrieve the webhook payload using the Buildkite agent CLI command [`buildkite-agent meta-data`](/docs/pipelines/configure/build-meta-data). ###### Example The following sample JSON payload is obtained from a GitHub webhook event for [closing a GitHub pull request](https://docs.github.com/en/webhooks/webhook-events-and-payloads?actionType=closed#pull_request): ```json { "action": "closed", "number": 123, "organization": "Buildkite", "pull_request": { "url": "https://www.github.com/buildkite/dummy-repo", "id": 456, "number": 123, "state": "closed", "title": "Integrate into Buildkite pipeline triggers", "closed_at": "2025-10-14T02:14:39Z", "merged": false, "merged_at": null } } ``` Accessing this JSON payload posted to your pipeline trigger endpoint can be done using the [`buildkite:webhook` meta-data key](/docs/pipelines/configure/build-meta-data#special-meta-data-buildkite-webhook), which is a [special Buildkite meta-data key](/docs/pipelines/configure/build-meta-data#special-meta-data): ```yaml steps: - command: | WEBHOOK="$(buildkite-agent meta-data get buildkite:webhook)" ACTION="$(jq -r '.action' <<< "$WEBHOOK")" MERGED="$(jq -r '.pull_request.merged' <<< "$WEBHOOK")" if [[ "$ACTION" == "closed" && "$MERGED" == "false" ]]; then echo "PR was manually closed" fi ``` The `buildkite:webhook` meta-data itself is only available to builds triggered by an incoming webhook, and only for as long as the webhook data remains cached, which is typically 7 days.
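The same check can also be done in a script rather than inline shell. A Ruby sketch (the function name is illustrative; in a build step, the payload would come from `buildkite-agent meta-data get buildkite:webhook`):

```ruby
require "json"

# Sketch: decide whether a pull_request webhook payload represents a PR
# that was closed without being merged. The function name is
# illustrative. In a build step, the payload would come from:
#   buildkite-agent meta-data get buildkite:webhook
def closed_without_merge?(payload_json)
  payload = JSON.parse(payload_json)
  payload["action"] == "closed" && payload.dig("pull_request", "merged") == false
end
```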
##### Limitations Be aware that pipeline triggers have the following limitations: - Custom webhook triggers do not support webhook signature verification (for example, HMAC signatures). - A pipeline trigger's URL cannot be rotated. If the trigger's `bktr_` value has been compromised, you'll need to delete the trigger and [create](#create-a-new-pipeline-trigger) a new one with the same attributes. - The **Commit** and **Branch** build attributes take only the values defined in the pipeline trigger itself, when it was either [created](#create-a-new-pipeline-trigger) or last edited; these values cannot be mapped from fields of the incoming webhook's JSON payload. - A successful POST request to a pipeline trigger will always trigger a build. Pipeline triggers cannot be selectively triggered based on any content from the incoming webhook's JSON payload. - Pipeline triggers can only be managed through the Buildkite interface. There is no support for managing pipeline triggers (that is, creating, editing or deleting pipeline triggers) through the Buildkite API. - There is no Buildkite interface or API support for listing builds created from a pipeline trigger. - Unlike JSON payloads, HTTP headers are not accessible to pipelines in requests to pipeline triggers. - A pipeline trigger's webhook cannot be restricted by IP address. - A pipeline trigger's JSON payload is limited to a maximum size of 5MB. - Trigger URL endpoints have a request limit of 300 requests per hour. This limit is shared across all pipeline triggers for an organization. - Webhook metadata payload retrieval is rate limited to 10 requests per minute per build. - Each pipeline is limited to 10 configurable triggers.
##### Next steps Learn more about how pipeline triggers integrate with other aspects of Buildkite Pipelines from the following topics: - [Special meta-data](/docs/pipelines/configure/build-meta-data#special-meta-data)—covers details on how to retrieve meta-data from a Buildkite pipeline. - [`buildkite-agent meta-data` CLI command](/docs/agent/cli/reference/meta-data)—covers details on this meta-data retrieval command of the [Buildkite agent](/docs/agent) and all of its options. - [Incoming webhook security overview](/docs/pipelines/security/incoming-webhooks#what-kind-of-information-on-incoming-webhooks-is-logged-by-buildkite)—provides information on the type of data logged by incoming webhooks. --- ### Overview URL: https://buildkite.com/docs/apis/agent-api #### Agent REST API overview The agent REST API is used to retrieve agent metrics, register agents, de-register them, start jobs on agents, and finish jobs on them. The agent REST API's _publicly_ available endpoints include: - [`/metrics`](/docs/apis/agent-api/metrics): Used to retrieve information about current self-hosted agents associated with a Buildkite cluster. The [buildkite-agent-metrics](/docs/agent/self-hosted/monitoring-and-observability#buildkite-agent-metrics-cli) CLI tool uses the data returned by the metrics endpoint for agent autoscaling. - [`/stacks`](/docs/apis/agent-api/stacks): Used to implement a _stack_ on a self-hosted queue. A stack is a long-running controller process that watches the queue for jobs, and runs Buildkite agents on demand to run these jobs. All other endpoints in the agent API are intended only for use by the Buildkite agent, therefore stability and backwards compatibility are not guaranteed, and changes won't be announced. The agent also includes an internal API, called the [internal job API](/docs/apis/agent-api/internal-job), which is used to query and mutate the state of a job running on the agent, using environment variables.
The current version of the agent API is v3. ##### Schema All API access is over HTTPS, and accessed from the `agent.buildkite.com` and `agent-edge.buildkite.com` domains. Most API methods consist of a basic JSON request and response. Some parts of the API available from `agent-edge.buildkite.com` use [gRPC](https://grpc.io). ```bash curl https://agent.buildkite.com ``` ```json {"message":"👋","timestamp":1719276157} ``` ##### Authentication Unlike the [Buildkite REST API](/docs/apis/rest-api), which uses an [API access token](/docs/apis/rest-api#authentication), the agent REST API's _public_ endpoints use an [agent token](/docs/agent/self-hosted/tokens) for authentication. To authenticate using an agent token, set the `Authorization` HTTP header to the word `Token`, followed by a space, followed by the agent token. For example: ```bash curl -H "Authorization: Token $TOKEN" https://agent.buildkite.com/v3/metrics ``` --- ### Metrics URL: https://buildkite.com/docs/apis/agent-api/metrics #### Metrics API The metrics API endpoint provides information on idle and busy agents, jobs, and queues for the [Agent token](/docs/agent/self-hosted/tokens) supplied in the request `Authorization` header. ##### Get metrics Get agent metrics ```bash curl -H "Authorization: Token $BUILDKITE_AGENT_TOKEN" \ -X GET "https://agent.buildkite.com/v3/metrics" ``` ```json { "agents": { "idle": 1, "busy": 0, "total": 1, "queues": { "default": { "idle": 1, "busy": 0, "total": 1 } } }, "jobs": { "scheduled": 5, "running": 0, "waiting": 0, "total": 5, "queues": { "default": { "scheduled": 5, "running": 0, "waiting": 0, "total": 5 } } }, "organization": { "slug": "buildkite" } } ``` Success response: `200 OK` --- ### Stacks URL: https://buildkite.com/docs/apis/agent-api/stacks #### Stacks API The stacks API provides endpoints for implementing a stack reliably. 
A stack is defined as a software process that simultaneously has these two abilities: - The ability to pull/receive new jobs from the Buildkite API. - The ability to turn those job definitions into running agents. A stack can also be broadly understood as an orchestrator or a scheduler of Buildkite jobs. The stacks API powers Buildkite's [Agent Stack for Kubernetes](/docs/agent/self-hosted/agent-stack-k8s), and is designed to give advanced enterprise users custom control over the scheduling of jobs at larger scales. You can use the stacks API to build custom stack implementations, in any language, that dispatch jobs to your own compute infrastructure, such as Kubernetes, cloud VMs, serverless functions, or container services. ##### Authentication All stacks API endpoints require an [agent token](/docs/agent/self-hosted/tokens) passed in the `Authorization` header: ``` Authorization: Token <agent-token> ``` Agent tokens ([prefixed with `bkct_`](/docs/platform/security/tokens#supported-buildkite-tokens-agent-tokens)) are located on your cluster's **Agent Tokens** page, and these tokens grant access to all [self-hosted queues](/docs/agent/queues) within the cluster.
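To make the header and URL conventions concrete, a minimal sketch in shell: the token value and stack key are placeholders, and every per-stack endpoint hangs off the same base URL.

```shell
# Placeholder values for illustration only.
BUILDKITE_CLUSTER_TOKEN="bkct_example_token"
STACK_KEY="my-kubernetes-stack"

# Every stacks API request carries the cluster's agent token.
auth_header="Authorization: Token $BUILDKITE_CLUSTER_TOKEN"
base_url="https://agent.buildkite.com/v3/stacks/$STACK_KEY"

# For example, to poll for scheduled jobs:
#   curl -H "$auth_header" "$base_url/scheduled-jobs?queue_key=default"
echo "$base_url"
```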
##### Endpoint summary | Method | Path | Description | | --- | --- | --- | | POST | `/v3/stacks/register` | [Register a stack](#register-a-stack) | | POST | `/v3/stacks/:key/deregister` | [De-register a stack](#de-register-a-stack) | | GET | `/v3/stacks/:key/scheduled-jobs` | [List scheduled jobs](#list-scheduled-jobs-metadata-only) | | PUT | `/v3/stacks/:key/scheduled-jobs/batch-reserve` | [Reserve jobs](#reserve-jobs) | | GET | `/v3/stacks/:key/jobs/:id` | [Get a job](#get-a-job-env-plus-command) | | POST | `/v3/stacks/:key/jobs/get-states` | [Get job states](#get-job-states) | | POST | `/v3/stacks/:key/jobs/:id/finish` | [Finish a job](#finish-a-job) | | POST | `/v3/stacks/:key/notifications` | [Create stack notifications](#create-stack-notifications) | ##### Register a stack Register a new stack or update an existing one. You must use this API to register a stack `key` before using any of the following APIs. You can register a stack key ad hoc once, or make registration part of your stack implementation. This endpoint is idempotent. The register payload includes a mandatory `queue_key` field, which tells Buildkite which self-hosted queue the stack is intended to serve. However, this binding isn't enforced, so a single stack implementation could serve multiple self-hosted queues. The number of active stacks per organization is limited, and each stack is subject to independent rate limits. Request payload: | Field | Type | Required | Description | | ----------- | ---------------- | -------- | --------------------------------------------------- | | `key` | string | Yes | Unique identifier for the stack in the org. Alphanumeric characters, underscores, and dashes only. Maximum 80 bytes. | | `type` | string | Yes | Type of stack: `kubernetes`, `elastic`, or `custom`. Third-party stacks should use `custom`. | | `queue_key` | string | Yes | Self-hosted queue key the stack plans to serve.
Use `__default__` for the cluster's default queue. | | `metadata` | key-value object | Yes | Additional metadata for the stack. Must be a flat object with string keys (maximum 64 characters) and string values (maximum 256 characters). Maximum 100 keys. | Example: ```bash curl -H "Authorization: Token $BUILDKITE_CLUSTER_TOKEN" \ -H "Content-Type: application/json" \ -X POST "https://agent.buildkite.com/v3/stacks/register" \ -d '{ "key": "my-kubernetes-stack", "type": "kubernetes", "queue_key": "default", "metadata": { "version": "1.0.0", "region": "us-east-1" } }' ``` ```json { "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "organization_uuid": "12345678-abcd-ef01-2345-6789abcdef01", "key": "my-kubernetes-stack", "type": "kubernetes", "cluster_queue_key": "default", "metadata": { "version": "1.0.0", "region": "us-east-1" }, "last_connected_on": "2025-10-01T12:00:00.000Z", "state": "connected" } ``` Success response: `201 Created` (new stack) or `200 OK` (existing stack updated) ##### De-register a stack De-register a stack from the cluster. Ideally, when a stack stops, it should use this API to de-register its `key` from the Buildkite backend. This ensures the organization doesn't exceed its stack count quota. ```bash curl -H "Authorization: Token $BUILDKITE_CLUSTER_TOKEN" \ -X POST "https://agent.buildkite.com/v3/stacks/my-kubernetes-stack/deregister" ``` Success response: `204 No Content` ##### List scheduled jobs (Metadata only) This is the most important of the stacks API endpoints. It fetches all jobs that have been scheduled to run by Buildkite's internal state machine. When a self-hosted queue is paused, `cluster_queue.dispatch_paused` will return `true`, and a stack implementation **must** respect this flag (that is, avoid starting new jobs whenever the queue is paused).
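A stack's polling loop should check this flag on every response before dispatching anything. A minimal sketch in shell, using a canned response in place of a real API call (the stack key and job UUID are placeholders):

```shell
# In a real stack, $resp would come from:
#   curl -H "Authorization: Token $BUILDKITE_CLUSTER_TOKEN" \
#     "https://agent.buildkite.com/v3/stacks/my-kubernetes-stack/scheduled-jobs?queue_key=default"
resp='{"jobs":[{"id":"01234567-89ab-cdef-0123-456789abcdef"}],"cluster_queue":{"id":"queue-id","dispatch_paused":false}}'

# Honor dispatch_paused before starting any new jobs.
case "$resp" in
  *'"dispatch_paused":true'*)
    dispatch=skip   # queue paused: do not start new jobs
    ;;
  *)
    dispatch=go     # safe to reserve and run the returned jobs
    ;;
esac
echo "dispatch=$dispatch"
```

A production stack would parse the response with a proper JSON parser rather than pattern matching, but the control flow is the same.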
A stack often makes scheduling decisions based on returned metadata and turns this job metadata into running agents using [--acquire-job](https://buildkite.com/resources/changelog/129-one-shot-agents-with-the-acquire-job-flag/). Until these jobs transition into another state, the API will keep returning them. To avoid starting duplicate jobs, we offer some utility APIs below. > 📘 Queue connection status > Polling this endpoint keeps the associated queue's status set to **Connected** in the Buildkite Pipelines interface. If a stack stops polling for more than approximately 30 seconds, the queue's status changes to **Disconnected**. Learn more in [Queue connection status](/docs/agent/queues/managing#queue-connection-status). Query parameters: | Parameter | Type | Required | Description | | ----------- | ------- | -------- | -------------------------------------------------- | | `queue_key` | string | Yes | Filter jobs by queue key | | `limit` | integer | No | Maximum number of jobs to return, max 1000 | | `after` | string | No | Cursor for pagination (from previous `end_cursor`) | The API returns jobs ordered by `scheduled_at` (oldest first). Use the `page_info.end_cursor` value from the response in the `after` parameter to fetch the next page. When `page_info.has_next_page` is `false`, you've reached the end of results. > 📘 A note on paginating scheduled jobs > Job creation is often asynchronous and eventually consistent, and paginating across scheduled jobs _does not_ represent a snapshot of scheduled jobs at the time the pagination started. In cases of high job throughput, new jobs may be added behind the current cursor, and reaching the end of the current cursor (where `has_next_page: false`) does not imply that you've seen every scheduled job. > To counteract this, we generally recommend only querying for the first (or first few) pages, and then using the `reserve-jobs` endpoint to reserve them.
On further queries, jobs that have been reserved will not show up in the results of the `scheduled-jobs` endpoint. Example: ```bash curl -H "Authorization: Token $BUILDKITE_CLUSTER_TOKEN" \ -X GET "https://agent.buildkite.com/v3/stacks/my-kubernetes-stack/scheduled-jobs?queue_key=default&limit=10" ``` ```json { "jobs": [ { "id": "01234567-89ab-cdef-0123-456789abcdef", "priority": 1, "agent_query_rules": ["test=a"], "scheduled_at": "2023-10-01T12:00:00.000Z", "pipeline": { "slug": "my-pipeline", "uuid": "pipeline-uuid" }, "build": { "number": 123, "branch": "main", "uuid": "build-uuid" }, "step": { "key": "test" } } ], "page_info": { "has_next_page": false, "end_cursor": "base64-encoded-string-or-null" }, "cluster_queue": { "id": "queue-id", "dispatch_paused": false } } ``` > 📘 > The `pipeline`, `build`, and `step` values are nested objects of key/value pairs. All nested objects are always present in the response, even when their values are `null`. The `end_cursor` field in `page_info` can be a base64-encoded string or `null`. Success response: `200 OK` ##### Get a job (Env + command) In some cases, the job metadata returned from the API above isn't sufficient to make a full scheduling decision. In such cases, you can use this API to get the full payload data of a job individually. Specifically, the job payload data contains `env` and `command`. Due to the dynamic nature of Buildkite pipelines, these two fields can often grow to above 100KB. It's useful when you want to make scheduling decisions based on in-depth analysis of a job. 
```bash JOB_UUID="01234567-89ab-cdef-0123-456789abcdef" curl -H "Authorization: Token $BUILDKITE_CLUSTER_TOKEN" \ -X GET "https://agent.buildkite.com/v3/stacks/my-kubernetes-stack/jobs/$JOB_UUID" ``` ```json { "id": "01234567-89ab-cdef-0123-456789abcdef", "env": { "BUILDKITE_JOB_ID": "01234567-89ab-cdef-0123-456789abcdef", "BUILDKITE_BUILD_NUMBER": "123" }, "command": "echo Hello 👋" } ``` Success response: `200 OK` ##### Reserve jobs In order to prevent pulling duplicate jobs, a stack can _reserve_ jobs that it has decided to execute. If this API is called, a stack _should only_ execute jobs that are successfully reserved, as shown in the `reserved` fields in the response. Until the reservation expires, the reserved jobs will not show up in subsequent list scheduled jobs API calls. If the reservation expires, the reserved jobs will return to the `scheduled` state. You can reserve multiple jobs for execution. This API can be repeatedly called to extend the expiration of reservation states. Alternatively, a stack implementation can maintain its own persistent layer to keep track of job lifecycle, in which case, calling this API will be unnecessary. Request payload: | Field | Type | Required | Description | | ---------------------------- | ------------- | -------- | ------------------------------- | | `job_uuids` | array[string] | Yes | Array of job UUIDs to reserve (maximum 1,000) | | `reservation_expiry_seconds` | integer | No | Reservation duration in seconds. Defaults to 900 (15 minutes). Maximum 3,600 (1 hour). 
| Example: ```bash curl -H "Authorization: Token $BUILDKITE_CLUSTER_TOKEN" \ -H "Content-Type: application/json" \ -X PUT "https://agent.buildkite.com/v3/stacks/my-kubernetes-stack/scheduled-jobs/batch-reserve" \ -d '{ "job_uuids": [ "01234567-89ab-cdef-0123-456789abcdef", "fedcba98-7654-3210-fedc-ba9876543210" ], "reservation_expiry_seconds": 1800 }' ``` ```json { "reserved": [ "01234567-89ab-cdef-0123-456789abcdef", "fedcba98-7654-3210-fedc-ba9876543210" ], "not_reserved": [] } ``` Success response: `200 OK` ##### Get job states Retrieve the current state of multiple jobs. This is useful when a stack is provisioning infrastructure for a job and the job is cancelled before the infrastructure is ready. A stack can choose to decommission infrastructure proactively to save cost. This API also helps inform a stack when responsibility for a job can be safely handed over to the running agent. This API uses the `POST` method for batch data loading. Request payload: | Field | Type | Required | Description | | ----------- | ------------- | -------- | ------------------------------------ | | `job_uuids` | array[string] | Yes | Array of job UUIDs to get states for (maximum 1,000) | Example: ```bash curl -H "Authorization: Token $BUILDKITE_CLUSTER_TOKEN" \ -H "Content-Type: application/json" \ -X POST "https://agent.buildkite.com/v3/stacks/my-kubernetes-stack/jobs/get-states" \ -d '{ "job_uuids": [ "01234567-89ab-cdef-0123-456789abcdef", "fedcba98-7654-3210-fedc-ba9876543210" ] }' ``` ```json { "states": { "01234567-89ab-cdef-0123-456789abcdef": "scheduled", "fedcba98-7654-3210-fedc-ba9876543210": "running" } } ``` Success response: `200 OK` ##### Finish a job Mark a job as finished when the stack cannot or will not execute it, or when it has completed successfully without spawning an agent. In some situations, an agent cannot be spawned due to infrastructure or other issues. In this case, a stack can call this API at most once per job to finish the job with details.
This is a critical API for shortening the feedback cycle to end users. For example, if a pod has an image pull issue, the Kubernetes stack uses this API to fail the job with feedback. A job that is finished with this approach will have a special notification on the Buildkite Build page. Request payload: | Field | Type | Required | Description | | ------------- | ------- | -------- | ------------------------------------------------------------------------------------------------------ | | `exit_status` | integer | No | Exit status code for the job. Defaults to -1 if not provided. Use 0 to indicate successful completion. | | `detail` | string | Yes | Description of why the job finished (max 4KB) | Example: ```bash curl -H "Authorization: Token $BUILDKITE_CLUSTER_TOKEN" \ -H "Content-Type: application/json" \ -X POST "https://agent.buildkite.com/v3/stacks/my-kubernetes-stack/jobs/$JOB_UUID/finish" \ -d '{ "exit_status": -1, "detail": "Stack failed to start agent: insufficient resources" }' ``` Success response: `200 OK` ###### Retry attributes If you have [retry attributes](/docs/pipelines/configure/retry) configured on a step, be aware that they apply to a job that finished with an `exit_status` of `-1` (for example, a failure), to a job for which the Buildkite platform generated a `signal_reason` of `stack_error`, or both. If your pipeline has numerous steps with retry attributes, and many of their jobs happen to fail, this could result in all of these jobs undergoing automatic retries. To prevent this, in each of these steps' [automatic retry attributes](/docs/pipelines/configure/retry#retry-attributes-automatic-retry-attributes), set `signal_reason` to `stack_error` with a `limit` of `0`, which prevents the job from being automatically retried when these conditions are met.
For example:

```yaml
steps:
  - label: "Tests"
    command: "tests.sh"
    retry:
      automatic:
        - exit_status: -1
          signal_reason: stack_error
          limit: 0
        - exit_status: "*"
          limit: 2
```

##### Create stack notifications When a stack may take more than a few seconds to provision infrastructure for a job, or is waiting for some external condition to be satisfied, it can post short textual notifications to the Buildkite Build page. This can help with visibility and debugging. A notification `detail` can be a short string. A job cannot have more than 100 stack notifications, so a stack should use this API judiciously. This endpoint supports batch creation of notifications for multiple jobs. You can send up to 1000 notifications in a single request. ###### Request payload | Field | Type | Required | Description | | --------------- | ------------- | -------- | ---------------------------------------------------- | | `notifications` | array[object] | Yes | Array of notification objects (max 1000 per request) | Each notification object: | Field | Type | Required | Description | | ----------- | ------ | -------- | --------------------------------------------- | | `job_uuid` | string | Yes | UUID of the job to attach the notification to | | `detail` | string | Yes | Short notification message (max length 256) | | `timestamp` | string | No | ISO 8601 timestamp (defaults to current time) | ###### Constraints - Maximum 1000 notifications per request - Maximum 100 notifications per job - `detail` must not be empty and cannot exceed 256 characters - `timestamp` cannot be in the future or before job creation time - Notifications cannot be sent for jobs that finished more than 300 seconds ago ```bash curl -H "Authorization: Token $BUILDKITE_CLUSTER_TOKEN" \ -H "Content-Type: application/json" \ -X POST "https://agent.buildkite.com/v3/stacks/my-kubernetes-stack/notifications" \ -d '{ "notifications": [ { "job_uuid": "01234567-89ab-cdef-0123-456789abcdef", "detail":
"Pod is starting up" }, { "job_uuid": "fedcba98-7654-3210-fedc-ba9876543210", "detail": "Waiting for resources", "timestamp": "2023-10-01T12:00:00.000Z" } ] }' ``` Response example with partial success: ```json { "errors": [ { "error": "detail is required", "indexes": [2] }, { "error": "job stack notification count exceeded", "indexes": [5] } ] } ``` Success response: `200 OK` The response includes an `errors` array. Each error object contains: - `error`: Description of the validation failure - `indexes`: Array of notification indexes (0-based) that failed with this error Valid notifications are created even if some fail validation. An empty `errors` array indicates all notifications were created successfully. ##### Rate limiting Each endpoint has an independent rate limit applied per stack (scoped to the combination of organization, cluster, and stack key). Rate limits use a one-second sliding window. Every response includes these headers: | Header | Description | | --- | --- | | `RateLimit-Scope` | The rate limit scope for this endpoint | | `RateLimit-Limit` | Maximum requests allowed per window | | `RateLimit-Remaining` | Requests remaining in the current window | | `RateLimit-Reset` | Seconds until the rate limit window resets | Default rate limits per endpoint: | Endpoint | Scope | Default limit (requests/second) | | --- | --- | --- | | List scheduled jobs | `list-scheduled-jobs` | 10 | | Reserve jobs | `batch-reserve` | 10 | | Get a job | `show-job` | 1,000 | | Get job states | `batch-load-job-states` | 20 | | Finish a job | `finish-job` | 100 | | Create stack notifications | `stack-notification` | 200 | | De-register | `default` | 10 | When the rate limit is exceeded, the API returns `429 Too Many Requests`: ```json { "message": "You have exceeded your API rate limit. 
Please wait 1 seconds before making more requests.", "scope": "list-scheduled-jobs", "limit": 10, "current": 11, "reset": 1 } ``` ##### Error responses All error responses return a JSON object with a `message` field: ```json { "message": "Description of the error" } ``` Common error codes: | Status | Meaning | | --- | --- | | `400 Bad Request` | Invalid parameters (for example, `job_uuids` is not an array or `limit` is not a positive integer) | | `401 Unauthorized` | Missing, invalid, or expired token | | `403 Forbidden` | Stack limit exceeded for the organization | | `404 Not Found` | Stack, job, or self-hosted queue not found | | `422 Unprocessable Entity` | Validation failure (for example, missing required fields during registration) | | `429 Too Many Requests` | Rate limit exceeded | | `503 Service Unavailable` | Organization is temporarily unavailable | --- ### Internal job URL: https://buildkite.com/docs/apis/agent-api/internal-job #### Internal job API The internal job API is exposed locally by the agent for the currently running job. You can use this API to query and mutate the job's state through environment variables, making it easier to write scripts, hooks, and plugins that interact with the agent in languages other than Bash. This API uses a Unix domain socket, whose path is exposed to running jobs with the `BUILDKITE_AGENT_JOB_API_SOCKET` environment variable. Calls are authenticated using the Bearer HTTP Authorization scheme, with a token provided in the `BUILDKITE_AGENT_JOB_API_TOKEN` environment variable. The API provides the following endpoints: - `GET /api/current-job/v0/env`: Returns a JSON object of all environment variables for the current job. - `PATCH /api/current-job/v0/env`: Accepts a JSON object of environment variables to set for the current job. - `DELETE /api/current-job/v0/env`: Accepts a JSON array of environment variable names to unset for the current job.
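For the mutating endpoints, here is a hedged sketch of the request bodies in shell. The `DELETE` body is the JSON array of names described above, while the exact shape of the `PATCH` body is an assumption to verify against the `payloads.go` definitions in the agent source repository.

```shell
# Assumed request bodies for the env endpoints (verify against the
# agent's jobapi payloads.go):
patch_body='{"env": {"MY_CUSTOM_VAR": "value"}}'   # set/update variables
delete_body='["MY_CUSTOM_VAR"]'                    # unset variables by name

# Set a variable for the current job:
#   curl --unix-socket "$BUILDKITE_AGENT_JOB_API_SOCKET" \
#     -H "Authorization: Bearer $BUILDKITE_AGENT_JOB_API_TOKEN" \
#     -X PATCH -d "$patch_body" "http://job/api/current-job/v0/env"
# Unset it again:
#   curl --unix-socket "$BUILDKITE_AGENT_JOB_API_SOCKET" \
#     -H "Authorization: Bearer $BUILDKITE_AGENT_JOB_API_TOKEN" \
#     -X DELETE -d "$delete_body" "http://job/api/current-job/v0/env"
echo "$patch_body"
```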
An example `curl` call to the internal job API using the `GET` method would have the following format: ```bash curl --unix-socket "$BUILDKITE_AGENT_JOB_API_SOCKET" \ -X GET \ -H "Authorization: Bearer $BUILDKITE_AGENT_JOB_API_TOKEN" \ "http://job/api/current-job/v0/env" ``` where `http://job/...` is a placeholder hostname, required for HTTP over a Unix socket (`--unix-socket`) but otherwise ignored. This would return a response similar to the following: ```json { "env": { "BUILDKITE_PIPELINE_SLUG": "my-pipeline", "BUILDKITE_BUILD_NUMBER": "123", "MY_CUSTOM_VAR": "value" } } ``` See the [`payloads.go` file of the `agent` source repository](https://github.com/buildkite/agent/blob/main/jobapi/payloads.go) for the full API request and response definitions. The internal job API is unavailable on agents running versions of Windows before build 17063, which is when Windows added Unix domain socket support. If this feature is enabled on an unsupported Windows agent, the agent outputs a warning and the API is unavailable. --- ### Overview URL: https://buildkite.com/docs/apis/model-providers #### Model providers overview The _model providers_ feature provides [Buildkite agents](/docs/agent) with direct access to large language models (LLMs) through the Buildkite platform, enabling AI-powered workflows within your CI/CD environment. This feature provides secure, integrated access to LLMs, also known as _models_ or _AI models_, without requiring separate infrastructure setup. Local AI coding tools operate in isolation with limited context and no connection to your actual build environment.
Model providers solve this by bringing AI capabilities directly into your pipelines, where they have access to: - Build logs, artifacts, and pipeline history - Organizational security policies and audit trails - Real-time build context for informed decision-making Once you have connected your Buildkite organization to a model provider, your AI agents can then respond to build failures from Buildkite pipelines, optimize performance, and improve your pipelines automatically. Every step of your software delivery process can benefit from AI that understands your actual build context. ##### Connect to a model provider Connecting your Buildkite organization to an AI model through the Buildkite platform can only be done by [Buildkite organization administrators](/docs/platform/team-management/permissions#manage-teams-and-permissions-organization-level-permissions). Currently, only [Anthropic](/docs/apis/model-providers/anthropic) models are supported. To connect to a model provider: 1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. Select **Integrations > Model Providers** to access your organization's [**Model Providers**](https://buildkite.com/organizations/~/model-providers) page. 1. In **All Providers**, select the model provider to enable for your organization. 1. Choose your **Authentication Method**—[**Buildkite Hosted Token**](#connect-to-a-model-provider-buildkite-hosted-token) or [**Bring Your Own Token (BYO)**](#connect-to-a-model-provider-bring-your-own-token), depending on your security requirements and preferences. Your pipelines can then authenticate using existing Buildkite [job tokens](/docs/agent/self-hosted/tokens#additional-agent-tokens-job-tokens), which are accessible through the environment variable `$BUILDKITE_AGENT_ACCESS_TOKEN`.
Learn more about integrating the Anthropic model on the [Anthropic model provider](/docs/apis/model-providers/anthropic) page. ###### Buildkite hosted token With the **Buildkite Hosted Token** authentication option, you can start using AI models immediately. Buildkite handles the infrastructure and authentication, and therefore, there's no need to: - Create accounts with model providers. - Manage API keys or secrets. - Configure additional infrastructure. > 📘 > The Buildkite hosted token authentication option is only available to Buildkite customers on the [Pro or Enterprise](https://buildkite.com/pricing/) plan. ###### Bring your own token For organizations with existing model provider relationships or specific security requirements, the **Bring Your Own Token (BYO)** authentication option lets you: - Use your own API keys with AI model providers. - Maintain direct billing relationships. - Control API access and quotas. - Benefit from Buildkite's usage tracking and integration. Once configured, integrate AI capabilities into your build workflows using the Buildkite agent API. > 📘 > When using this authentication method, remember to use existing Buildkite [job tokens](/docs/agent/self-hosted/tokens#additional-agent-tokens-job-tokens) to authenticate the Buildkite agent to your model provider, and not your model provider's API access token.
###### Buildkite model provider API endpoints Once your [model provider has been connected](#connect-to-a-model-provider), your Buildkite agents can then interact directly with your connected model through the _Buildkite model provider API_ endpoints, which are based on this URL: ```url https://agent.buildkite.com/v3/ai ``` Or, using the [`$BUILDKITE_AGENT_ENDPOINT` environment variable](/docs/pipelines/configure/environment-variables#BUILDKITE_AGENT_ENDPOINT): ```url $BUILDKITE_AGENT_ENDPOINT/ai ``` Therefore, to interact with a specific model provider, such as Anthropic, append its name to the end of this model provider API endpoint: ```url $BUILDKITE_AGENT_ENDPOINT/ai/anthropic ``` ##### Monitoring usage To track your Buildkite organization's AI model usage through the Buildkite interface: 1. Select **Settings** in the global navigation to access the [**Organization Settings**](https://buildkite.com/organizations/~/settings) page. 1. Select **Usage** to access your Buildkite organization's [**Usage > Summary**](https://buildkite.com/organizations/~/usage) page. 1. Select the [**Model Providers**](https://buildkite.com/organizations/~/usage?product=model_providers) tab to view your model provider usage. --- ### Anthropic URL: https://buildkite.com/docs/apis/model-providers/anthropic #### Anthropic model provider The Anthropic model provider enables organizations to integrate Claude AI models into Buildkite pipelines. This model provider supports both [**Buildkite Hosted Tokens**](/docs/apis/model-providers#connect-to-a-model-provider-buildkite-hosted-token) and [**Bring Your Own Token (BYO)**](/docs/apis/model-providers#connect-to-a-model-provider-bring-your-own-token), providing flexible access to Anthropic's AI capabilities.
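Combining the agent endpoint with the Anthropic provider path gives the base URL for any supported Claude API call. A minimal sketch in shell, intended to run inside a job where the agent sets these variables (the default endpoint value shown is an assumption for illustration):

```shell
# BUILDKITE_AGENT_ENDPOINT is set by the agent inside a job; the fallback
# default here is assumed for illustration.
BUILDKITE_AGENT_ENDPOINT="${BUILDKITE_AGENT_ENDPOINT:-https://agent.buildkite.com/v3}"
models_url="$BUILDKITE_AGENT_ENDPOINT/ai/anthropic/v1/models"

# List the available Claude models, authenticated with the job token:
#   curl -H "Authorization: Token $BUILDKITE_AGENT_ACCESS_TOKEN" "$models_url"
echo "$models_url"
```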
##### Claude Code compatibility The Anthropic model provider is fully compatible with Claude Code, which allows you to run Claude Code directly within your Buildkite pipelines, enabling automated code generation, refactoring, and testing in your CI/CD environment. ###### Supported models Buildkite supports all current Anthropic Claude models, including Claude Sonnet 4.6, Claude Sonnet 4.5, Opus 4.1, and Haiku 4.5. ###### Using Claude Code in pipelines Claude Code's headless mode (`claude -p "prompt"`) lets you run Claude as a non-interactive step in Buildkite Pipelines. To connect Claude Code to the Buildkite model provider, set the following environment variables in your pipeline step:

```yaml
env:
  ANTHROPIC_BASE_URL: "$BUILDKITE_AGENT_ENDPOINT/ai/anthropic"
  ANTHROPIC_API_KEY: "$BUILDKITE_AGENT_ACCESS_TOKEN"
```

A basic pipeline example:

```yaml
steps:
  - label: "\:claude\: Code review"
    command: |
      claude -p "Review the changes in this PR and suggest improvements" \
        --permission-mode bypassPermissions
    env:
      ANTHROPIC_BASE_URL: "$BUILDKITE_AGENT_ENDPOINT/ai/anthropic"
      ANTHROPIC_API_KEY: "$BUILDKITE_AGENT_ACCESS_TOKEN"
```

The `--permission-mode bypassPermissions` flag is required for CI environments where there is no human to approve tool use prompts. ###### Running as a non-root user Claude Code refuses to run with `--permission-mode bypassPermissions` as the root user for security reasons. If your Buildkite agent runs as root, use `su` to switch to a non-root user:

```yaml
steps:
  - label: "\:claude\: Analyze failures"
    command: |
      su buildkite -c 'HOME=/home/buildkite claude -p "Analyze the build failures" --permission-mode bypassPermissions'
    env:
      ANTHROPIC_BASE_URL: "$BUILDKITE_AGENT_ENDPOINT/ai/anthropic"
      ANTHROPIC_API_KEY: "$BUILDKITE_AGENT_ACCESS_TOKEN"
```

> 🚧 > When using `su` or `su --preserve-environment`, the `HOME` environment variable may remain set to `/root`.
Since the non-root user cannot write to `/root`, Claude Code hangs silently when it tries to initialize its config directory (`~/.claude/`). Always set `HOME` explicitly to the target user's home directory inside the `su -c` command, as shown in the example above. ##### Base URL Once you have [connected your Buildkite organization to your Anthropic model provider](/docs/apis/model-providers#connect-to-a-model-provider), you can access your Anthropic Claude models through the [Claude API](https://platform.claude.com/docs/en/api/overview) by appending its endpoints to the relevant [Buildkite model provider API endpoint](/docs/apis/model-providers#connect-to-a-model-provider-buildkite-model-provider-api-endpoints) as the base URL: ```url https://agent.buildkite.com/v3/ai/anthropic ``` ###### Supported endpoints The following [Claude API](https://platform.claude.com/docs/en/api/overview) endpoints are available through the Buildkite model provider API: - [`POST /v1/messages` endpoint](https://platform.claude.com/docs/en/api/messages): Generates completions and chat responses. Token usage is automatically tracked for billing. - [`POST /v1/messages/count_tokens` endpoint](https://platform.claude.com/docs/en/api/messages/count_tokens): Calculates token usage before making requests to optimize costs. - [`GET /v1/models` endpoint](https://platform.claude.com/docs/en/api/models/list): Retrieves all available Anthropic models. - [`GET /v1/models/{model_id}` endpoint](https://platform.claude.com/docs/en/api/models/retrieve): Gets information about a specific model's capabilities and limits.
These endpoints are accessed by appending them to the end of your Buildkite model provider API's base URL—for example, to access the Claude API `POST /v1/messages` endpoint from your Buildkite agent, use the following URL: ```url https://agent.buildkite.com/v3/ai/anthropic/v1/messages ``` ##### Authentication methods The Anthropic model provider supports two authentication header formats, both of which use a [job token](/docs/agent/self-hosted/tokens#additional-agent-tokens-job-tokens) for authentication. ###### Authorization header (standard Agent API) ```bash -H "Authorization: Token $BUILDKITE_AGENT_ACCESS_TOKEN" ``` ###### x-api-key header (Claude SDK compatible) ```bash -H "x-api-key: $BUILDKITE_AGENT_ACCESS_TOKEN" ``` ##### Basic example Here's a simple pipeline step that analyzes the test failures in a build log:

```yaml
steps:
  - label: "Failure analysis"
    command: |
      curl -X POST "$BUILDKITE_AGENT_ENDPOINT/ai/anthropic/v1/messages" \
        -H "Content-Type: application/json" \
        -H "x-api-key: $BUILDKITE_AGENT_ACCESS_TOKEN" \
        -d '{
          "model": "claude-sonnet-4-5",
          "max_tokens": 1000,
          "system": "...",
          "messages": [
            { "role": "user", "content": "Analyze the test failures in this log" }
          ]
        }'
```

##### Rate limits The following rate limits apply to Anthropic API requests: ###### Request rate limiting - **Default limit**: 50 requests per minute ###### Input token rate limiting - **Default limit**: 50,000 input tokens per minute per provider. - **Token calculation**: `total_input_token = cache_creation_input_tokens + input_tokens`. To request a higher rate limit for your Buildkite organization, please contact support@buildkite.com. ##### Response formats The Anthropic model provider supports both: - **Non-streaming responses**: Complete responses returned after processing. - **Streaming responses**: Real-time response chunks for long-running completions. ---