MCP tools overview

MCP tools are the fundamental components of an MCP server, providing the mechanisms through which AI tools and agents can access a system's APIs.

Learn more about MCP tools in the Core Server Features and Tools sections of the Understanding MCP servers page in the Model Context Protocol docs.

Available MCP tools

The Buildkite MCP server exposes the following categories of MCP tools.

You typically do not need to use the names of these tools (for example, list_pipelines) in direct prompts to AI tools or agents. However, each MCP tool name is designed to be understandable, so you can use it directly in a prompt when you want your AI tool or agent to explicitly use that tool to query the Buildkite platform.

As part of configuring your AI tool or agent with the remote or local Buildkite MCP server, you can restrict its access to specific categories of tools using toolsets.

Although Buildkite's MCP server makes calls to the Buildkite REST API, note that in some cases, only a subset of the resulting fields is returned in the response to your AI tool or agent. This reduces noise for your AI tool or agent, as well as the costs associated with tokenizing the response text (also known as token usage).
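As a rough sketch of why this trimming helps, a server can filter a response down to an allowlist of fields before returning it. The helper, field names, and sample payload below are hypothetical illustrations, not Buildkite's actual behavior:

```python
# Illustrative sketch of response trimming: keep only a subset of fields
# from a REST API response before passing it to the AI tool or agent.
# The field names and sample payload are hypothetical, not the MCP
# server's actual allowlist.

def trim_response(response: dict, keep: set[str]) -> dict:
    """Return only the fields worth spending tokens on."""
    return {key: value for key, value in response.items() if key in keep}

# A fake, oversized REST response.
full_build = {
    "id": "abc123",
    "state": "passed",
    "web_url": "https://buildkite.com/acme/deploy/builds/42",
    "jobs": [{"id": "job-1", "log_payload": "thousands of lines"}],
    "env": {"DEBUG": "1"},
}

print(trim_response(full_build, keep={"id", "state", "web_url"}))
```

Dropping bulky fields such as full job payloads before the response reaches the model is what keeps per-query token usage low.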

User, authentication and Buildkite organization

These MCP tools are associated with authentication, and are used to query details about the user who owns the access token and the Buildkite organization they belong to.

Tool Description
access_token

Uses the Get the current token REST API endpoint to retrieve information about the current API access token, including its scopes and UUID.

current_user

Uses the Get the current user REST API endpoint to retrieve details about the user account that owns the API token, including name, email, avatar, and account creation date.

Required token scope: read_user.

user_token_organization

Uses the Get an organization REST API endpoint to retrieve details about the Buildkite organization associated with the user token used for this request.

Required token scope: read_organizations.

Buildkite clusters

These MCP tools are used to retrieve details about the clusters and their queues configured in your Buildkite organization. Learn more about clusters in Clusters overview.

Tool Description
list_clusters

Uses the List clusters REST API endpoint to list all clusters in an organization with their names, descriptions, default queues, and creation details.

Required token scope: read_clusters.

get_cluster

Uses the Get a cluster REST API endpoint to retrieve detailed information about a specific cluster including its name, description, default queue, and configuration.

Required token scope: read_clusters.

list_cluster_queues

Uses the List queues REST API endpoint to list all queues in a cluster with their keys, descriptions, dispatch status, and agent configuration.

Required token scope: read_clusters.

get_cluster_queue

Uses the Get a queue REST API endpoint to retrieve detailed information about a specific queue including its key, description, dispatch status, and hosted agent configuration.

Required token scope: read_clusters.

Pipelines

These MCP tools are used to retrieve details about existing pipelines in your Buildkite organization, as well as to create new pipelines and update existing ones.

Tool Description
list_pipelines

Uses the List pipelines REST API endpoint to list all pipelines in an organization with their basic details, build counts, and current status.

Required token scope: read_pipelines.

get_pipeline

Uses the Get a pipeline REST API endpoint to retrieve detailed information about a specific pipeline including its configuration, steps, environment variables, and build statistics.

Required token scope: read_pipelines.

create_pipeline

Uses the Create a YAML pipeline REST API endpoint to set up a new CI/CD pipeline in Buildkite with YAML configuration, repository connection, and cluster assignment.

Required token scope: write_pipelines.

update_pipeline

Uses the Update a pipeline REST API endpoint to modify an existing Buildkite pipeline's configuration, repository, settings, or metadata.

Required token scope: write_pipelines.

Builds

These MCP tools are used to retrieve details about existing builds of a pipeline, as well as to create new builds and wait for a specific build to finish.

Tool Description
list_builds

Uses the List all builds REST API endpoint to list all builds for a pipeline with their status, commit information, and metadata.

Required token scope: read_builds.

get_build

Uses the Get a build REST API endpoint to retrieve detailed information about a specific build including its jobs, timing, and execution details.

Required token scope: read_builds.

create_build

Uses the Create a build REST API endpoint to trigger a new build on a Buildkite pipeline for a specific commit and branch, with optional environment variables, metadata, and author information.

Required token scope: write_builds.

wait_for_build

Waits for a specific build to finish. This tool uses the Get a build REST API endpoint to retrieve the build's current status. If the build is still running, the wait_for_build tool automatically calls this same endpoint again, and does so repeatedly with decreasing frequency (to reduce token usage and traffic) until the returned build status indicates that the build has completed.

Required token scope: read_builds.
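The polling pattern that wait_for_build describes can be sketched as follows. The state names, back-off intervals, and fetch callback are illustrative assumptions, not the MCP server's actual implementation:

```python
# Sketch of decreasing-frequency polling: re-fetch the build's state at
# increasingly spaced intervals until it reaches a terminal state.
import time

TERMINAL_STATES = {"passed", "failed", "canceled"}  # illustrative

def wait_for_build(fetch_build_state, interval: float = 1.0,
                   max_interval: float = 60.0) -> str:
    while True:
        state = fetch_build_state()
        if state in TERMINAL_STATES:
            return state
        time.sleep(interval)
        # Back off: poll less often the longer the build keeps running.
        interval = min(interval * 2, max_interval)

# Simulated build that finishes on the third poll.
states = iter(["running", "running", "passed"])
print(wait_for_build(lambda: next(states), interval=0.01))  # passed
```

Doubling the interval up to a cap keeps a long build from generating a steady stream of identical status responses, each of which would otherwise cost tokens.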

Jobs

These MCP tools are used to retrieve the logs of jobs from a pipeline build, as well as unblock jobs in a pipeline build. A job's logs can then be processed by the logs tools of the MCP server, for the benefit of your AI tool or agent.

Tool Description
get_job_logs

Uses the Get a job's log output REST API endpoint to get the log output and metadata for a specific job, including content, size, and header timestamps. Automatically saves to file for large logs to avoid token limits.

Required token scope: read_build_logs.

unblock_job

Uses the Unblock a job REST API endpoint to unblock a blocked job in a Buildkite build to allow it to continue execution.

Required token scope: write_builds.

Logs

These MCP tools are used to process the logs of jobs, for the benefit of your AI tool or agent. They leverage the Buildkite Logs Search & Query Library, which converts the complex logs returned by the Buildkite platform into Parquet files, making the logs more consumable for AI tools, agents, and large language models (LLMs).

For improved performance, these Parquet log files are also cached and stored. Learn more about this in Smart caching and storage.

Tool Description
search_logs

Search log entries using regex patterns with optional context lines.

tail_logs

Show the last N entries from the log file (useful for checking recent errors and status).

get_logs_info

Get metadata and statistics about the Parquet log file.

read_logs

Read log entries from the file, optionally starting from a specific row number.
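Conceptually, search_logs behaves like a regex scan that returns each match with its surrounding context lines. The sketch below illustrates that behavior over a plain list of lines; the real tool operates on the Parquet log files and its parameters may differ:

```python
# Illustrative sketch of regex search with context lines over log entries.
import re

def search_logs(lines: list[str], pattern: str,
                context: int = 1) -> list[list[str]]:
    """Return, for each matching line, the match plus its context lines."""
    regex = re.compile(pattern)
    results = []
    for i, line in enumerate(lines):
        if regex.search(line):
            start = max(0, i - context)
            results.append(lines[start : i + context + 1])
    return results

log = [
    "--- Running tests",
    "test_login ok",
    "test_checkout FAILED: timeout",
    "--- Uploading artifacts",
]
print(search_logs(log, r"FAILED", context=1))
```

Returning a small window around each match, rather than the whole log, is what keeps the response consumable for an LLM.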

Artifacts

These MCP tools are used to retrieve details about artifacts from a pipeline build, as well as obtain the artifacts themselves.

Tool Description
list_artifacts

Uses the List artifacts for a build REST API endpoint to list a build's artifacts across all of its jobs, including file details, paths, sizes, MIME types, and download URLs.

Required token scope: read_artifacts.

get_artifact

Uses the Get an artifact REST API endpoint to get detailed information about a specific artifact including its metadata, file size, SHA-1 hash, and download URL.

Required token scope: read_artifacts.

Annotations

These MCP tools are used to retrieve details about the annotations resulting from a pipeline build.

Tool Description
list_annotations

Uses the List annotations for a build REST API endpoint to list all annotations for a build, including their context, style (success/info/warning/error), rendered HTML content, and creation timestamps.

Required token scope: read_builds.

Test Engine

These MCP tools are used to retrieve details about Test Engine tests and their runs from a test suite, along with other Test Engine-related data.

Tool Description
get_test

Uses the Get a test REST API endpoint to retrieve a specific test in Buildkite Test Engine. This provides additional metadata for failed test executions.

Required token scope: read_suites.

list_test_runs

Uses the List all runs REST API endpoint to list all test runs for a test suite in Buildkite Test Engine.

Required token scope: read_suites.

get_test_run

Uses the Get a run REST API endpoint to retrieve a specific test run in Buildkite Test Engine.

Required token scope: read_suites.

get_failed_executions

Uses the Get failed execution data REST API endpoint to retrieve failed test executions for a specific test run in Buildkite Test Engine. Optionally retrieves the expanded failure details such as full error messages and stack traces.

Required token scope: read_suites.

get_build_test_engine_runs

Get Test Engine runs data for a specific build in Buildkite Pipelines. This can be used to look up test runs.

Smart caching and storage

To improve performance when accessing log data from the Buildkite platform, the Buildkite MCP server downloads the logs of jobs and stores them in Parquet file format in one of the locations described below.

These Parquet log files are stored and managed by the MCP server, and all interactions with them happen through the MCP server's logs tools.

If the job is in a terminal state (for example, the job completed successfully, failed, or was canceled), then the job's Parquet format logs are downloaded and stored indefinitely.

If the job is in a non-terminal state (for example, the job is still running or is blocked), then the job's Parquet logs are retained for 30 seconds.

Storage locations

If you are running the local MCP server, the following table indicates the default locations for these Parquet log files.

Environment Default Parquet log file location

A physical machine (for example, a desktop or laptop computer)

The .bklog sub-directory of the home directory.

A containerized environment (for example, using Docker or Kubernetes)

The /tmp/bklog directory under the root of the file system.

You can override these default Parquet log file locations with the $BKLOG_CACHE_URL environment variable, which accepts either a local file system path or an s3:// URL; the latter may be better suited for pipeline usage. For example:

# Local development with persistent cache
export BKLOG_CACHE_URL="file:///Users/me/bklog-cache"

# Shared cache across build agents
export BKLOG_CACHE_URL="s3://ci-logs-cache/buildkite/"
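Assuming the server dispatches on the URL scheme, the two forms above can be distinguished with a standard URL parser. This dispatch is a sketch of the idea, not the MCP server's actual code:

```python
# Sketch of interpreting a BKLOG_CACHE_URL value by its scheme, using the
# Python standard library's URL parser. The handling here is illustrative.
from urllib.parse import urlparse

def describe_cache_url(url: str) -> str:
    parsed = urlparse(url)
    if parsed.scheme == "file":
        return f"local cache at {parsed.path}"
    if parsed.scheme == "s3":
        return f"shared S3 cache in bucket {parsed.netloc}"
    raise ValueError(f"unsupported cache URL scheme: {parsed.scheme!r}")

print(describe_cache_url("file:///Users/me/bklog-cache"))
print(describe_cache_url("s3://ci-logs-cache/buildkite/"))
```

An S3-backed location lets multiple build agents share one cache, whereas a file:// path keeps the cache private to a single machine.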