buildkite-agent artifact

The Buildkite Agent’s artifact command supports uploading and downloading build artifacts, allowing you to share binary data between build steps no matter the machine or network.

See the Using Build Artifacts guide for a step-by-step example.

Uploading artifacts

You can use this command in your build scripts to store artifacts. Artifacts are accessible using the web interface and can be downloaded by future build steps. Artifacts can be stored in the Buildkite-managed artifact store, or your own storage location, depending on how you have configured your Buildkite Agent.

For documentation on configuring a custom storage location, see the sections below on using your private AWS S3 bucket, Google Cloud bucket, or Artifactory instance.

You can also configure the agent to automatically upload artifacts after your step’s command has completed based on a file pattern (see the Using Build Artifacts guide for details).
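For example, a pipeline step can declare artifact_paths so the agent uploads matching files automatically when the step's command finishes. The label, command, and paths below are illustrative:

```yaml
steps:
  - label: "test"
    command: "make test"
    artifact_paths:
      - "log/**/*.log"
      - "coverage/**/*"
```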

Usage

buildkite-agent artifact upload [options] <pattern> [destination]

Description

Uploads files to a job as artifacts.

Make sure the paths are surrounded by quotes; otherwise the shell's built-in path globbing will expand the pattern itself and pass a list of files to the command, which is not supported.

You can specify an alternate destination on Amazon S3, Google Cloud Storage or Artifactory as per the examples below. This may be specified in the 'destination' argument, or in the 'BUILDKITE_ARTIFACT_UPLOAD_DESTINATION' environment variable. Otherwise, artifacts are uploaded to a Buildkite-managed Amazon S3 bucket, where they’re retained for six months.

Example

$ buildkite-agent artifact upload "log/**/*.log"

You can also upload directly to Amazon S3 if you'd like to host your own artifacts:

$ export BUILDKITE_S3_ACCESS_KEY_ID=xxx
$ export BUILDKITE_S3_SECRET_ACCESS_KEY=yyy
$ export BUILDKITE_S3_DEFAULT_REGION=eu-central-1 # default is us-east-1
$ export BUILDKITE_S3_ACL=private # default is public-read
$ buildkite-agent artifact upload "log/**/*.log" s3://name-of-your-s3-bucket/$BUILDKITE_JOB_ID

You can use Amazon IAM assumed roles by specifying the session token:

$ export BUILDKITE_S3_SESSION_TOKEN=zzz

Or upload directly to Google Cloud Storage:

$ export BUILDKITE_GS_ACL=private
$ buildkite-agent artifact upload "log/**/*.log" gs://name-of-your-gs-bucket/$BUILDKITE_JOB_ID

Or upload directly to Artifactory:

$ export BUILDKITE_ARTIFACTORY_URL=http://my-artifactory-instance.com/artifactory
$ export BUILDKITE_ARTIFACTORY_USER=carol-danvers
$ export BUILDKITE_ARTIFACTORY_PASSWORD=xxx
$ buildkite-agent artifact upload "log/**/*.log" rt://name-of-your-artifactory-repo/$BUILDKITE_JOB_ID

Options

  • --job value - Which job should the artifacts be uploaded to [$BUILDKITE_JOB_ID]
  • --content-type value - A specific Content-Type to set for the artifacts (otherwise detected) [$BUILDKITE_ARTIFACT_CONTENT_TYPE]
  • --agent-access-token value - The access token used to identify the agent [$BUILDKITE_AGENT_ACCESS_TOKEN]
  • --endpoint value - The Agent API endpoint (default: "https://agent.buildkite.com/v3") [$BUILDKITE_AGENT_ENDPOINT]
  • --no-http2 - Disable HTTP2 when communicating with the Agent API. [$BUILDKITE_NO_HTTP2]
  • --debug-http - Enable HTTP debug mode, which dumps all request and response bodies to the log [$BUILDKITE_AGENT_DEBUG_HTTP]
  • --no-color - Don't show colors in logging [$BUILDKITE_AGENT_NO_COLOR]
  • --debug - Enable debug mode [$BUILDKITE_AGENT_DEBUG]
  • --experiment value - Enable experimental features within the buildkite-agent [$BUILDKITE_AGENT_EXPERIMENT]
  • --profile value - Enable a profiling mode, either cpu, memory, mutex or block [$BUILDKITE_AGENT_PROFILE]
  • --follow-symlinks - Follow symbolic links while resolving globs [$BUILDKITE_AGENT_ARTIFACT_SYMLINKS]

Artifact upload examples

Uploading a specific file:

buildkite-agent artifact upload log/test.log

Uploading all the jpegs and pngs, in all folders and subfolders:

buildkite-agent artifact upload "*/**/*.jpg;*/**/*.jpeg;*/**/*.png"

Uploading all the log files in the log folder:

buildkite-agent artifact upload "log/*.log"

Uploading all the files and folders inside the coverage directory:

buildkite-agent artifact upload "coverage/**/*"

Uploading a file name with special characters, for example, hello??.html:

buildkite-agent artifact upload "hello\?\?.html"

Artifact upload glob syntax

Glob path patterns are used throughout Buildkite for specifying artifact uploads.

The source path you supply to the upload command will be replicated exactly at the destination. If you run:

buildkite-agent artifact upload log/test.log

Buildkite will store the file at log/test.log. If you want it to be stored as test.log without the full path, then you'll need to change into the file's directory before running your upload command:

cd log
buildkite-agent artifact upload test.log

Keep in mind while you’re writing your path pattern:

  • patterns must match whole path strings, not just substrings
  • there are two wildcards available that match non-separator characters (on Linux / is a separator character, and on Windows \ is a separator character):
    • * to match a sequence of characters
    • ? to match a single character
  • character ranges surrounded by [] support the ^ as a negator
  • special characters can be escaped with \\
  • multiple paths are separated with ;
  • surround the pattern with quotes
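The quoting rule above can be seen with plain shell commands, using printf as a stand-in for the upload command. The demo directory and file names are illustrative:

```shell
# Scratch directory with a top-level log and a nested log.
mkdir -p demo/log/nested
touch demo/log/a.log demo/log/nested/b.log
cd demo

# Unquoted: the shell expands the glob itself and passes each matching
# file as a separate argument (not what the upload command expects).
printf '%s\n' log/*.log

# Quoted: the literal pattern string reaches the command unexpanded, so
# the agent performs the globbing, including ** across subdirectories.
printf '%s\n' "log/**/*.log"
```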

Downloading artifacts

Use this command in your build scripts to download artifacts.

Usage

buildkite-agent artifact download [options] <query> <destination>

Description

Downloads artifacts matching <query> from Buildkite to <destination> directory on the local machine.

Note: If your search query contains a wildcard, surround it with quotes; otherwise the shell's built-in path globbing will expand the wildcard and break the query.

If the last path component of <destination> matches the first path component of your <query>, the last component of <destination> is dropped from the final path. For example, a query of 'app/logs/*' with a destination of 'foo/app' will write any matched artifact files to 'foo/app/logs/', relative to the current working directory.

You can also change working directory to the intended destination and use a <destination> of '.' to always create a directory hierarchy matching the artifact paths.
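Upload and download are typically paired across steps, with a wait step in between so the artifacts exist before the download runs. A hypothetical pipeline (step labels, commands, and paths are illustrative) might look like:

```yaml
steps:
  - label: "build"
    command: "make dist"
    artifact_paths: "pkg/*.tar.gz"

  - wait

  - label: "deploy"
    command: |
      buildkite-agent artifact download "pkg/*.tar.gz" .
      ./scripts/deploy.sh
```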

Example

$ buildkite-agent artifact download "pkg/*.tar.gz" . --build xxx

This will search across all the artifacts for the build for files that match that query. The first argument is the search query, and the second argument is the download destination.

If you're trying to download a specific file, and there are multiple artifacts from different jobs, you can target the particular job you want to download the artifact from:

$ buildkite-agent artifact download "pkg/*.tar.gz" . --step "tests" --build xxx

You can also use the step's job ID (provided by the environment variable $BUILDKITE_JOB_ID).

Options

  • --step value - Scope the search to a particular step by using either its name or job ID
  • --build value - The build that the artifacts were uploaded to [$BUILDKITE_BUILD_ID]
  • --include-retried-jobs - Include artifacts from retried jobs in the search [$BUILDKITE_AGENT_INCLUDE_RETRIED_JOBS]
  • --agent-access-token value - The access token used to identify the agent [$BUILDKITE_AGENT_ACCESS_TOKEN]
  • --endpoint value - The Agent API endpoint (default: "https://agent.buildkite.com/v3") [$BUILDKITE_AGENT_ENDPOINT]
  • --no-http2 - Disable HTTP2 when communicating with the Agent API. [$BUILDKITE_NO_HTTP2]
  • --debug-http - Enable HTTP debug mode, which dumps all request and response bodies to the log [$BUILDKITE_AGENT_DEBUG_HTTP]
  • --no-color - Don't show colors in logging [$BUILDKITE_AGENT_NO_COLOR]
  • --debug - Enable debug mode [$BUILDKITE_AGENT_DEBUG]
  • --experiment value - Enable experimental features within the buildkite-agent [$BUILDKITE_AGENT_EXPERIMENT]
  • --profile value - Enable a profiling mode, either cpu, memory, mutex or block [$BUILDKITE_AGENT_PROFILE]

Artifact download examples

Downloading a specific file into the current directory:

buildkite-agent artifact download build.zip .

Downloading a specific file into a specific directory (note the trailing slash):

buildkite-agent artifact download build.zip tmp/

Downloading all the files uploaded to log (including all subdirectories) into a local log directory (note that local directories will be created to match the uploaded file paths):

buildkite-agent artifact download "log/*" .

Downloading all the files uploaded to coverage (including all subdirectories) into a local tmp/coverage directory (note that local directories are created to match the uploaded file path):

buildkite-agent artifact download "coverage/*" tmp/

Downloading all images (from any directory) into a local images/ directory (note that local directories are created to match the uploaded file path, and that you can run multiple download commands into the same directory):

buildkite-agent artifact download "*.jpg" images/
buildkite-agent artifact download "*.jpeg" images/
buildkite-agent artifact download "*.png" images/

Artifact download pattern syntax

Artifact downloads support pattern-matching using the * character.

Unlike artifact upload glob patterns, these operate over the entire path and not just between separator characters. For example, a download path pattern of log/* matches all files under the log directory and all subdirectories.

There is no need to escape characters such as ?, [ and ].

Downloading artifacts outside a running build

The buildkite-agent artifact download command relies on environment variables that are set by the agent while a build is running.

For example, executing the buildkite-agent artifact download command on your local machine would return an error about missing environment variables. However, when this command is executed as part of a build, the agent has set the required variables, and the command will be able to run.

If you want to download an artifact from outside a build use our Artifact Download API.
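As a sketch, an artifact can be fetched outside a build with the Buildkite REST API and an API access token. Every identifier below (organization and pipeline slugs, build number, job and artifact IDs) is a placeholder you would substitute with real values:

```shell
# Follows the redirect to the artifact's storage location and saves it locally.
curl -H "Authorization: Bearer $BUILDKITE_API_TOKEN" \
  -L "https://api.buildkite.com/v2/organizations/my-org/pipelines/my-pipeline/builds/42/jobs/JOB_ID/artifacts/ARTIFACT_ID/download" \
  -o artifact.bin
```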

Searching artifacts

Return a list of artifacts that match a query.

Usage

buildkite-agent artifact search [options] <query>

Description

Searches for build artifacts specified by <query> on Buildkite

Note: If your search query contains a wildcard, surround it with quotes; otherwise the shell's built-in path globbing will expand the wildcard and break the search.

Example

$ buildkite-agent artifact search "pkg/*.tar.gz" --build xxx

This will search across all uploaded artifacts in a build for files that match that query. The first argument is the search query.

If you're trying to find a specific file, and there are multiple artifacts from different jobs, you can target the particular job you want to search the artifacts from using --step:

$ buildkite-agent artifact search "pkg/*.tar.gz" --step "tests" --build xxx

You can also use the step's job ID (provided by the environment variable $BUILDKITE_JOB_ID).

Output formatting can be altered with the --format flag as follows:

$ buildkite-agent artifact search "*" --format "%p\n"

The above will return a list of artifact paths, one per line.
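The "%p\n" format makes search output easy to consume in a shell loop. The snippet below uses a stand-in variable so it is self-contained; in a real job you would pipe the search command itself into the loop, and the paths shown are hypothetical:

```shell
# Stand-in for the output of:
#   buildkite-agent artifact search "*" --format "%p\n"
search_output='pkg/app.tar.gz
pkg/app.sha1'

# Process each artifact path on its own line.
printf '%s\n' "$search_output" | while IFS= read -r path; do
  echo "found artifact: $path"
done
```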

Options

  • --step value - Scope the search to a particular step by using either its name or job ID
  • --build value - The build that the artifacts were uploaded to [$BUILDKITE_BUILD_ID]
  • --include-retried-jobs - Include artifacts from retried jobs in the search [$BUILDKITE_AGENT_INCLUDE_RETRIED_JOBS]
  • --format value - Output formatting of results. Defaults to "%j %p %c\n" (Job ID, path, created at time).

The following directives are available:

%i UUID of the artifact

%p Artifact path

%c Artifact creation time (an ISO 8601 / RFC-3339 formatted UTC timestamp)

%j UUID of the job that uploaded the artifact, helpful for subsequent artifact downloads

%s File size of the artifact in bytes

%S SHA1 checksum of the artifact

%u Download URL for the artifact, though consider using 'buildkite-agent artifact download' instead

Additional options:

  • --agent-access-token value - The access token used to identify the agent [$BUILDKITE_AGENT_ACCESS_TOKEN]
  • --endpoint value - The Agent API endpoint (default: "https://agent.buildkite.com/v3") [$BUILDKITE_AGENT_ENDPOINT]
  • --no-http2 - Disable HTTP2 when communicating with the Agent API. [$BUILDKITE_NO_HTTP2]
  • --debug-http - Enable HTTP debug mode, which dumps all request and response bodies to the log [$BUILDKITE_AGENT_DEBUG_HTTP]
  • --no-color - Don't show colors in logging [$BUILDKITE_AGENT_NO_COLOR]
  • --debug - Enable debug mode [$BUILDKITE_AGENT_DEBUG]
  • --experiment value - Enable experimental features within the buildkite-agent [$BUILDKITE_AGENT_EXPERIMENT]
  • --profile value - Enable a profiling mode, either cpu, memory, mutex or block [$BUILDKITE_AGENT_PROFILE]

Fetching the SHA of an artifact

Use this command in your build scripts to verify downloaded artifacts against the original SHA-1 of the file.

Usage

buildkite-agent artifact shasum [options...]

Description

Prints to STDOUT the SHA-1 checksum of the artifact matching the provided query. If your search query matches more than one artifact, an error is raised.

Note: If your search query contains a wildcard, surround it with quotes; otherwise the shell's built-in path globbing will expand the wildcard and break the query.

Example

$ buildkite-agent artifact shasum "pkg/release.tar.gz" --build xxx

This will search the build for an artifact with the path "pkg/release.tar.gz" and print its SHA-1 checksum to STDOUT.

If you would like to target artifacts from a specific build step, you can do so by using the --step argument.

$ buildkite-agent artifact shasum "pkg/release.tar.gz" --step "release" --build xxx

You can also use the step's job ID (provided by the environment variable $BUILDKITE_JOB_ID).
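A common use of shasum is verifying a downloaded file against the recorded checksum. The sketch below is self-contained, so the expected value is computed locally; in a real job you would instead fetch it with something like `expected=$(buildkite-agent artifact shasum "pkg/release.tar.gz" --build "$BUILDKITE_BUILD_ID")`. The file name and contents are illustrative:

```shell
# Stand-in for the checksum the agent would report for the uploaded artifact.
printf 'example artifact contents' > release.tar.gz
expected=$(sha1sum release.tar.gz | awk '{print $1}')

# Verify the file on disk against the expected checksum.
actual=$(sha1sum release.tar.gz | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum mismatch for release.tar.gz" >&2
  exit 1
fi
```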

Options

  • --step value - Scope the search to a particular step by using either its name or job ID
  • --build value - The build that the artifacts were uploaded to [$BUILDKITE_BUILD_ID]
  • --include-retried-jobs - Include artifacts from retried jobs in the search [$BUILDKITE_AGENT_INCLUDE_RETRIED_JOBS]
  • --agent-access-token value - The access token used to identify the agent [$BUILDKITE_AGENT_ACCESS_TOKEN]
  • --endpoint value - The Agent API endpoint (default: "https://agent.buildkite.com/v3") [$BUILDKITE_AGENT_ENDPOINT]
  • --no-http2 - Disable HTTP2 when communicating with the Agent API. [$BUILDKITE_NO_HTTP2]
  • --debug-http - Enable HTTP debug mode, which dumps all request and response bodies to the log [$BUILDKITE_AGENT_DEBUG_HTTP]
  • --no-color - Don't show colors in logging [$BUILDKITE_AGENT_NO_COLOR]
  • --debug - Enable debug mode [$BUILDKITE_AGENT_DEBUG]
  • --experiment value - Enable experimental features within the buildkite-agent [$BUILDKITE_AGENT_EXPERIMENT]
  • --profile value - Enable a profiling mode, either cpu, memory, mutex or block [$BUILDKITE_AGENT_PROFILE]

Using your private AWS S3 bucket

You can configure the buildkite-agent artifact command to store artifacts in your private Amazon S3 bucket. To do so, you’ll need to export some artifact environment variables.

  • BUILDKITE_ARTIFACT_UPLOAD_DESTINATION (required) - An S3 scheme URL for the bucket and path prefix, for example s3://your-bucket/path/prefix/
  • BUILDKITE_S3_DEFAULT_REGION (optional) - The AWS Region of your S3 bucket. If absent or blank, buildkite-agent will also consult AWS_REGION, then AWS_DEFAULT_REGION, and finally the EC2 instance metadata service.
  • BUILDKITE_S3_ACL (optional, default public-read) - The S3 Object ACL to apply to uploads, one of private, public-read, public-read-write, authenticated-read, bucket-owner-read, or bucket-owner-full-control.
  • BUILDKITE_S3_SSE_ENABLED (optional, default false) - If true, bucket uploads request AES256 server-side encryption.
  • BUILDKITE_S3_ACCESS_URL (optional, default https://$bucket.s3.amazonaws.com) - If set, overrides the base URL used for the artifact’s location stored with the Buildkite API.

You can set these environment variables from a variety of places. Exporting them from an environment hook defined in your agent hooks-path directory ensures they are applied to all jobs:

export BUILDKITE_ARTIFACT_UPLOAD_DESTINATION="s3://name-of-your-s3-bucket/$BUILDKITE_PIPELINE_ID/$BUILDKITE_BUILD_ID/$BUILDKITE_JOB_ID"
export BUILDKITE_S3_DEFAULT_REGION="eu-central-1" # default: us-east-1

IAM Permissions

Make sure your agent instances have the following IAM policy to read and write objects in the bucket, for example:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:GetObjectVersionAcl",
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:PutObjectVersionAcl"
            ],
            "Resource": [
               "arn:aws:s3:::my-s3-bucket",
               "arn:aws:s3:::my-s3-bucket/*"
            ]
        }
    ]
}

If you are using the Elastic CI Stack for AWS, provide your bucket name in the ArtifactsBucket template parameter for an appropriate IAM policy to be included in the instance’s IAM role.

Credentials

buildkite-agent artifact upload will use the first available AWS credentials from the following locations:

  • Buildkite environment variables, BUILDKITE_S3_ACCESS_KEY_ID, BUILDKITE_S3_SECRET_ACCESS_KEY, BUILDKITE_S3_SESSION_TOKEN
  • AWS environment variables, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
  • Web Identity environment variables, AWS_ROLE_ARN, AWS_ROLE_SESSION_NAME, AWS_WEB_IDENTITY_TOKEN_FILE
  • EC2 or ECS role, your EC2 instance or ECS task’s IAM Role

If your agents are running on an AWS EC2 instance, adding the policy above to the instance’s IAM role and using the instance profile credentials is the most secure option, as there are no long-lived credentials to manage.

If your agents are running outside of AWS, or you’re unable to use an instance profile, you can export long-lived credentials belonging to an IAM user using one of the environment variable groups listed above. See the Managing Pipeline Secrets documentation for how to securely set up these environment variables.

Access Control

By default the agent creates objects with the public-read ACL, which allows the artifact links in the Buildkite web interface to show the S3 object directly in the browser. You can set this to private instead by exporting a value for the BUILDKITE_S3_ACL environment variable:

export BUILDKITE_S3_ACL="private"

If you set your S3 ACL to private you won't be able to click through to the artifacts in the Buildkite web interface. You can use an authenticating S3 proxy such as aws-s3-proxy to provide web access protected by HTTP Basic authentication, which will allow you to view embedded assets such as HTML pages with images. To set the access URL for your artifacts, export a value for the BUILDKITE_S3_ACCESS_URL environment variable:

export BUILDKITE_S3_ACCESS_URL="https://buildkite-artifacts.example.com/"

Using your private Google Cloud bucket

You can configure the buildkite-agent artifact command to store artifacts in your private Google Cloud storage bucket. For instructions for how to set this up, see our Google Cloud Installation Guide.

Using your Artifactory instance

You can configure the buildkite-agent artifact command to store artifacts in Artifactory. For instructions for how to set this up, see our Artifactory Guide.