---
title: "Post-incident review for 20th October 2025"
date: "2025-10-24"
author: "The Buildkite Team"
description: "How the recent AWS outage affected Buildkite services, how we responded, and what we learned."
readingTime: "3 minute read"
---

# Post-incident review for 20th October 2025

How the recent AWS outage affected Buildkite services, how we responded, and what we learned.

On October 20th, 2025, customers experienced high latency and increased error rates across the majority of our services as a consequence of the [major us-east-1 AWS outage](https://aws.amazon.com/message/101925/).

Initially there was no customer impact, and customers continued to run builds as usual. When traffic began to increase at the start of US business hours, our services were unable to scale up to meet the increased load, as the AWS outage prevented us from provisioning any additional server capacity.

The impact on error rates and response times varied between accounts, because our shards of compute capacity reached their limits at different times.

## Impact details

All customers saw latency in the Buildkite web interface between 17:00 – 19:20 UTC, during which the error rate also spiked to 8%. Latency averaged 1.5 seconds, but some customers saw up to 7 seconds.

*Image: Dashboard latency during the incident*

*Image: Error rates (with p90 and p95 latencies) during the incident*

Because of our sharded architecture, the impact on job dispatch and notifications differed between customers. These two latencies together make up the core workflow that customers rely on.

The worst-affected customers experienced over an hour's delay in job dispatch times, combined with an additional hour of notification latency, between 13:00 – 20:30 UTC. The majority of our customers experienced only minor delays to job dispatch and notifications during this window.

*Image: Job dispatch by shard during the incident*

*Image: Notification latency during the incident*

*Image: Notification latency (p99) by shard during the incident*

Latency on the REST and GraphQL APIs increased between 13:00 – 16:30 UTC, affecting all customers. Between 18:00 – 19:40 UTC, error rates also increased to up to 10% of all customer requests.

This resulted in significant delays for any customers that depend on our public APIs ([api.buildkite.com](http://api.buildkite.com) and [graphql.buildkite.com](http://graphql.buildkite.com)) to run jobs, such as those using [agent-stack-k8s](https://buildkite.com/docs/agent/v3/agent-stack-k8s) versions prior to 0.28, which use GraphQL. Customers interacting with organisation and pipeline settings would also have been impacted by this latency.

*Image: REST and GraphQL API latencies (p95) during the incident*

*Image: API error rates (with p90 and p95 latencies) during the incident*

## Incident summary

When we first became aware of the AWS incident, we paused all deploys to our production environment to prevent scale-in and preserve our existing capacity. We [preemptively opened a Statuspage incident](https://www.buildkitestatus.com/incidents/h3ksq8zzyzc2) to keep customers informed, and escalated the on-call team. Many of our third-party communication and coordination tools experienced significant immediate impact, which hindered our on-call team until alternatives could be shared.

The Buildkite customer experience remained stable between 07:11 – 11:52 UTC, and our on-call team stood down as AWS reported signs of recovery.

As more customers came online and began using Buildkite services, the EC2 launch failures in AWS prevented our autoscaling from increasing capacity to support standard workloads. Because of our sharded architecture, different shards were scaled to meet different amounts of traffic at this time.

From 13:15 UTC, Buildkite customers started to experience latency and increasing error rates to varying degrees as traffic grew. Our on-call team was alerted and began investigating. They opened a [new status page incident](https://www.buildkitestatus.com/incidents/3bjtdp9tll09) to update our customers on the degraded performance.

From 17:00 UTC, all our available compute was in use and no further scaling was possible. As a result, latency and error rates increased across all services and shards, particularly notifications and web traffic.

Between 17:10 and 17:52 UTC, we were able to shift some load from under-provisioned shards to an unused shard that we had scaled up for a load-testing experiment. Because the length of the outage overlapped with peak Buildkite load, this mitigation was only viable for the first few hours, before all shards reached maximum capacity.

Starting at 17:54 UTC, AWS allowed some compute to be provisioned, at a severely limited rate. During this time we prioritized increasing capacity for the shards experiencing the worst impact.

By 20:36 UTC, all shards had returned to normal latency and error rates, and from 21:00 UTC, AWS stopped rate limiting our attempts to scale services.

All temporary load rebalancing was reverted by 01:32 UTC on 21st October 2025.

## Changes we're making

We are already investigating various options for improving resilience against region-wide events, and for ensuring we have backup communication channels during major incidents like this one.
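For customers whose automation depends on our public APIs, transient error spikes like the ones seen during this incident can often be absorbed by client-side retries with exponential backoff and jitter. A minimal sketch in Python (the helper name and defaults are illustrative, not part of any official Buildkite client):

```python
import random
import time


def with_backoff(request_fn, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Call request_fn, retrying transient failures with exponential backoff.

    request_fn is any zero-argument callable that raises on a retryable
    error (for example, an HTTP 5xx response from an API call).
    """
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Full jitter: sleep a random amount up to the capped exponential
            # delay, so many clients don't retry in synchronized waves.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

Wrapping individual API calls this way trades a little extra latency for far fewer hard failures during a degraded period, and the jitter helps avoid retry storms that add load to an already saturated service.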