
Jenkins is free, but costs more than you think


"It's very hard to argue with free." This take from a senior engineer explains why Jenkins pipelines are still such a common component of so many software-delivery stacks. 

For small teams with relatively simple needs, Jenkins can be just fine, of course. But as codebases grow and teams expand, what began as a budget-conscious choice can often become a source of mounting frustration.

Jenkins has earned its place in software history for good reasons. Its flexibility and plugin ecosystem can be genuinely powerful for some teams. But these same advantages can also make it surprisingly difficult to maintain Jenkins at scale. It's not uncommon, for example, for teams using Jenkins heavily to call out issues like:

  • Increasing downtime, slowing or blocking their ability to ship consistently
  • A growing maintenance burden to keep Jenkins controllers and plugins updated and running smoothly
  • Distraction of top talent as engineers get pulled away from product work to deal with maintaining and scaling the build system
  • Ever-increasing infrastructure costs to meet the needs of more Jenkins-related compute

Each of these hidden (and not-so-hidden) issues compounds over time, especially as teams and organizations scale. Taken together, as you’ll see in this post, they make it much harder to argue that Jenkins is free — and nowhere is this clearer than when unstable builds block your ability to ship code.

Hidden cost 1: Unstable builds that drag down velocity

Your developers are waiting — again. The Jenkins controller just crashed during a crucial deployment. PRs pile up while the team scrambles to restart the controller. It's a common scene in engineering organizations worldwide. According to DORA research, 81% of respondents report that recovering from failed deployments often takes an hour or more — and that's when the underlying CI/CD system is running properly.

As one of the older CI/CD tools, Jenkins is frequently the culprit in situations like these. Many developers share stories of Jenkins environments that crash under load, need frequent restarts, or fail unexpectedly, shattering their focus and crippling their velocity. Teams grow frustrated, projects fall behind, and the business suffers on both fronts.

We have a large team managing and maintaining Jenkins. It is the most fragile and finicky thing I have ever encountered. If it's down for minutes it's a hair-on-fire panic.

Reddit user

A Jenkins crash rarely impacts just one developer; it might block a dozen or more simultaneously. In the meantime, code reviews stall. If there’s a dedicated QA team, they're left waiting for testing environments to be updated with the latest changes. Product managers might even postpone demos if new features haven't yet been tested or deployed. The cascade of issues often ends with delayed delivery of software to customers.

The time your team loses to issues like these adds up fast. The earlier DORA research shows that one in five respondents deploy multiple times a day, with many (over 40%) deploying at least weekly. If a build normally takes 30 minutes and fails a third of the time, then at just ten builds a day that's three or more failed builds — at least 90 minutes of failed build time alone. Add to that the time it takes to respond to each failure (even if only to click the re-run button), and it's easy to see how hundreds or even thousands of engineering hours can be lost annually across an organization simply to build-system flakiness.
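A quick back-of-the-envelope sketch shows how those figures compound over a year. The numbers below are the same assumptions as above, plus a nominal five minutes of response time per failure; swap in your own to see your team's picture:

```python
# Back-of-the-envelope estimate of time lost to flaky builds.
# All figures are assumptions, not measurements; adjust to match your pipelines.
BUILDS_PER_DAY = 10
BUILD_MINUTES = 30
FAILURE_RATE = 1 / 3        # one in three builds fails
RESPONSE_MINUTES = 5        # triage, or just clicking "re-run", per failure
WORKDAYS_PER_YEAR = 250

failed_builds_per_day = BUILDS_PER_DAY * FAILURE_RATE
lost_minutes_per_day = failed_builds_per_day * (BUILD_MINUTES + RESPONSE_MINUTES)
lost_hours_per_year = lost_minutes_per_day * WORKDAYS_PER_YEAR / 60

print(f"Failed builds per day: {failed_builds_per_day:.1f}")
print(f"Minutes lost per day:  {lost_minutes_per_day:.0f}")
print(f"Hours lost per year:   {lost_hours_per_year:.0f}")
```

Even with these modest assumptions, the total lands at nearly 500 hours a year, and that's for a single pipeline.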

Not every Jenkins pipeline crashes regularly, of course; Jenkins wouldn’t maintain its popularity if that were the case. Many companies use Jenkins successfully across teams — but those that do also tend to have:

  • Less complex pipelines and delivery requirements
  • Less active codebases that use very few Jenkins plugins
  • Support from a dedicated infrastructure team
  • Many — sometimes dozens — of Jenkins controllers deployed across the organization

As you grow, stability issues like these begin to hit harder and cost a lot more — and that's not counting the work that's required to keep the average Jenkins controller running and up to date. That maintenance burden is the hidden cost we’ll tackle next.

Hidden cost 2: Time lost to server and plugin maintenance

When you use Jenkins, your team runs much more than just Jenkins. You install the Git plugin for version control functionality, then add the Mailer plugin to support build notifications. Another plugin for this, another for that.

Not only do you need to keep the Jenkins controllers and agents themselves running, you also need to attend to each plugin's configuration and security updates. Small administrative burdens like these accumulate into much more significant time sinks that frequently go unnoticed.

Jenkins's plugin architecture is a source of its appeal — "there's a plugin for everything" — but as installations grow, it becomes a well-known source of frustration. There are more than 2,000 plugins for Jenkins, each one its own unique open-source project. This works, but consumers of plugins frequently face issues — version clashes, abandoned projects, and more — and when they go to update one plugin to fix a critical bug, they often trigger a cascade of other failures that can consume hours of troubleshooting with each maintenance cycle.

Jenkins is massively flexible but it quickly becomes incredibly brittle if you have more than a few trivial plugins. I can't tell you the number of hours our teams have wasted trying to do a 'simple' upgrade.

Reddit user

In that sense, Jenkins isn't just one open-source project — it's thousands of them. How well you're able to use Jenkins often rests in the hands of the mini-communities that collectively maintain all of the Jenkins plugins you rely on. Many of the most popular plugins are kept up to date, which is good — but many are also abandoned, and others are kept in a legacy state. Forks, as a result, sometimes become necessary.

All of this means security issues abound, often without clear paths to remediation. Jenkins updates can require specific Java versions, but a given controller's plugins may not yet support newer Java versions, or even certain versions of Jenkins itself. Teams must make hard choices: risk updates that could topple the entire pipeline, or keep known vulnerabilities in place — sometimes indefinitely.

Upgrade paths for Jenkins and all of the plugins is awful. Most teams just end up permanently ignoring them for fear they will break, accumulating stacks of unpatched [security issues].

Reddit user

And of course, with each new Jenkins controller comes a whole new set of plugins (sometimes 50 or more) to manage along with it. Automation can certainly help, but even then it takes effort to keep everything in sync. With every plugin upgrade or workaround, someone on the team must ensure that the broader set of Jenkins clusters gets updated accordingly, so everything keeps running in lockstep.
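If you do lean on automation, even a basic audit of plugin drift illustrates the glue work involved. Here's a minimal sketch that reports plugins installed at different versions across a fleet; the controller URLs and credentials are placeholders, and it assumes API-token access to each controller's plugin-manager endpoint:

```python
# Minimal sketch: report plugin-version drift across several Jenkins controllers.
# URLs and credentials are placeholders; assumes API-token auth and network access.
from collections import defaultdict

import requests

CONTROLLERS = {
    "build-1": "https://jenkins-build-1.example.com",
    "build-2": "https://jenkins-build-2.example.com",
}
AUTH = ("ci-admin", "api-token")  # placeholder credentials

versions = defaultdict(dict)  # plugin name -> {controller: version}
for name, url in CONTROLLERS.items():
    resp = requests.get(f"{url}/pluginManager/api/json?depth=1", auth=AUTH, timeout=30)
    resp.raise_for_status()
    for plugin in resp.json()["plugins"]:
        versions[plugin["shortName"]][name] = plugin["version"]

# Flag any plugin that's missing somewhere or installed at mismatched versions.
for plugin, installed in sorted(versions.items()):
    if len(installed) < len(CONTROLLERS) or len(set(installed.values())) > 1:
        print(f"{plugin}: {installed}")
```

And a report like this only surfaces the drift; deciding which version is safe to roll out everywhere, and actually doing it, is still someone's job.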

Again, less complex Jenkins installations may have fewer of these issues, and at the lower end of the scale, the workarounds can be manageable. Rather than accept the upgrade tradeoffs, teams may choose to:

  • Dramatically limit plugin usage, choosing instead to replicate the functionality themselves
  • Fork and refactor (and then maintain) abandoned plugins 
  • Write wrappers or shims to address compatibility issues

But whether you implement the workarounds or continue to roll the dice, either choice requires ongoing effort from your team. Many teams report spending days each month on basic upkeep. As you grow, this time cost grows along with you; hours that could go toward developing new features for your customers end up lost to plugin-related problems.

All of this upkeep also tends to fall onto the plates of your most senior engineers, who are the focus of the next hidden cost.

Hidden cost 3: Top engineers get stuck with build problems

One of your key contributors is exhausted. The build is blocked again. Someone needs to clear the path to keep the team moving forward, which means their own work has once again effectively stopped. Velocity has slowed to a crawl, and your lead engineer seems to be drifting toward burnout.

Jenkins takes specialized skills, and its flexibility means that every installation can be a bit of a unique snowflake. The result is that teams often lose their most capable contributors to the Jenkins trap, with some spending most (or even all) of their time dealing with build issues. 

And while that cost is certainly real, an even bigger cost is all the work that didn't get done by those key contributors while they were tending to a build system that everyone would prefer "just worked".

Our senior manager refuses to consider moving out of Jenkins. Instead, he is ready to allocate 3-4 engineers dedicated to working on it.

Reddit user

Larger organizations may choose to create centralized Jenkins support teams to consolidate ownership of issues like these and keep things running smoothly.

But there are tradeoffs to this approach as well. Engineering teams used to running their own installations may lose some flexibility as Jenkins ops are consolidated and streamlined. At a minimum, your Jenkins knowledge becomes siloed. What was meant to remove technical blockers can create human bottlenecks when that dedicated team is unavailable. Then there’s the risk of a complete system breakdown when the resident "Jenkins whisperer" leaves the organization.

Managing Jenkins is important work that few engineers actively seek. But one way or another, it always ends up falling to someone.

I wouldn't recommend learning Jenkins unless you specifically want to spend 40 hours a week using Jenkins. Once you start working on Jenkins it ends up becoming your full-time job.

Reddit user

Diverting your best technical minds to maintenance tasks represents a massive opportunity cost. Attention that could go to the company’s hardest problems, to delivering value to customers, or to leveling up other engineers instead goes to managing finicky build tools and trying to keep the delivery pipeline from collapsing. It happens — but it's rarely the goal most engineering teams set out to achieve.

With all this engineering time and effort going toward keeping Jenkins up and running, at least the software itself is free — right? 

Hidden cost 4: Running Jenkins at scale takes lots of infrastructure

Your finance department sends a message with questions about the cloud infrastructure bill. Like many organizations, you run a lot of tools in the cloud, and much of it starts to blend together. In addition to production and test servers, you have experiments, forgotten projects, etc., and mixed in there somewhere is your critical Jenkins delivery infrastructure.

In pursuit of reliable builds, Jenkins teams need servers that can handle the load. Once they’ve figured out how to keep Jenkins and all of its plugins up to date, their next priority is avoiding the crashes that can take down the pipeline. Unlike the unplanned maintenance time and team-oriented distractions that are harder to quantify, infrastructure is a regular and clear cost you can count on.

For the people who say Jenkins is free, look at your hosting costs, maintenance costs, and innovation blocking costs.

Reddit user

In a typical setup, each Jenkins controller gets its own virtual machine for dispatching jobs to agent nodes. This architecture helps distribute the load, but a controller typically tops out at around 100 concurrent jobs and can easily crash when load climbs past that threshold — for example, when an active team is committing to a common codebase like a monorepo.

That means for redundancy and failover, one controller typically becomes two or three. These sibling controllers sit idle, waiting to step in when the active Jenkins controller inevitably crashes. Redundancy like this (which is considered a best practice) comes with more than just the cost of the idle servers alone. It also requires appropriately shared and redundant storage, networking, and data transfer — and of course, the cost of the human expertise required to provision and maintain it all.

On top of this, an increasingly common practice is to deploy a dedicated Jenkins cluster per repository. That means 2-3 controllers (and agents, networking, storage, etc.) for every project. Given most teams manage more than just one project, in an organization of 100 engineers, for example, you might have 20 or more repositories under active development — which in turn would call for between 40 and 60 Jenkins controller instances.

Even for a fairly small engineering organization, the cost to run Jenkins reliably could climb well into the tens of thousands of dollars per month — and that's just for the controller infrastructure. Consider the following table, which assumes a typical installation like the one described above and again leaves out the much greater cost of the engineers required to provision and manage it:

Cost estimates to run Jenkins on AWS

Purpose          AWS Instance                      Quantity   AWS Pricing   Monthly Cost
Controllers      m5.xlarge                         60         $0.192/hr     $8,400
Storage          gp3 (1 TB per controller)         60         $0.08/GB      $4,800
Data transfer    Regional (5 TB per controller)    300 TB     $0.01/GB      $3,000
Total                                                                       $16,200
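For a sense of how those line items combine, here's a rough sketch of the same estimate in code. The unit prices come from the table; the 20 repositories, three controllers per repository, and 730 hours per month are the assumptions discussed above (the table rounds to the nearest hundred dollars):

```python
# Rough monthly cost model for redundant Jenkins controllers on AWS.
# Unit prices from the table above; everything else is an assumption to tweak.
REPOS = 20                    # actively developed repositories
CONTROLLERS_PER_REPO = 3      # primary plus failover siblings
HOURS_PER_MONTH = 730

M5_XLARGE_PER_HOUR = 0.192    # USD per controller instance
GP3_PER_GB_MONTH = 0.08       # USD, with 1 TB of storage per controller
TRANSFER_PER_GB = 0.01        # USD, with ~5 TB of regional transfer per controller

controllers = REPOS * CONTROLLERS_PER_REPO
compute = controllers * M5_XLARGE_PER_HOUR * HOURS_PER_MONTH
storage = controllers * 1_000 * GP3_PER_GB_MONTH
transfer = controllers * 5_000 * TRANSFER_PER_GB

print(f"Controllers: {controllers}")
print(f"Compute:  ${compute:,.0f}/month")
print(f"Storage:  ${storage:,.0f}/month")
print(f"Transfer: ${transfer:,.0f}/month")
print(f"Total:    ${compute + storage + transfer:,.0f}/month")
```

Double the repository count, or add a fourth controller per project, and the total scales right along with it — and none of this includes agent compute or the people who keep it all running.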

These costs can easily be (and often are) much higher, depending on the size of the team and other variables. Again, not every organization requires this level of redundancy; smaller teams might choose to run a single controller and fewer agents for as long as possible, opting to tolerate failures in exchange for a lower cloud-infrastructure bill. However the bill shakes out, though, it should be clear that running Jenkins at scale is far from "free".

Scaling Jenkins server to do enterprise CI/CD is where it typically goes off the rails.

Reddit user

No Jenkins installation escapes the laws of scale. As your team grows, the cost of provisioning and managing the servers, storage, network resources, and all the rest inevitably grows along with it. It doesn't level off.

And when you add it all up, the bill is usually far more than what you'd pay for a more modern CI/CD platform — platforms that are built with scalability and flexibility in mind. For high-velocity teams, it may make sense to reassess whether continuing to invest in the care and feeding of your Jenkins infrastructure is really the best use of your resources. (Hint: We know from experience — and from our customers who've migrated from Jenkins — that it probably isn't.)

Discover what life looks like after Jenkins

The true cost of "free" Jenkins becomes clear when you add up all of these hidden costs. Flaky builds that steal developer time and crush morale. Plugin maintenance that burns through your engineers' time and forces risky tradeoffs. Top contributors who get pulled away from core product work to babysit their teams' build systems (and often end up leaving). And all the while, infrastructure costs continue to mount, rivaling or exceeding the cost of more flexible, scalable commercial alternatives.

The engineering team at Faire, for example, knows the story all too well. Their Jenkins installation couldn’t keep up with their monster monolith; build times grew, and downtime kept the team from shipping code to support their network of retailers.

With the switch to Buildkite, Faire’s pull request wait times decreased by 50% or more — which allowed engineers to return their focus to product work rather than fighting with their tools. The team at Uber migrated from Jenkins to Buildkite and unlocked capabilities that not only helped them scale, but also enabled them to make some dramatic improvements to the developer experience:

Monorepos at scale: Building CI for 1,000 daily commits at Uber

A webinar featuring Buildkite's Mike Morgan and members of the Uber engineering team, discussing how Uber built a CI system to handle 1,000 daily commits. Recorded June 26, 2024 (40 minutes). Register to watch.

Think beyond what you don’t have to pay with Jenkins. When you add up the real costs, "free" can end up being extraordinarily expensive.

