Rational Partners

A Snails Life... by Andy Perry is licensed under CC BY-ND 2.0.

Why Your Tech Team Isn't Delivering (From Someone Who Fixes Them)

Roja Buck

"We need to hire more developers."

We hear this in roughly half the initial conversations we have with CEOs who believe their technology team is too slow. It is almost never the right answer. Adding more software engineers does not create the ability to handle more complexity. The challenges we normally see are that the architecture is wrong for the company's current scale, the processes are inadequate, or the team structure does not match the problems being solved. The answer is not throwing fifteen engineers at it; it is working out what the actual problem is and how to solve it.

Years of engagements across every sector have given us a clear picture of the five root causes that actually explain why technology teams deliver slowly. They are consistent, regardless of company size, industry, or technology stack. If your engineering team is not delivering at the pace your business requires, the explanation is almost certainly one — or more — of these.


1. Architecture Debt

The problem: Every new feature is harder to build than the last one because the foundation is deteriorating.

Technical debt compounds like financial debt. Miss one year of maintenance, and the next year costs three times as much. A company that has been building features without investing in the underlying architecture eventually reaches a point where the cost of every change becomes disproportionate to its apparent size. A feature that should take two weeks takes two months, because the developer must work around accumulated shortcuts, undocumented dependencies, and architectural decisions that made sense three years ago but are now actively harmful.

The symptoms are distinctive. Development estimates keep growing. Simple changes have unexpected side effects. The team spends more time debugging than building. And the phrase "we need to refactor" appears in every sprint planning session but never gets prioritised because there is always something more urgent.

The diagnostic question: "What percentage of engineering time is spent on technical debt reduction, and has that percentage changed over the past year?"

A healthy engineering organisation allocates ten to twenty per cent of its capacity to continuous technical improvement — not as an annual cleanup sprint, but as an ongoing discipline integrated into normal delivery. If the answer is "we have a tech debt backlog but we never get to it," the debt is compounding and will eventually become the dominant constraint on delivery speed.
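One way to make this measurable is to tag completed work and compute the share of effort that went to debt reduction each quarter. The sketch below assumes a hypothetical export of tickets with `quarter`, `points`, and `labels` fields; the field names and the `tech-debt` label are illustrative, not from any particular tracker.

```python
from collections import defaultdict

def debt_allocation(tickets):
    """Share of completed engineering effort spent on technical-debt
    work, per quarter. Field names are illustrative assumptions."""
    effort = defaultdict(lambda: {"debt": 0, "total": 0})
    for t in tickets:
        q = effort[t["quarter"]]
        q["total"] += t["points"]
        if "tech-debt" in t["labels"]:
            q["debt"] += t["points"]
    return {q: round(v["debt"] / v["total"], 2)
            for q, v in effort.items() if v["total"]}

tickets = [
    {"quarter": "2024-Q1", "points": 8, "labels": {"feature"}},
    {"quarter": "2024-Q1", "points": 2, "labels": {"tech-debt"}},
    {"quarter": "2024-Q2", "points": 9, "labels": {"feature"}},
    {"quarter": "2024-Q2", "points": 1, "labels": {"tech-debt"}},
]
print(debt_allocation(tickets))  # {'2024-Q1': 0.2, '2024-Q2': 0.1}
```

A flat or falling figure well below the ten to twenty per cent band is the quantitative version of "we have a tech debt backlog but we never get to it."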

What the fix looks like: The first step is an honest assessment of the architecture's fitness for the company's current and anticipated scale. Not everything needs fixing — the skill is in identifying the specific architectural limitations that are causing the most pain and addressing those first. A monolithic application that has served the company well for five years might need decomposing — but into a modular monolith with clear boundaries, not into a premature microservices architecture that creates more problems than it solves. The fix typically takes three to six months for the highest-priority issues, with ongoing improvement as a permanent practice thereafter.

2. Process Chaos

The problem: Nobody is quite sure what is being built, why, or when it will be done.

If you cannot explain what your engineering team is building this month and why, your process is broken. The symptoms vary — requirements change mid-sprint, there is no roadmap clarity, the definition of "done" is different for every developer, or priorities are set by whoever shouts loudest rather than by any structured framework. The result is the same: the team is busy but not productive. They are writing code, but the code does not reliably translate into business outcomes.

The most common variant of this problem is what we call "priority whiplash" — the CEO or product owner changes direction so frequently that the engineering team never completes anything before the next urgent pivot arrives. Each pivot wastes partially completed work and forces context switching, which research consistently identifies as one of the most significant destroyers of engineering productivity.

The diagnostic question: "Can you show me the current sprint backlog and explain how each item connects to a business objective?"

A well-functioning team can answer this in minutes, with clear traceability from business strategy to quarterly objectives to sprint items. A struggling team will show you a list of tickets with no obvious connection to anything strategic, or they will reveal that the sprint backlog was completely rewritten two days ago because priorities changed.

What the fix looks like: Establishing pragmatic agile practices — not dogmatic Scrum, but a structured cadence with documented definitions of "Ready" and "Done," consistent sprint planning, and a clear roadmap that uses a Now/Next/Later framework to provide clarity without false precision. The process should be light enough that it does not feel bureaucratic but robust enough that the team can predict what they will deliver and actually deliver it. This is typically implementable within four to six weeks.
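The traceability test from the diagnostic question can be automated. This is a minimal sketch, assuming a Now/Next/Later roadmap keyed by objective and sprint items that carry an `objective` field; the objective names and item IDs are invented for illustration.

```python
# Minimal traceability check: every sprint item should point at a
# roadmap objective, and every objective sits in a Now/Next/Later
# horizon. All names and structures here are illustrative assumptions.
ROADMAP = {
    "reduce-churn": "now",
    "self-serve-onboarding": "next",
    "eu-expansion": "later",
}

def untraceable(sprint_items):
    """Return IDs of items whose objective is missing or not on the roadmap."""
    return [item["id"] for item in sprint_items
            if item.get("objective") not in ROADMAP]

sprint = [
    {"id": "ENG-101", "objective": "reduce-churn"},
    {"id": "ENG-102", "objective": None},            # no link at all
    {"id": "ENG-103", "objective": "ceo-pet-idea"},  # not on the roadmap
]
print(untraceable(sprint))  # ['ENG-102', 'ENG-103']
```

A well-functioning team produces an empty list; a struggling team flags half the sprint.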

3. Knowledge Fragility

The problem: Critical knowledge lives in one person's head.

Close to half of the engineering teams we review have no redundancy for their most critical systems. The critical system might be the billing engine, the authentication layer, or the data pipeline that feeds the company's core reports. In each case, one person built it, one person maintains it, and one person understands how it actually works.

This creates two problems. The obvious one is risk: if that person leaves, falls ill, or is simply on holiday when the system fails, the company is exposed. The less obvious one is speed: that person becomes a bottleneck. Every change to their system requires their review, their approval, and often their direct involvement. They are a single point of failure not just in the disaster sense, but in the delivery sense, every day.

The diagnostic question: "For each of our critical systems, how many people can make a meaningful change without assistance?"

If the answer for any critical system is "one," you have a key-person dependency that should be treated as a business risk, not just a technology concern. Single-person dependency impacts company valuation — investors identify and price this risk during due diligence.

What the fix looks like: Knowledge distribution is not about writing documentation (though that helps). It is about deliberate pairing, rotation, and shared ownership. The person who owns the critical system should be pair programming with a colleague on every significant change. Code reviews should be mandatory, with at least one reviewer who is learning the system. Within three to six months, the bus factor (the number of people who would have to leave before the system becomes unmaintainable) should be at least two for every critical system. This is one of the simplest fixes in this list, but it requires discipline and a willingness to accept a short-term slowdown for long-term resilience.
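Commit history gives a rough, objective read on the bus factor. The sketch below counts, per system, how many people have made enough commits to plausibly change it unaided; in practice you would feed it author names parsed from `git log --format='%an' -- <path>` per system. The five-commit threshold for a "meaningful" contributor is an assumption to tune, and the sample data is invented.

```python
from collections import Counter, defaultdict

def bus_factor(commits, threshold=5):
    """Count contributors with at least `threshold` commits per system.

    `commits` is a list of (system, author) pairs, e.g. parsed from
    git log output. The threshold is an assumption, not a standard.
    """
    per_system = defaultdict(Counter)
    for system, author in commits:
        per_system[system][author] += 1
    return {system: sum(1 for n in authors.values() if n >= threshold)
            for system, authors in per_system.items()}

commits = ([("billing", "alice")] * 40 + [("billing", "bob")] * 2 +
           [("auth", "carol")] * 20 + [("auth", "dan")] * 15)
print(bus_factor(commits))  # {'billing': 1, 'auth': 2}
```

Any system scoring one is a key-person dependency; the goal of pairing and rotation is to move every score to two or more.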

4. Tooling and Infrastructure Problems

The problem: The engineering team is fighting their tools instead of building the product.

Manual deployment is the most reliable indicator of broader process immaturity. If deploying a new version of the software requires a person to follow a checklist of manual steps — logging into servers, running scripts, updating configuration files — then every deployment is a risk, every deployment is slow, and every deployment requires the attention of someone who should be building features.

But deployment is just the most visible example. Slow build times (developers waiting twenty minutes for code to compile), flaky tests (tests that sometimes pass and sometimes fail for no apparent reason), inconsistent development environments (it works on my machine), and inadequate monitoring (nobody knows the system is down until a customer complains) all compound to create an environment where the friction of building and shipping software is unnecessarily high.

The diagnostic question: "How long does it take from a developer merging their code to that code being available to customers?"

Best practice in modern software engineering is continuous deployment — code merged in the morning is available to customers in the afternoon, ideally within minutes. If the answer is "we deploy every two weeks" or "deployment takes a full day," the tooling and infrastructure are creating a bottleneck that directly limits delivery speed. Manual deployment that requires downtime is unacceptable for any modern SaaS business.
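The diagnostic question maps directly onto the DORA "lead time for changes" metric: elapsed time from merge to the code being live. Here is a minimal sketch that computes the median lead time in hours from merge and deploy timestamps; the record format is a hypothetical export, not any specific CI system's API.

```python
from datetime import datetime
from statistics import median

def median_lead_time_hours(changes):
    """Median hours from merge to production (DORA lead time for
    changes). Timestamps are ISO 8601 strings; format is illustrative."""
    deltas = [
        (datetime.fromisoformat(c["deployed"]) -
         datetime.fromisoformat(c["merged"])).total_seconds() / 3600
        for c in changes
    ]
    return round(median(deltas), 1)

changes = [
    {"merged": "2024-05-01T09:00", "deployed": "2024-05-01T09:30"},
    {"merged": "2024-05-01T11:00", "deployed": "2024-05-01T13:00"},
    {"merged": "2024-05-02T10:00", "deployed": "2024-05-08T10:00"},  # batched release
]
print(median_lead_time_hours(changes))  # 2.0
```

Using the median rather than the mean keeps one batched fortnightly release from masking an otherwise fast pipeline; a median measured in days rather than hours is the signal that tooling is the constraint.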

What the fix looks like: Investing in the deployment pipeline. Automated testing that runs on every code change. Automated deployment that requires no human intervention. Infrastructure-as-code so that environments are reproducible and consistent. Monitoring that provides real-time visibility into system health. This is foundational work — not glamorous, not visible to the business — but the productivity gains are transformative. Teams that move from manual fortnightly deployment to continuous deployment typically see a thirty to fifty per cent improvement in delivery velocity within three months.

5. Wrong Team Composition

The problem: The team does not have the right skills for the problems it is trying to solve.

Companies default to "we need more developers" when the real issue is structural. You might have eight backend developers and no frontend expertise, which means every user-facing feature requires external contractors. You might have a team of generalists when the architecture requires specialists in distributed systems, data engineering, or security. You might have no dedicated product management, which means engineers are making product decisions based on incomplete understanding of user needs.

The composition problem is often masked by headcount. The CEO sees twelve engineers and assumes that is sufficient. But twelve engineers without a product manager, a DevOps engineer, and a technical lead is a very different proposition from eight engineers with all three of those roles filled. The smaller team will almost always deliver more, faster, and with higher quality.

The diagnostic question: "Do we have the right mix of skills for what we are trying to build in the next twelve months?"

This is a question that requires honest self-assessment from the technical leadership, informed by the actual roadmap rather than the current workload. If the answer is "we need to hire a data engineer and we have been saying that for six months," the composition gap is already costing you.

What the fix looks like: A structured assessment of the skills needed versus the skills available, mapped against the technology roadmap. This often reveals that the team needs restructuring more than expansion — reorganising into cross-functional squads with clear ownership areas, each containing the mix of skills needed to deliver independently. The formation of dedicated product teams, with product managers working alongside engineers rather than throwing requirements over a wall, is frequently the single highest-impact change. Restructuring typically takes two to three months, with productivity improvements visible within the first month.

The Pattern Behind the Patterns

These five root causes rarely exist in isolation. Architecture debt creates process chaos because the complexity overwhelms simple frameworks. Process chaos creates knowledge fragility because undocumented workarounds become critical knowledge. Knowledge fragility slows delivery because everything depends on the same overloaded individuals. The slowness prompts calls to "hire more developers," which compounds the composition problem without addressing any of the underlying causes.

Fixing this requires starting from a clear diagnosis and addressing the root causes in the right order.


The Fix Sequence

Address root causes in this order for the fastest results:

  1. Process first — the fastest to change and provides immediate relief. Implementable within four to six weeks.
  2. Composition gaps — add the missing roles the team needs to function. Takes one to two months including hiring.
  3. Tooling investment — the productivity multiplier. Teams moving to continuous deployment see thirty to fifty per cent velocity gains within three months.
  4. Knowledge distribution — reduce fragility by ensuring every critical system has at least two people who understand it. Three to six months.
  5. Architecture debt — the longest and most expensive fix, but also the most impactful. Six to twelve months.

Fixing this usually takes six to twelve weeks for the diagnosis and first improvements. Full transformation takes six to twelve months. The investment required is not primarily financial — it is leadership attention, honest assessment, and the discipline to address root causes rather than symptoms.


References

  1. Frederick P. Brooks Jr. The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley (1995).
  2. Nicole Forsgren, Jez Humble, Gene Kim. Accelerate: The Science of Lean Software and DevOps. IT Revolution Press (2018).
  3. DORA / Google Cloud. Accelerate State of DevOps Report 2024. Google Cloud (2024).
  4. Gloria Mark, Daniela Gudith, Ulrich Klocke. The Cost of Interrupted Work: More Speed and Stress. Proceedings of CHI '08, ACM (2008).
  5. McKinsey & Company. Yes, You Can Measure Software Developer Productivity. McKinsey Digital (2023).
  6. Stripe. The Developer Coefficient. Stripe (2018).

Is your tech team not delivering?

Let's diagnose what's actually going wrong and build a plan to fix it. Most problems are solvable faster than you think.