
“Abstract pattern on a tree stump” by Maksim Sokolov (Maxergon) is licensed under CC BY-SA 4.0.
The Real Cost of Technical Debt: People, AI and the Risk Equation
Every codebase has technical debt. Every single one. The startup that shipped fast to hit product-market fit, the scale-up that bolted features on to close enterprise deals, the portfolio company that outgrew its original architecture. Technical debt is not a sign of failure. It is a side effect of building software under real-world constraints: limited time, limited budget, imperfect information.
The question that matters is not "does this company have technical debt?" It does. The question is whether that debt is understood, managed, and appropriate for the company's stage. Because when it is not, the cost is measurable, material, and almost always larger than anyone expected.
Across more than a hundred technology assessments of PE and VC-backed companies, we have seen technical debt range from a minor footnote to a deal-altering finding. The difference is rarely the quantity of debt. It is whether anyone was paying attention.
What Technical Debt Actually Costs
The headline statistics are striking. Stripe's Developer Coefficient report found that developers spend 42% of their time dealing with technical debt and maintenance. Deloitte's 2026 Global Technology Leadership Study estimates that technical debt accounts for 21% to 40% of an organisation's total IT spending. McKinsey's analysis of 500 engineering teams found that those with high technical debt took 40% longer to ship features compared to low-debt teams.
For an investor, those percentages translate into concrete costs for a portfolio company with a twenty-person engineering team; the sketch after this list works through the arithmetic:
- If a third of engineering time goes to managing debt rather than building value, you are paying for seven engineers to tread water.
- If features take 40% longer to ship, your time-to-market projections in the investment case are wrong.
- If 20-40% of the IT budget is servicing debt rather than driving growth, your value creation plan is leaking before it starts.
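Here is that back-of-envelope arithmetic in Python. The headcount cost and debt-share figures are illustrative assumptions, not benchmarks; substitute the company's actual numbers.

```python
# Back-of-envelope cost of technical debt for a twenty-person team.
# All inputs are illustrative assumptions; substitute real figures.

team_size = 20                # engineers
cost_per_engineer = 120_000   # assumed fully loaded annual cost
debt_time_share = 0.33        # assumed share of time servicing debt

engineers_treading_water = team_size * debt_time_share   # ~6.6, the "seven engineers"
annual_cost_of_debt = engineers_treading_water * cost_per_engineer

print(f"Effective headcount servicing debt: {engineers_treading_water:.1f}")
print(f"Annual spend treading water: ${annual_cost_of_debt:,.0f}")

# Delivery drag: features taking 40% longer stretches a 12-month
# roadmap to roughly 16.8 months, invalidating time-to-market assumptions.
planned_months = 12
print(f"A {planned_months}-month roadmap slips to ~{planned_months * 1.4:.1f} months")
```

On these assumed figures alone, the debt consumes roughly $800,000 a year before any roadmap slippage is counted.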
These are not edge cases. They are the central tendency across hundreds of companies in published research, and they are consistent with what we observe in our own assessments.
Technical Debt Is Not Just Bad Code
Most discussions about technical debt focus on code quality: messy code, missing comments, outdated libraries. That is one dimension. In practice, the debt that causes the most damage often has nothing to do with the quality of individual lines of code.
When we assess a company using the 5P Framework (People, Process, Product, Platform, Protection), technical debt shows up across all five pillars:
Platform debt is the most visible. Architecture choices that made sense at one scale become constraints at another. A monolithic application that served 10,000 users may not serve 100,000. A database that handles current load may not survive a 3x increase. Infrastructure that was set up manually rather than through code becomes fragile, unrepeatable, and dependent on whoever configured it.
Process debt is more insidious. Manual deployment processes that take a full day. No automated testing, so every release carries the risk of breaking something in production. No continuous integration, so developers work in isolation for weeks before discovering their code conflicts with everyone else's. These are not code problems. They are engineering practice problems, and they compound: the worse the process, the riskier every change becomes, so the team changes less, which means debt accumulates faster.
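As a concrete illustration of how small the first countermeasure can be, here is a minimal post-deploy smoke check in Python, using only the standard library; the health-check URL is a hypothetical placeholder for whatever endpoint the service actually exposes.

```python
# A minimal post-deploy smoke check: the cheapest possible automated
# release gate. The URL below is a hypothetical placeholder.

import sys
import urllib.request

HEALTH_URL = "https://example.internal/health"  # hypothetical endpoint

def smoke_check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the service answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    if not smoke_check(HEALTH_URL):
        print("Smoke check failed: blocking the release.")
        sys.exit(1)  # a non-zero exit fails the pipeline step
    print("Smoke check passed.")
```

Wired into a pipeline, the non-zero exit code blocks the release. It is a deliberately small start, not a testing strategy, but it turns "every release carries risk" into "every release passes at least one automated gate".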
People debt is often the most expensive. Key person concentration, where one or two individuals hold all the knowledge of how critical systems work, is a finding in the majority of our assessments. When that person leaves, and eventually they do, the cost is not just recruitment. It is the months of lost velocity while a replacement learns a system with no documentation, no tests, and no one to ask. Key person risk is one of the five technology red flags that can kill a PE deal, and one of the most consistently underestimated.
Protection debt hides until it does not. No penetration testing. Developers with unrestricted production access. No disaster recovery plan. Encryption gaps. These are findings we flag in nearly every assessment, and each one represents a latent liability on the balance sheet that only becomes visible when something goes wrong.
Product debt is the one nobody talks about. Features built for one customer that now serve a thousand. A roadmap driven by whoever shouts loudest rather than by a coherent product strategy. If you have read our recent piece on the product problem disguised as a technology problem, you will recognise the pattern: technical debt accumulates fastest when there is no product discipline to prevent unnecessary work from entering the codebase in the first place.
“Technical debt is not a sign of failure. It is a side effect of building software under real-world constraints. The danger is not the existence of debt. It is the failure to recognise when the context has changed.”
When Acceptable Becomes Dangerous
Here is where the practitioner's perspective diverges from the textbook. Not all technical debt is bad. A Series A company that took shortcuts to ship a product and prove market fit made a rational decision. The debt was deliberate, the trade-off was understood, and the company would not exist without it.
The danger is not the existence of debt. It is the failure to recognise when the context has changed.
What we observe across assessments is a transition point. A company's technical debt shifts from acceptable to dangerous when one or more of the following occurs:
The team outgrows the architecture. What works for five engineers breaks for fifteen. Coordination costs increase, deployment conflicts multiply, and velocity drops in ways that are difficult to attribute because the decline is gradual.
The customer base outgrows the infrastructure. Performance degrades. Incidents become more frequent. The team spends increasing time firefighting rather than building. Scaling problems are architecture problems, and architecture problems are expensive to fix under pressure.
Regulatory exposure increases. A company that could tolerate informal security practices at seed stage cannot afford them when handling sensitive data at scale, or when a buyer's due diligence team is about to examine every aspect of the technology.
An exit or funding round approaches. This is where technical debt becomes financially material. In a technology due diligence assessment, debt that was invisible to the board becomes visible to an external assessor. And assessors, including us, quantify it: the cost to remediate, the time required, and the risk it poses to the investment thesis.
The companies that navigate this well are the ones that acknowledged the debt early and managed it continuously. The ones that struggle are the ones that treated it as a future problem until the future arrived.
The 10-20% Rule
We hold a specific view on this, informed by what we see work and what we see fail.
Engineering teams should allocate 10-20% of their capacity to continuous technical improvement, running alongside feature delivery throughout the year. Not an annual "clean-up sprint." Not a quarterly "tech debt week." A permanent, protected allocation that prevents debt from compounding.
The arithmetic is straightforward. If you spend zero time on technical improvement, your velocity declines each quarter as the codebase becomes harder to work with. Within a year, the accumulated friction typically costs more in lost productivity than the improvement time would have cost. Within two years, you face a choice between a painful remediation programme and a rewrite, both of which are orders of magnitude more expensive than continuous maintenance would have been.
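A toy model makes the compounding visible. The decay rate and allocation below are assumptions chosen to illustrate the shape of the curves, not measured constants.

```python
# Illustrative model: quarterly delivery velocity with and without a
# protected improvement allocation. Rates are assumptions, not data.

QUARTERS = 8
DECAY = 0.06        # assumed velocity lost per quarter to growing friction
ALLOCATION = 0.15   # the 10-20% rule, taken at its midpoint

neglect = 1.0                # all capacity on features, friction compounds
managed = 1.0 - ALLOCATION   # 15% reserved, friction held flat

for q in range(1, QUARTERS + 1):
    neglect *= 1 - DECAY
    print(f"Q{q}: neglect {neglect:.2f} vs managed {managed:.2f}")

# Under these assumptions the neglected team is slower by Q3 and has
# lost nearly 40% of its velocity by the end of year two.
```

The exact crossover point moves with the assumed rates; the shape of the curves does not.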
The companies where we see the healthiest technology functions share this characteristic: they treat technical improvement as a first-class engineering activity, not as something that happens when there is nothing else to do. Their deployment pipelines are maintained. Their dependencies are updated. Their test coverage is sustained. None of this is glamorous work, but it is the compound interest of engineering discipline.
AI Changes the Risk Equation
There is an important development that is reshaping how we think about technical debt in assessments: the maturation of AI-powered development tools.
Twelve months ago, discovering the full extent of technical debt in a complex codebase required weeks of expert review. Today, an experienced engineer with the right AI tooling can map a codebase's architecture, identify dependency risks, surface undocumented patterns, and flag potential vulnerabilities in a fraction of the time. The "dark corners" of a codebase, the modules that nobody on the current team fully understands, are less dark than they were.
This matters in three ways.
Identification is faster and cheaper. AI tools can analyse entire repositories, trace dependencies across services, and surface patterns that would take a human reviewer days to find. The "we did not know it was there" defence is weaker than it has ever been.
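To be concrete about what "trace dependencies" means mechanically, here is a deliberately simplified, non-AI version using only Python's standard library. AI tooling layers explanation, pattern recognition, and risk flagging on top of exactly this kind of traversal.

```python
# Simplified dependency mapping: parse every Python file in a repo,
# record its imports, then rank modules by how many others depend on them.

import ast
from collections import defaultdict
from pathlib import Path

def map_imports(repo_root: str) -> dict[str, set[str]]:
    graph: dict[str, set[str]] = defaultdict(set)
    for path in Path(repo_root).rglob("*.py"):
        module = ".".join(path.relative_to(repo_root).with_suffix("").parts)
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # a real tool would flag unparseable files, not skip them
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                graph[module].update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                graph[module].add(node.module)
    return graph

if __name__ == "__main__":
    fan_in: dict[str, int] = defaultdict(int)
    for deps in map_imports(".").values():
        for dep in deps:
            fan_in[dep] += 1
    # High fan-in modules are where black-box and key person risk concentrate.
    for module, count in sorted(fan_in.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{count:3d} dependants: {module}")
```

Fan-in is a crude proxy, but the modules at the top of that list are usually the ones nobody wants to touch.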
Remediation is more credible. AI-assisted refactoring, automated test generation, and migration tooling mean that fixing technical debt is genuinely faster and less risky than it was three years ago. When a company presents a remediation plan, the timelines are more believable because the tooling exists to accelerate the work. Studies suggest AI assistance can improve productivity on certain refactoring tasks by up to 50%.
The "black box" risk is reduced. One of the most dangerous findings in an assessment is a critical system that nobody on the current team built, nobody fully understands, and nobody can confidently modify. AI tools can read, explain, and begin to generate tests for legacy code that was previously difficult to approach with confidence. This does not eliminate key person risk, but it meaningfully reduces its impact.
None of this makes technical debt something to ignore. The debt still exists. It still costs money. It still slows delivery. But the risk profile is different. The time and cost to remediate are lower. The unknowns are fewer. The confidence in a remediation plan is higher.
This shifts the conversation. Technical debt is still a finding in an assessment. But the question has moved from "how bad is this?" to "what is the remediation plan, and is the team using the tools available to execute it?" A company that has significant debt but a credible, AI-accelerated remediation plan is a very different proposition from one that has the same debt and no plan.
There is also a cautionary note. AI-generated code that is not reviewed, tested, and maintained creates new debt at an accelerated rate. Research suggests that unmanaged AI-generated code can drive maintenance costs to four times traditional levels within two years. The tool is powerful, but it requires engineering discipline to use effectively. The quality of the engineering leadership matters more than the quality of the tools.
What Remediation Actually Looks Like
When we step in as fractional CTOs after conducting an assessment, one of the first things we establish is what remediation will actually involve. The answer is almost never a rewrite. For CTOs going through this process, our guide to surviving PE due diligence explains what assessors focus on and how to engage with the process constructively.
Rewrites are appealing in theory and catastrophic in practice. They take longer than projected, they introduce new bugs while fixing old ones, and they halt feature delivery for months. The companies that successfully address technical debt do it incrementally.
What works:
- Establish the 10-20% allocation and protect it. This is a leadership decision, not an engineering decision.
- Prioritise by business impact, not technical elegance. Fix the debt that affects delivery velocity, customer experience, or security posture. Tolerate the debt that is ugly but stable.
- Automate first. Deployment pipelines, testing, infrastructure as code. These force multipliers make all subsequent work faster and safer.
- Measure and communicate. Track deployment frequency, change failure rate, and time to recover. These metrics make technical health visible to non-technical stakeholders in a language they understand; the sketch after this list shows how little it takes to compute them.
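Those are three of the four DORA metrics, and computing them requires nothing more exotic than the deployment log. A minimal sketch, with hypothetical sample records standing in for data any CI/CD system can export:

```python
# Minimal DORA-style metrics from a deployment log. The records below
# are hypothetical samples; real data comes from the CI/CD system.

from datetime import datetime, timedelta

deployments = [
    {"at": datetime(2025, 1, 6),  "failed": False},
    {"at": datetime(2025, 1, 9),  "failed": True,
     "recovered_after": timedelta(hours=3)},
    {"at": datetime(2025, 1, 14), "failed": False},
    {"at": datetime(2025, 1, 21), "failed": False},
]

window_weeks = (deployments[-1]["at"] - deployments[0]["at"]).days / 7
frequency = len(deployments) / window_weeks

failures = [d for d in deployments if d["failed"]]
failure_rate = len(failures) / len(deployments)
mttr = sum((f["recovered_after"] for f in failures), timedelta()) / len(failures)

print(f"Deployment frequency: {frequency:.1f} per week")
print(f"Change failure rate:  {failure_rate:.0%}")
print(f"Mean time to recover: {mttr}")
```

A board does not need to read the code; it needs the three numbers, tracked quarter on quarter.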
The timeline depends on the severity, but a typical remediation plan for a mid-market portfolio company runs twelve to eighteen months of continuous improvement. Not twelve months of dedicated remediation: twelve months of the 10-20% allocation working systematically through the priority list while normal delivery continues.
FAQ
How do you quantify technical debt in a due diligence assessment?
We assess it across the five dimensions of the 5P Framework: architecture constraints, process maturity, team concentration risks, security posture, and product discipline. Each finding is classified as good, suboptimal, or problematic, with a description of the business impact and estimated remediation effort. The output is a commercial assessment of risk, not a code quality report.
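For illustration only, a finding from such an assessment can be thought of as a small structured record. The fields and wording below are a hypothetical sketch, not our actual report format.

```python
# Hypothetical shape of an assessment finding; illustrative only,
# not the actual report format.

from dataclasses import dataclass
from enum import Enum

class Rating(Enum):
    GOOD = "good"
    SUBOPTIMAL = "suboptimal"
    PROBLEMATIC = "problematic"

@dataclass
class Finding:
    pillar: str              # People, Process, Product, Platform, Protection
    description: str
    rating: Rating
    business_impact: str     # commercial consequence, not code criticism
    remediation_weeks: int   # estimated effort to fix

example = Finding(
    pillar="Process",
    description="No automated testing; releases verified manually",
    rating=Rating.PROBLEMATIC,
    business_impact="Every release risks production regressions and slows delivery",
    remediation_weeks=8,
)
print(f"[{example.rating.value}] {example.pillar}: {example.description}")
```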
Is technical debt always a red flag in due diligence?
No. Stage-appropriate debt is expected and rational. A seed-stage company with a monolithic application and manual deployments has made a reasonable trade-off. The same findings in a company with fifty engineers and enterprise customers would be a material concern. Context determines severity.
Should we insist on a rewrite before or after acquisition?
Almost never. Rewrites are high-risk and rarely deliver on their promise. Incremental improvement, guided by an experienced technology leader with a clear priority list, is more effective and less disruptive. We have never recommended a full rewrite in over a hundred assessments.
How does AI tooling affect the due diligence process itself?
It accelerates discovery. AI tools can map codebases, trace dependencies, and surface patterns faster than manual review alone. This does not replace expert judgement; it cannot assess whether an architectural decision was the right trade-off for the business context. But it means assessments are more thorough and the dark corners are better illuminated.
What is the typical cost of remediating technical debt?
It depends on severity, but the framework is straightforward: if you protect 10-20% of engineering capacity for continuous improvement, the cost is already budgeted. The expensive scenario is the one where years of accumulated debt require a dedicated remediation programme, which can consume 30-50% of engineering capacity for six to twelve months. Prevention is dramatically cheaper than cure.

We assess technology risk and then fix what we find. If you are evaluating a company for acquisition, or your portfolio company's engineering velocity is not where it should be, we will give you a straight answer.