
What We Actually Assess in Technology Due Diligence (From 100+ Transactions)
There is a version of technology due diligence that involves downloading a checklist from the internet, spending two days with a company's engineers, and producing a report that confirms the technology is either "broadly adequate" or "requires investment." That version is not particularly useful.
After conducting tech due diligence across more than a hundred transactions (PE buyouts, secondary acquisitions, VC rounds from seed through Series C, and pre-sale readiness assessments), we have a clear view of what a genuinely useful assessment looks like and where the standard approach consistently falls short.
This is not a checklist; for a starting-point framework, see our technology due diligence checklist. What follows is a window into how we think about these assessments and the patterns we see. The full methodology (the diagnostic sequences, the calibration data from hundreds of engagements, the experience of CTOs who have led organisations like the ones they now assess) is what makes the difference between a useful report and an expensive one.
What Most DD Assessments Get Wrong
Before describing what we assess, it is worth being direct about the common failure modes. These are patterns we see regularly in reports commissioned from firms that treat DD as a documentation exercise rather than an expert assessment.
Assessing technology in isolation from the investment thesis. An architecture that cannot scale to one million users is a material finding in a consumer business projecting rapid growth. It is irrelevant in a B2B SaaS business with fifty enterprise clients and no plans to change that model. Generic DD reports flag the finding without connecting it to whether it actually matters for this deal.
Treating all technical debt as equivalent. Technical debt exists in every technology organisation. The question is whether it is cosmetic (the code is ugly but functional), operational (it slows the team but does not limit the product), or structural (it prevents the business from scaling to its plan). Only structural debt requires price adjustment. Conflating the three categories produces reports that are alarming but not actionable.
Ignoring the team's velocity. The state of a codebase at the point of assessment tells you where the organisation is. The trajectory over the past twelve months tells you where it is going. A team that has shipped significant improvements, reduced incident rates, and increased deployment frequency over the past year is a different proposition from a team where every metric is flat or declining. Static assessments miss the most important signal.
Underweighting people. Technology is built by humans. A clean codebase maintained by a weak team will deteriorate. A messy codebase owned by an exceptional team will improve. The quality of the technology leadership (the CTO's depth, independence, and commercial acuity) is frequently the most material factor in our assessment, and it is consistently underweighted by assessors who spend most of their time in the code.
“The state of the codebase tells you where the organisation is. The trajectory over the past twelve months tells you where it is going. Static assessments miss the most important signal.”
How We Actually Work
A Rational Partners technology due diligence engagement runs over four to six weeks for a PE buyout, two to three weeks for a growth-stage transaction. The structure is deliberate: we move from broad orientation to deep investigation to synthesis, and at every stage we are testing hypotheses against evidence rather than collecting information for its own sake.
Week one: orientation and hypothesis formation. We review available documentation: architecture diagrams, system inventories, historical incident reports, team structure, hiring data. We identify the areas of highest likely risk given the business model and investment thesis. We form working hypotheses about where the material findings will be. These hypotheses are usually right. Occasionally they are wrong in instructive ways.
Weeks two to four: deep investigation. We conduct structured interviews with the CTO, senior engineers, the product leader, and - where relevant - the infrastructure team. We review the codebase directly: not line-by-line, but at the architectural level, looking at how the system is structured, how dependencies are managed, how change is propagated through the system. We examine the development process in practice, not just in description: deployment frequency, test coverage, incident history, sprint delivery data. We run our own security checks.
Final week: synthesis and report. We draft findings, validate them with the team where appropriate, and produce a report that connects every finding to its commercial implication. The report is structured around the 5P Framework: Product, Platform, Process, Protection, People. Findings are rated by severity, and a remediation roadmap distinguishes what needs to happen before completion from what can be addressed post-completion.
What the Data Shows
Across more than a hundred assessments, certain patterns appear with enough frequency to be instructive.
The Most Common Findings
Security gaps are near-universal at sub-Series B. The majority of companies we assess below Series B have never commissioned a penetration test. Access controls are frequently inadequate; development teams with production database access are the norm rather than the exception at this stage. Secrets management is commonly informal: credentials in code repositories, environment variables in shared documents, no formal vault solution.
This is a finding we see so consistently that its presence is not, in itself, alarming. What we are assessing is the severity (how exposed is the business?) and the team's attitude (do they understand the risk and have a credible plan to address it?). A team that acknowledges the gap, can explain why it exists, and has a prioritised remediation plan is in a fundamentally different position from a team that is unaware of it or dismissive about it.
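To make the secrets-management gap concrete: the informal pattern (a credential committed to the repository) contrasts with even the simplest environment-based approach, which keeps the secret out of version control and fails loudly when it is missing. A minimal sketch using only the standard library; the function and variable names are hypothetical, and a real remediation would sit behind a proper vault solution:

```python
import os

def get_secret(name: str) -> str:
    """Fetch a credential from the runtime environment rather than source code.

    Failing loudly when the secret is absent is deliberate: a silent default
    is how hardcoded fallback credentials creep back into the codebase.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(
            f"Secret {name!r} is not set; configure it in the deployment "
            "environment or vault, never in the repository"
        )
    return value
```

The point of the sketch is the direction of travel, not the mechanism: the same interface works whether the backing store is an environment variable today or a managed vault later.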
Key person risk is the most underestimated commercial risk. We find critical knowledge concentration in almost every assessment: a CTO who is the single point of failure for the architecture, a senior engineer who built the payment system and is the only person who understands how it works, a DevOps engineer whose departure would leave nobody able to manage production infrastructure.
What makes this underestimated is that the financial cost is concrete and calculable. A key person departure during a hold period can delay the value creation plan by twelve to eighteen months. We have seen deals where the departure of a single engineer whose work was undocumented effectively reset the engineering roadmap - not because the knowledge was irreplaceable, but because it was not written down and there was nobody to ask.
Technical debt is almost always worse than management believes. In the vast majority of assessments, our evaluation of technical debt severity is higher than management's self-assessment. This is not because management is dishonest; it is because people closest to a system develop a tolerance for its problems that outsiders do not share. The patterns of complexity and the accumulation of shortcuts become normalised. Operators who have seen hundreds of codebases provide a calibration that internal teams cannot. For the full structured checklist of what a thorough assessment covers, see our technology due diligence checklist.
Process maturity is highly correlated with team longevity and leadership quality. Companies with a CTO who has been in post for two or more years and has a background in engineering management almost always have better process discipline than those with a technical founder who has never managed an engineering team. This is predictable, but it remains a reliable signal.
The Least Common Findings
Genuinely unfixable problems are rare. In our experience, only around ten to fifteen per cent of the issues surfaced in a tech DD are genuinely unfixable within twelve months. The rest are a matter of prioritisation, investment, and competent execution. The framing of "technology is broken" or "technology is fine" misses the more useful question: what would it cost and how long would it take to get this to where it needs to be, and does that cost and timeline fit the investment thesis?
Catastrophic security failures are less common than they should be. Given the prevalence of basic security gaps, serious breaches at the companies we assess are rarer than the underlying risk profile would predict. This is partly luck, partly the fact that mid-market UK businesses are not the primary target for sophisticated attackers. It does not make the risk less real; it makes it more insidious, because the absence of a past incident is frequently used to justify inaction.
“In the vast majority of assessments, our evaluation of technical debt severity is higher than management's self-assessment. People closest to a system develop a tolerance for its problems that outsiders do not share.”
What We Actually Look at in the Code
We are not software auditors. We do not review every file. What we are looking for is structural patterns: evidence of how decisions are made and whether those decisions are consistent with a technology organisation that understands what it is building.
Module boundaries and coupling. How the system is divided into components and how those components communicate tells us more about maintainability than code quality metrics do. Tight coupling, where changing one component requires changes across many others, is the root cause of the "everything takes longer than expected" pattern that surfaces in management's own accounts of their technology.
Dependency management. Whether dependencies are current, whether there is a process for managing them, and whether the team knows which of their third-party dependencies have known vulnerabilities. Outdated dependencies are the single most common entry point in security incidents that affect businesses at this stage.
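The first step of any dependency check is simply knowing what is installed. A minimal sketch of that inventory step in Python, using only the standard library; in practice this output would be fed to a vulnerability database rather than read by hand:

```python
from importlib.metadata import distributions

def dependency_inventory() -> dict:
    """Snapshot the installed third-party distributions and their versions.

    This is the raw input to a vulnerability check: each name/version pair
    can be matched against a CVE feed or an audit tool's advisory database.
    """
    inventory = {}
    for dist in distributions():
        name = dist.metadata["Name"]
        if name:  # skip rare metadata-less stubs
            inventory[name] = dist.version
    return inventory
```

A team that cannot produce this list on demand, for every deployed service, does not have a dependency management process, whatever the documentation says.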
Test coverage and quality. Not just the percentage, but what is tested and what the tests actually verify. We look at whether the most critical business logic (the payment processing, the core product algorithm, the data transformation pipeline) has meaningful test coverage, and whether those tests would catch the classes of errors that have actually caused incidents in this business.
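The distinction between coverage percentage and coverage quality is easiest to see in an example. A hypothetical sketch (the VAT function and its rounding rule are illustrative, not drawn from any assessed company): the test below targets the exact class of error, float rounding in money calculations, that causes real billing incidents, which a test that merely executes the line would not.

```python
from decimal import Decimal, ROUND_HALF_UP

def apply_vat(net: Decimal, rate: Decimal = Decimal("0.20")) -> Decimal:
    """Illustrative critical business logic: VAT with explicit rounding."""
    return (net * (1 + rate)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def test_apply_vat_rounds_half_up():
    # Verifies the rounding behaviour itself, not merely that the code runs.
    assert apply_vat(Decimal("10.05")) == Decimal("12.06")
    assert apply_vat(Decimal("100.00")) == Decimal("120.00")
```

One such test on the payment path is worth more than ten per cent of coverage on logging utilities.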
Deployment and release management. The deployment scripts, the CI/CD configuration, the release process. How long does a deployment take? What can go wrong, and what happens when it does? Is there a rollback mechanism, and has it been tested?
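The rollback question can be made concrete with a small sketch. This is not any particular company's release process; all names are hypothetical, and the point is the shape: activation, a health check, and an automatic, exercised path back to the previous release.

```python
def deploy(release, activate, healthy, rollback) -> bool:
    """Sketch of a release step with a rollback path that actually runs.

    activate(release) switches traffic to the new release and returns a
    handle to the previous one; healthy() is the post-deploy check;
    rollback(previous) restores the prior release on failure.
    """
    previous = activate(release)
    if healthy():
        return True
    rollback(previous)  # exercised on every failed deploy, so it stays tested
    return False
```

A rollback mechanism that only exists in a runbook, and has never fired, is the one that fails at 2 a.m.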
Documentation. Not documentation for its own sake, but whether the decisions that matter are recorded somewhere. Architecture decision records, runbooks for critical operational procedures, onboarding documentation. The quality of documentation is a direct proxy for the knowledge distribution risk.
How We Think About What We Find
The question is never simply "is this good or bad?" That framing produces reports that are alarming but not actionable.
The more useful questions are: is this finding material to the investment thesis? Is it normal for a company at this stage, or is it genuinely unusual? How much would it cost to address, and how long would it take? Does the team have the capability to fix it, or would remediation require external support?
Our actual assessment methodology is more granular than what follows, but a simplified version of the approach might look like this. We evaluate each area through two lenses:
Lens 1: What did we find? A simplified scale might be: Good (fit for purpose), Suboptimal (room for improvement, no immediate risk), or Problematic (poses serious risk, action required). In practice, our assessments are more nuanced than three levels, but the principle is the same: findings need to be rated in a way that connects directly to commercial decisions.
Lens 2: How mature is this for the stage? A simplified scale might be: Above Expectations, On Par, or Below Expectations, always calibrated to the company's stage, sector, and growth trajectory. A seed-stage company with no penetration testing might be On Par. A Series C company with the same gap is Below Expectations. Applying the same absolute standard to both is not rigorous; it is lazy.
The combination matters. A finding might be Problematic (it poses real risk) but On Par for the stage (most companies at this size have the same gap). That context changes the commercial response entirely. Our assessments go considerably deeper than this simplified framing, but the principle of evaluating both what exists and what should exist at this stage runs through everything we do.
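The two-lens combination described above can be sketched as a small data structure. This is a deliberately simplified toy (the scales and the combination rule are the simplified public versions, not our actual methodology), but it shows why the same finding can drive very different deal responses:

```python
from enum import Enum

class Finding(Enum):
    """Lens 1: what did we find?"""
    GOOD = "fit for purpose"
    SUBOPTIMAL = "room for improvement, no immediate risk"
    PROBLEMATIC = "poses serious risk, action required"

class StageMaturity(Enum):
    """Lens 2: how mature is this for the company's stage?"""
    ABOVE = "above expectations"
    ON_PAR = "on par"
    BELOW = "below expectations"

def commercial_priority(finding: Finding, maturity: StageMaturity) -> str:
    """Toy combination rule: only Problematic findings drive the deal
    response, and stage context sharpens or softens that response."""
    if finding is Finding.PROBLEMATIC and maturity is StageMaturity.BELOW:
        return "pre-completion condition"
    if finding is Finding.PROBLEMATIC:
        return "post-completion remediation"
    return "monitor"
```

A Problematic finding that is On Par for the stage lands in the remediation plan; the same finding Below Expectations becomes a completion condition.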
What Separates Useful DD from Expensive Theatre
The difference between a genuinely useful technology due diligence report and an expensive box-ticking exercise comes down to three things.
Commercial connection. Every finding should be connected to a commercial implication. A finding that cannot be connected to deal price, completion conditions, post-completion remediation cost, or investment thesis risk is not a finding; it is noise. Reports that list technical observations without commercial connection are not useful to buyers, sellers, or investors.
Prioritisation. A report that treats all findings with equal weight is not prioritised; it is comprehensive, which is a different and less useful thing. The ten findings that affect the deal are more important than the forty that do not. An experienced assessor knows the difference.
Actionability. A report that describes problems without proposing solutions requires the buyer to commission additional work to understand what to do about what they have just paid to discover. The remediation roadmap (what needs to happen, in what order, in what timeframe, at what cost) is not an optional supplement to the assessment. It is the point of the assessment.
Related Reading
- Technology Due Diligence Checklist - the structured 5P Framework assessment guide
- What a Technology DD Report Should Look Like - anatomy of a credible DD deliverable
- You Failed Tech DD. Now What? - how to respond to difficult due diligence findings
- Technology Audit - independent technology assessment for transactions and pre-sale readiness
- For Portfolio Companies - post-acquisition technology leadership
Need an assessment you can rely on?
We conduct technology due diligence for PE firms, VC investors, and management teams preparing for transaction scrutiny. Delivered by operators who have done it across 100+ transactions.