
Technology Due Diligence Checklist: What a Proper Assessment Actually Covers
Search for "technology due diligence checklist" and you will find hundreds of templates. Most of them are wrong — not because the questions are inaccurate, but because they treat technology due diligence as a form-filling exercise rather than an expert assessment. The value is not in the list. The value is in knowing what the answers mean.
What follows is a starting point — the kind of ground-level questions you should be asking across the key dimensions of any technology organisation. It is structured around our 5P Framework — Product, Platform, Process, Protection, People.
This is deliberately a lightweight overview. A proper technology due diligence, conducted by deeply experienced CTOs who have led technology organisations themselves, goes significantly further — probing not just what a team says but what the evidence shows, pattern-matching against hundreds of prior assessments, and connecting findings to the specific commercial context of the transaction. A checklist can tell you what to look at. It cannot tell you what the answers mean.
Before the Checklist: Scope and Context
The right checklist depends on the transaction. A seed-stage assessment and a PE buyout of a £50M revenue business require fundamentally different depth. Getting this wrong in either direction is expensive — too shallow and you miss material risks; too deep and you destroy deal momentum without proportionate benefit.
Seed / Pre-Series A: Two to three days. Focus on architecture choices and whether the technical founders can actually build what they are describing. Product and Platform matter most; Process and Protection are stage-appropriate gaps.
Series A / Series B: Two to three weeks. The product is real and in use. People, Process, and Platform now matter as much as the technology itself. You are assessing whether the team and systems can scale.
PE buyout: Four to six weeks. Comprehensive across all Five Pillars. The acquisition is based on a value creation plan — technology needs to be assessed against the specific growth assumptions in that plan, not in the abstract.
Compressed / auction processes: Two to three weeks maximum. Prioritise: security, key person risk, architecture scalability, and any findings that would require deal price adjustment. Everything else can be a condition of completion.
People
People is always the first pillar and frequently the most important one. Technology is not built by systems — it is built by engineers, led by CTOs, and shaped by the culture that leadership creates. A brilliant architecture in the hands of a dysfunctional team deteriorates. A modest codebase run by an exceptional team improves.
CTO Quality and Independence
The CTO is the single most important person in any technology assessment. We are looking for genuine technical depth combined with the commercial acuity to translate technology decisions into business outcomes. The warning signs are a CTO who cannot explain their architecture choices in plain language, who has not kept pace with developments in their own domain, or who defers to the CEO on every technology decision.
Independence matters particularly in founder-led businesses. A CTO who cannot push back on the CEO — who agrees with every idea and never raises technical concerns — is not a CTO. They are a technical executor, and that is a different role with a different risk profile.
What you could ask: "Tell me about a significant technical decision you pushed back on and how that conversation went." A CTO with genuine independence has a story. A CTO without it will struggle to find one.
Key Person Risk
The most common finding in technology due diligence is excessive dependency on one or two individuals. The CTO who is the single point of failure for the entire architecture. The senior engineer who built the payment system and is the only person who understands how it works. The DevOps engineer whose departure would leave nobody able to manage the production infrastructure.
Key person risk is not inherently a dealbreaker, but it must be quantified, understood, and addressed in the post-completion plan. We score it across three dimensions: knowledge concentration (how much does this person know that nobody else does?), mitigation (what documentation, pairing, or succession planning exists?), and flight risk (what is the probability they leave, and what would trigger it?).
What you could ask: "If your CTO was unavailable for six months tomorrow, what would happen?" The quality of the answer tells you more than the answer itself.
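The three scoring dimensions above can be expressed as a simple model. This is an illustrative sketch only: the 1-to-5 scale, the equal weighting, and the function name are assumptions made for the example, not the actual scoring methodology.

```python
def key_person_risk(knowledge_concentration: int,
                    mitigation: int,
                    flight_risk: int) -> float:
    """Combine the three dimensions into one score.

    Each dimension is scored 1 (low) to 5 (high). Mitigation is entered
    as strength (1 = none, 5 = strong) and inverted, since strong
    mitigation reduces overall risk. Scale and weighting are illustrative.
    """
    for score in (knowledge_concentration, mitigation, flight_risk):
        if not 1 <= score <= 5:
            raise ValueError("scores must be between 1 and 5")
    mitigation_risk = 6 - mitigation  # invert: strong mitigation = low risk
    return (knowledge_concentration + mitigation_risk + flight_risk) / 3

# A CTO holding unique knowledge (5), with some documentation (2)
# and moderate flight risk (3):
print(key_person_risk(5, 2, 3))  # 4.0 — high enough to flag in the plan
```

The useful property of even a crude model like this is that it forces the three questions to be answered separately, rather than collapsed into a single gut feeling.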
Team Structure, Seniority, and Hiring Pipeline
We assess whether the team structure matches the company's stage and ambitions. A ten-person engineering team where nine are junior developers and one is the CTO is a structure that cannot scale — there is insufficient senior capacity to mentor, review, and guide the work. A team of all seniors with no mid-level engineers is expensive and difficult to hire into.
The hiring pipeline matters as much as the current team. How long does it take to hire an engineer? What is the offer acceptance rate? Has the company ever failed to fill a critical role? The answers reveal how attractive the company is as an employer and whether the growth plan's hiring assumptions are credible.
What you could ask: "Walk me through your last five engineering hires — how long did it take, where did you find them, and which ones have exceeded expectations?"
Retention and Attrition
Annual engineering attrition above fifteen per cent warrants investigation. Above twenty-five per cent, something is seriously wrong. We always ask to see attrition data by seniority and tenure — a company where junior engineers leave after eighteen months is different from one where senior engineers are leaving, and both are different from one where nobody has left in three years (which can indicate a team that has become too comfortable).
Exit interview data, where it exists, is among the most useful information in any technology assessment. We regularly find that the reasons given in exit interviews — compensation, leadership quality, technical frustration — directly predict the findings in other parts of the assessment.
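Both the headline attrition rate and the segmented view are simple to compute from headcount data. The figures below are invented for illustration.

```python
def annual_attrition_pct(leavers: int, avg_headcount: float) -> float:
    """Annualised attrition as a percentage of average headcount."""
    if avg_headcount <= 0:
        raise ValueError("average headcount must be positive")
    return round(100 * leavers / avg_headcount, 1)

# Whole-team view: 7 leavers against an average of 40 engineers.
print(annual_attrition_pct(7, 40))  # 17.5 — above the 15% threshold

# Segmented view: the same exercise split by seniority band tells a
# sharper story than the headline number (counts are illustrative).
by_band = {
    "junior": annual_attrition_pct(5, 25),   # 20.0 — investigate
    "senior": annual_attrition_pct(2, 15),   # 13.3 — within tolerance
}
```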
Process
Process is about how the team builds software, not just what they build. A team with a strong process can improve weak technology. A team without process will degrade strong technology over time. Process findings are also among the most fixable — a well-run remediation programme can transform delivery discipline within eight to twelve weeks.
CI/CD and Deployment Frequency
The frequency with which a team deploys to production is one of the most reliable proxies for engineering maturity. Teams that deploy multiple times per day have invested in automation, testing, and confidence in their systems. Teams that deploy monthly are managing risk through infrequency — avoiding deployment to avoid failure — which is the wrong way around.
We look for: automated build pipelines, staging environments that mirror production, automated test suites that run on every commit, and deployment processes that require no manual intervention. The absence of any of these is a finding; the absence of all of them in a growth-stage company is a critical finding.
What you could ask: "How many times did you deploy to production last week, and what does a deployment involve from a human effort perspective?"
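Deployment frequency is easy to measure rather than take on trust, since deploy logs or release tags carry timestamps. A minimal sketch, assuming you can extract a list of production deploy times:

```python
from datetime import datetime, timedelta

def deploys_per_week(deploy_times: list[datetime]) -> float:
    """Average production deployments per week over the observed window."""
    if len(deploy_times) < 2:
        return float(len(deploy_times))
    span = max(deploy_times) - min(deploy_times)
    weeks = max(span / timedelta(weeks=1), 1.0)  # avoid division blow-up
    return len(deploy_times) / weeks

# Illustrative: daily deploys observed over a four-week window.
times = [datetime(2024, 1, 1) + timedelta(days=i) for i in range(29)]
print(deploys_per_week(times))  # 7.25 — roughly daily
```

Measuring it from the actual release history also surfaces gaps the team may not volunteer, such as a deploy freeze that lasted two months.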
Automated Testing Coverage
No automated testing is a critical red flag. Manual testing does not scale — it is slow, inconsistent, and dependent on people rather than systems. We look for a test pyramid: a foundation of fast, numerous unit tests; a middle layer of integration tests that validate component interactions; and a small number of end-to-end tests that verify critical user journeys.
We are also sceptical of test coverage percentages in isolation. A codebase with eighty per cent test coverage where all the tests are trivial assertions is not meaningfully better protected than one with forty per cent coverage of the critical paths. We look at what is tested, not just how much.
What you could ask: "What happens when an engineer pushes broken code — how quickly is it caught and what prevents it from reaching production?"
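The "what is tested, not just how much" point can be made concrete. A hypothetical example (the pricing function, VAT rate, and assertions are invented for illustration):

```python
def order_total(unit_price: float, quantity: int, vat_rate: float = 0.20) -> float:
    """Price a line item including VAT — a critical path worth testing."""
    if quantity < 0:
        raise ValueError("quantity cannot be negative")
    return round(unit_price * quantity * (1 + vat_rate), 2)

# Meaningful tests: they exercise the business rule and an edge case.
assert order_total(10.00, 3) == 36.00
assert order_total(0.0, 100) == 0.0

# A trivial assertion that inflates a coverage percentage without
# protecting anything — the kind of test that makes 80% coverage hollow.
assert order_total(1, 1) is not None
```

All three assertions count identically toward a coverage number; only the first two would catch a broken VAT calculation.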
Incident Response and Post-Mortems
How a team handles incidents reveals its maturity more than almost anything else. A mature team has documented incident response procedures, defined severity levels, clear escalation paths, and a post-mortem culture that treats failures as learning opportunities rather than occasions for blame.
We look for evidence that post-mortems actually happen, not just that they are intended to happen. The test is simple: ask for the last three post-mortem documents. A team with a real post-mortem culture produces them without difficulty. A team without one will tell you that post-mortems are "on the roadmap" or "something we do informally."
Sprint Delivery Discipline
Consistently delivering eighty to ninety per cent of sprint commitments indicates good planning discipline and realistic estimation. Teams that consistently deliver less than sixty per cent are either over-promising, under-estimating, or being disrupted by unplanned work — all of which indicate process problems that compound over time.
We also look at the ratio of planned to unplanned work. High-performing teams spend most of their capacity on planned work. Teams where a significant proportion of sprint capacity is consumed by urgent fixes and reactive work are in a reactive cycle that suppresses their ability to make strategic progress.
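The two sprint signals above, commitment hit rate and the share of completed work that was unplanned, can be computed from sprint data. A minimal sketch with illustrative numbers:

```python
def sprint_health(committed: int, delivered: int, unplanned: int) -> dict:
    """Two signals from one sprint, in story points:
    the percentage of committed work delivered, and the percentage of
    all completed work that was unplanned (urgent fixes, reactive work)."""
    if committed <= 0 or delivered + unplanned <= 0:
        raise ValueError("counts must be positive")
    return {
        "delivery_rate_pct": round(100 * delivered / committed, 1),
        "unplanned_share_pct": round(100 * unplanned / (delivered + unplanned), 1),
    }

# 20 points committed, 11 delivered, plus 8 points of urgent fixes:
print(sprint_health(20, 11, 8))
# delivery_rate_pct 55.0, unplanned_share_pct 42.1 — the reactive cycle
```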
Product
The Product pillar assesses whether the technology is fit for purpose — whether it delivers what the business needs today and whether it can evolve to deliver what the business needs tomorrow. This is where we find the largest divergence between what management believes about their technology and what the evidence shows.
Architecture Scalability
We are answering a straightforward question: will this architecture support the growth assumptions in the investment thesis? If the business plan projects ten times revenue growth over five years, does the technology have any chance of scaling to support that?
This is not about fashionable architectural patterns. A well-designed modular monolith outperforms a poorly implemented microservices architecture every time. Microservices are earned through scale, not assumed from the start — a seed-stage company with a microservices architecture is almost always experiencing the complexity costs without the benefits. We look for architecture that matches the team's size and capabilities, with a clear path to evolution as the business grows.
What you could ask: "Walk me through what happens when you need to add a significant new capability — how long does it take and what has to change?"
Technical Debt Quantification
Every technology organisation has technical debt. The question is not whether it exists but whether it is understood, managed, and proportionate. We are concerned when a team cannot quantify its technical debt — when "we have debt" is the extent of the analysis. We are more concerned when debt is concentrated in the most critical parts of the system, creating risk that is invisible until something breaks.
We distinguish between three types of debt: cosmetic (the code is ugly but functional — low priority), operational (it slows the team but does not limit the product — medium priority), and structural (it prevents the business from scaling — high priority). Only structural debt is a dealbreaker, but all three need to be understood.
What you could ask: "What percentage of engineering time is allocated to technical debt reduction, and what specific improvements have been completed in the past quarter?" The answer reveals both the severity of the debt and the team's approach to managing it.
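The three-way triage maps onto two questions: does the debt slow the team, and does it block scaling? A minimal sketch, with the type names taken from the text and the decision logic an illustrative simplification:

```python
from enum import Enum

class DebtType(Enum):
    COSMETIC = "cosmetic"        # ugly but functional — low priority
    OPERATIONAL = "operational"  # slows the team — medium priority
    STRUCTURAL = "structural"    # prevents scaling — high priority

def triage(slows_team: bool, blocks_scaling: bool) -> DebtType:
    """Classify a debt item. Structural dominates: anything that blocks
    scaling is structural even if it also slows the team day to day."""
    if blocks_scaling:
        return DebtType.STRUCTURAL
    if slows_team:
        return DebtType.OPERATIONAL
    return DebtType.COSMETIC

# A brittle monolithic billing module that caps transaction volume:
print(triage(slows_team=True, blocks_scaling=True).value)  # structural
```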
Roadmap Credibility
A technology roadmap that is disconnected from realistic engineering capacity is not a plan — it is a wish list. We assess whether the roadmap reflects a genuine understanding of what the team can deliver, whether priorities are driven by data rather than opinion, and whether the roadmap has a track record of being delivered.
We also look for roadmap governance: who decides what goes on the roadmap, how conflicts between technical and commercial priorities are resolved, and whether technical improvement work has protected time. Teams that respond to every commercial request by adding it to the roadmap without removing anything else are not managing their roadmap — they are collecting feature requests.
Analytics and Instrumentation
A product that cannot measure its own performance is flying blind. We look for meaningful instrumentation: not just uptime monitoring but user behaviour analytics, feature adoption metrics, error rates by component, and performance data that the team actually uses to make decisions.
The absence of analytics is a common finding in companies that have grown quickly from an early-stage mindset. The team has been focused on building features; the infrastructure to understand how those features perform has not kept pace. This is fixable but requires deliberate investment.
Protection
Protection covers security, compliance, and resilience — the areas where a failure is not just a technology problem but an existential business risk. Security findings are among the most important in any technology assessment, and they are frequently the most underestimated by management teams who have not experienced a serious incident.
Penetration Testing
Annual penetration testing is the minimum expectation for any company handling customer data. A company that has never commissioned a penetration test has an unknown security posture — and unknowns in security are always bad news. We assess not just whether testing has been done but whether the findings were properly addressed and whether the testing scope was appropriate for the business's risk profile.
External penetration testing is necessary but not sufficient. We also look for internal vulnerability scanning, dependency auditing (are the libraries the codebase depends on up to date and free of known vulnerabilities?), and SAST/DAST tooling in the CI/CD pipeline.
Access Controls and Secrets Management
The principle of least privilege applies universally: engineers should have access to the systems they need for their specific role and no more. Every developer having access to the production database is a security risk that grows with team size. We look for role-based access controls, multi-factor authentication across all systems, and a formal approach to managing secrets — API keys, database credentials, and environment variables stored in a vault rather than in code repositories or shared spreadsheets.
The most common security finding we see is not sophisticated — it is secrets in code repositories. Credentials committed to version control, even accidentally and subsequently removed, are a material risk. We always check.
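A crude version of that check is a pattern scan over file contents. The patterns below are illustrative only; real scanners such as gitleaks or truffleHog use far richer rule sets plus entropy analysis, and scan the full git history rather than just the working tree:

```python
import re

# Illustrative secret patterns — not a production rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(text: str) -> list[str]:
    """Return any lines that match a known secret pattern."""
    return [line for line in text.splitlines()
            if any(p.search(line) for p in SECRET_PATTERNS)]

sample = 'db_password = "hunter2hunter2"\nregion = "eu-west-2"'
print(find_secrets(sample))  # the credential line is flagged; region is not
```

The point of running even a crude scan during an assessment is coverage: it takes minutes and the hit rate on real engagements is high enough to justify it every time.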
GDPR and Data Handling
We assess data handling practices against GDPR requirements: what personal data is collected, where it is stored, how long it is retained, who has access to it, and how subject access requests are handled. For businesses operating in regulated sectors, we extend this to sector-specific compliance requirements.
A data protection impact assessment is evidence of mature thinking about privacy; its absence in a data-intensive business is concerning. We also look for data classification policies — whether the organisation understands which of its data is sensitive and has applied appropriate controls accordingly.
Disaster Recovery
A disaster recovery plan that has never been tested is barely better than no plan at all. We look for documented recovery procedures with defined recovery time objectives and recovery point objectives — and evidence that those procedures have been tested, not just written. The questions are simple: if your primary systems went offline right now, how long would it take to restore service? When did you last test that assumption?
The companies that recover quickly from serious incidents are the ones that have practised. The ones that lose days or weeks discover in the middle of a crisis that their backups did not complete, their runbooks are out of date, and nobody is certain who has the authority to make decisions.
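The recovery point test can be expressed as a single comparison. A minimal sketch, assuming you can obtain the timestamp of the last verified backup:

```python
from datetime import datetime, timedelta

def rpo_breached(last_backup: datetime, now: datetime,
                 rpo: timedelta) -> bool:
    """True if the data-loss window already exceeds the stated RPO —
    i.e. a failure right now would lose more data than the recovery
    point objective allows."""
    return (now - last_backup) > rpo

# Stated RPO of 4 hours, but the most recent verified backup is 9 hours old:
print(rpo_breached(datetime(2025, 1, 1, 0, 0),
                   datetime(2025, 1, 1, 9, 0),
                   timedelta(hours=4)))  # True — plan and reality disagree
```

Running this comparison continuously, and alerting on it, is the difference between an RPO that is a documented intention and one that is an operational fact.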
Platform
Platform covers the infrastructure and operational foundation on which the technology runs. Platform findings often have the clearest financial implications — infrastructure resilience failures translate directly into downtime costs, and cloud cost inefficiency is a drain on margin that grows quietly until it becomes impossible to ignore.
Infrastructure Resilience
We look for multi-availability-zone deployment for any production system where downtime has a material business impact. Single-AZ deployment is a single point of failure. For most growth-stage businesses, the cost of multi-AZ deployment is trivially small relative to the cost of an extended outage.
Beyond redundancy, we assess observability: does the team know when something is going wrong before a customer tells them? Mature monitoring includes infrastructure health, application performance, and business-level metrics — not just "is the server up?" but "is the checkout flow completing at the expected rate?"
What you could ask: "How many minutes of unplanned downtime have you had in the past twelve months, and how did you know it was happening?"
Cloud Cost Management
Unmanaged cloud environments accumulate waste quickly. Unused instances, oversized databases, redundant services that nobody has decommissioned, and development environments left running at weekends are common findings that together represent significant and unnecessary spend.
We regularly identify twenty to forty per cent waste in cloud environments that have never been actively optimised. For a company spending £50,000 per month on infrastructure, that represents potential savings of £10,000 to £20,000 per month with no impact on performance — savings that typically fund their own optimisation within a few months.
We also assess the relationship between cloud spend and revenue. Healthy businesses see cloud costs grow slower than revenue as they scale. Businesses where cloud costs are growing faster than revenue are running an architecture that does not benefit from scale — a problem that compounds.
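The cost-to-revenue trend is simple arithmetic. A sketch with invented quarterly figures:

```python
def cost_revenue_ratios(costs: list[float], revenue: list[float]) -> list[float]:
    """Cloud cost as a share of revenue per period. A rising series means
    the infrastructure is scaling worse than the business."""
    if len(costs) != len(revenue) or any(r <= 0 for r in revenue):
        raise ValueError("series must align and revenue must be positive")
    return [round(c / r, 3) for c, r in zip(costs, revenue)]

# Quarterly figures in £k (illustrative): costs growing faster than revenue.
print(cost_revenue_ratios([50, 60, 75], [500, 550, 600]))
# [0.1, 0.109, 0.125] — the ratio is climbing, so the problem compounds
```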
Vendor Dependencies and Concentration Risk
Single-vendor dependency creates risk. A business whose entire production operation runs on a single cloud provider with no multi-cloud or on-premise fallback has a concentration risk. A business whose core product functionality depends on a single third-party API that could be deprecated or repriced has a commercial risk embedded in its architecture.
We assess the criticality of each vendor dependency and the feasibility of switching if necessary. For the most critical dependencies, we look for contractual protections, SLA guarantees, and — where appropriate — architectural designs that reduce lock-in.
Monitoring and Observability
Monitoring tells you when something is broken. Observability tells you why. The distinction matters: a team with monitoring but not observability knows when the site is down but has to guess at the cause. A team with genuine observability — distributed tracing, structured logging, business-level alerting — can diagnose and resolve incidents in minutes rather than hours.
Infrastructure monitoring (is the server running?) is table stakes. Application performance monitoring (is the application responding within acceptable latency?) is standard. Business-level observability (is the conversion rate normal?) is the hallmark of a mature engineering operation.
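A business-level alert of the checkout kind can be sketched in a few lines. The baseline rate, tolerance, and metric names here are illustrative assumptions, not a prescribed alerting design:

```python
def checkout_alert(completions: int, starts: int,
                   expected_rate: float, tolerance: float = 0.2) -> bool:
    """Fire when checkout completion drops materially below its baseline —
    a business-level signal that server-up checks alone would miss."""
    if starts == 0:
        return True  # no checkout traffic at all is itself an anomaly
    observed = completions / starts
    return observed < expected_rate * (1 - tolerance)

# Baseline 60% completion; this window shows 40 of 100 checkouts completing:
print(checkout_alert(40, 100, 0.60))  # True — the alert fires
```

An alert like this catches the failure mode where every server is healthy but a broken payment integration is quietly losing revenue.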
How to Use This Checklist
The questions above are a starting point, not a complete methodology. The value in technology due diligence is not in collecting answers — it is in interpreting them in context.
A startup with no automated testing is at a different risk level from a Series C business with the same gap. An architecture that cannot support the existing ten thousand users is a different finding from an architecture that cannot support the projected one million. A CTO who joined six months ago and is actively improving things is a different risk from one who has been in post for five years and has stopped learning.
For buyers: Use this framework to structure your assessment, and apply weight to findings based on their relevance to your investment thesis. A finding that does not affect the value creation plan is less material than one that does. A simplified approach: for each answer, ask two questions — "how serious is this?" and "is this normal for a company at this stage?" The combination tells you far more than either question alone.
For sellers: Use this framework as a self-assessment before going to market. Identifying and addressing material findings in advance — rather than defending them under scrutiny — is the most effective way to protect your valuation and maintain deal momentum. Pre-sale technology audits commissioned twelve to eighteen months before a transaction consistently produce better outcomes than reactive remediation. For more on what experienced assessors actually look for beyond the checklist, see What We Actually Assess in Technology Due Diligence.
For management teams: This is an honest picture of the ground that experienced assessors cover. If you cannot answer these questions confidently, that is worth knowing before an investor asks them. Bear in mind that a professional assessment goes considerably deeper — experienced CTOs bring pattern recognition from hundreds of engagements that no checklist can replicate.
Related Reading
- Technology Audit — independent technology assessment for PE and VC transactions
- You Failed Tech DD. Now What? — how to respond to difficult due diligence findings
- Is Your Tech Team Any Good? — ten signs that reveal whether your engineering team is performing
- Fractional CTO for PE/VC Portfolio Companies — post-acquisition technology leadership
Need a technology due diligence assessment?
We assess technology for PE firms, VC investors, and management teams preparing for transaction scrutiny. Structured around the 5P Framework, delivered by operators who have done it across 100+ transactions.