Rational Partners
Image: an army of small toy robots, representing AI at scale. "robot army" by peyri is licensed under CC BY-ND 2.0.

AI Due Diligence: What You Need to Know

Roja Buck

Every investment memorandum now mentions AI. Every management presentation includes slides about machine learning, large language models, or "AI-powered" features. Every growth plan assumes AI will contribute to efficiency gains, new revenue, or competitive differentiation.

The problem is that most technology due diligence processes are not equipped to evaluate these claims. The assessor may be an experienced CTO who can evaluate architecture, team quality, and delivery processes — but AI introduces questions that require specific, current, hands-on expertise to answer well. Is the model genuinely proprietary, or is it a prompt wrapped around a third-party API? Is the data foundation adequate for the AI ambitions in the business plan? Will the AI features survive a vendor price change or an API deprecation?

This guide covers what investors and boards should understand about AI in the context of technology due diligence — and what to look for in an assessor's findings.

Why AI Due Diligence Is Different

Traditional technology due diligence evaluates whether the technology can support the business plan. The assessor checks architecture, security, team capability, process maturity, and infrastructure resilience. These are well-understood disciplines with decades of accumulated pattern recognition behind them.

AI introduces three complications that most DD frameworks were not designed to handle.

First, AI claims are easy to make and difficult to verify. A company can describe its product as "AI-powered" when the AI component is a single API call to a third-party model. Without hands-on experience building production AI systems, an assessor cannot distinguish between genuine AI capability — proprietary models, meaningful training data, production-grade inference pipelines — and a thin integration that any competent engineering team could replicate in a week.

Second, AI value is frequently overstated in investment narratives. Revenue attributed to AI, efficiency gains from automation, competitive moats from proprietary data — these claims appear in almost every management presentation. Evaluating them requires understanding not just whether the technology works, but whether it works at a level that justifies the narrative. A chatbot that resolves 30% of support queries sounds impressive until you learn that the industry baseline for off-the-shelf solutions is already 25%.

Third, AI introduces risks that traditional DD does not cover. Vendor concentration on a single AI provider. Training data that may contain copyrighted or biased content. Regulatory exposure under evolving frameworks like the EU AI Act. Model drift that degrades performance over time without active monitoring. These are not theoretical concerns — they are findings we see in real assessments.
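To make the drift point concrete: even a minimal statistical check on production model scores can catch degradation that demos never reveal. The sketch below is illustrative, not from any specific assessment; the function name and threshold are hypothetical, and real monitoring would compare richer signals than a mean shift.

```python
from statistics import mean, stdev

def drift_alert(baseline_scores, recent_scores, z_threshold=3.0):
    """Flag drift when the mean of recent model scores sits more than
    z_threshold baseline standard deviations from the baseline mean.

    baseline_scores: scores recorded during a known-good period.
    recent_scores:   scores from the current monitoring window.
    """
    mu = mean(baseline_scores)
    sigma = stdev(baseline_scores)  # assumes a non-degenerate baseline
    z = abs(mean(recent_scores) - mu) / sigma
    return z > z_threshold
```

A company with any such check wired into alerting has at least acknowledged that model performance decays; a company with none is usually discovering drift through customer complaints.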

What Good AI Assessment Covers

AI due diligence is not a separate exercise from technology due diligence — it is an integral part of it. AI capability, risk, and maturity show up across every pillar of a thorough assessment. Here is what a competent evaluation should examine.

Is the AI Real?

This is the first and most important question. It sounds blunt, but a surprising number of products described as "AI-powered" contain minimal actual AI. The assessment should establish: what models are in use, whether they are proprietary or third-party, what training data they were built on, how the system performs in production (not just in demos), and whether the AI component is genuinely core to the product or a feature that could be removed without materially affecting the proposition.

The distinction between building with AI and building on AI matters commercially. A company that has invested in proprietary models, training pipelines, and production ML infrastructure has a defensible position. A company that calls an API has a feature, one that every competitor with access to the same API can replicate.

Can the Data Support the Ambitions?

AI is only as good as the data it operates on. The assessment should evaluate: what data the company collects, whether it is structured and accessible, whether data governance practices are adequate, and whether the data volume and quality can support the AI initiatives in the business plan.

Data issues are among the most expensive findings in AI due diligence because they are rarely quick to fix. A company that has spent years accumulating unstructured, poorly governed data cannot build a high-quality AI capability on top of it without significant investment in data infrastructure first. The business plan rarely accounts for this.

What Happens When the Vendor Changes Terms?

Concentration risk on a single AI provider is one of the most common findings in current assessments. A product built entirely on OpenAI's API is exposed to price changes, rate limit changes, terms of service changes, and availability issues — none of which the company controls. The assessment should evaluate vendor diversification strategy, the feasibility of switching providers, and whether the architecture allows for it.

This is not an argument against using third-party AI — it is often the right choice. But the risk should be understood, quantified, and reflected in the investment thesis rather than ignored.
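What "the architecture allows for it" looks like in practice is usually a thin abstraction seam: product code depends on a provider interface rather than on one vendor's SDK, so a price or terms change becomes a configuration decision instead of a rewrite. A minimal sketch, with entirely hypothetical class and vendor names:

```python
from abc import ABC, abstractmethod

class CompletionProvider(ABC):
    """Seam between product code and any single model vendor."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorA(CompletionProvider):
    def complete(self, prompt: str) -> str:
        # In a real system this would call vendor A's API.
        return f"[vendor-a] {prompt}"

class VendorB(CompletionProvider):
    def complete(self, prompt: str) -> str:
        # Stand-in for a second, independently contracted vendor.
        return f"[vendor-b] {prompt}"

def complete_with_fallback(providers, prompt):
    """Try each provider in order; fail only if all of them fail."""
    last_error = None
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as exc:
            last_error = exc
    raise RuntimeError("all providers failed") from last_error
```

An assessor looking at the codebase can check for this seam directly: if vendor-specific calls are scattered through the product, the realistic switching cost is far higher than management presentations tend to assume.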

Is the Team Equipped?

Deep AI capability is still scarce. Most engineering teams have experimented with AI tools but have not shipped production AI systems. The assessment should evaluate: who in the organisation has genuine AI depth, whether that knowledge is concentrated in one person (key person risk, amplified), and whether the team can maintain and evolve AI features after they are built.

An organisation with a single AI specialist who built the prototype and is the only person who understands the model architecture has a key person risk that is qualitatively different from — and usually more severe than — a traditional key person dependency on a CTO.

Are AI-Specific Risks Managed?

Responsible AI is not just an ethical concern — it is a commercial and regulatory one. The assessment should evaluate: whether the organisation has considered bias in its models and data, whether it has a position on data provenance and copyright, whether it is tracking the regulatory landscape (the EU AI Act imposes specific obligations on certain AI use cases), and whether AI governance is formalised or informal.

Companies that have thought carefully about these questions are better positioned than those that have not — both for regulatory compliance and for the kind of scrutiny that sophisticated acquirers and investors now apply.

What to Look for in an AI Assessment

If you are commissioning technology due diligence and AI is material to the investment thesis, the quality of the assessment depends heavily on the assessor's own AI expertise.

A simplified test: ask the assessor what production AI systems they have personally built or deployed. If the answer involves only theoretical knowledge, advisory work, or managing teams that use AI — rather than direct hands-on experience — the assessment will struggle to distinguish genuine AI capability from well-presented marketing.

Our assessors bring both practitioner experience and PhD-level AI expertise to every engagement where AI is material. This is not because AI assessment requires a PhD — it is because evaluating model quality, data architecture, and production inference systems at the depth required for a credible DD finding requires current, hands-on knowledge that evolves rapidly.

For a broader view of how AI fits into technology due diligence across PE portfolios, our piece on the AI diligence gap covers the systemic challenges investors face. For the full scope of what a thorough technology assessment covers beyond AI, see our technology due diligence checklist.


References

  1. European Commission. EU Artificial Intelligence Act. EUR-Lex (2024).
  2. ICO. Guidance on AI and Data Protection. Information Commissioner's Office (2024).
  3. DSIT. AI Regulation: A Pro-Innovation Approach. GOV.UK (2024).

Need AI assessed as part of technology due diligence?

Our assessors combine hands-on AI delivery experience with PhD-level expertise. AI is integrated into every technology assessment we deliver — not as a bolt-on, but as a fundamental part of how we evaluate technology organisations.