
AI Readiness Assessment: What Investors Should Actually Measure
Every board deck now mentions AI. Every portfolio company has an "AI strategy." Every investment memorandum describes AI as a growth lever. The question that operating partners, investment directors, and portfolio managers increasingly need answered is not whether AI matters; it is whether the claims being made about AI readiness are real.
The standard approach to answering that question is a maturity model. Rate yourself one to five across a dozen dimensions, plot the results on a spider chart, present it to the board. The problem is that self-reported maturity questionnaires are not assessments. They are exercises in institutional self-perception, and institutional self-perception is consistently wrong about AI.
After assessing AI readiness across dozens of portfolio companies for PE firms, VC investors, and operating partners managing technology-intensive portfolios, we have a clear view of what an AI readiness assessment needs to measure and where the standard approach consistently fails.
Why Most AI Readiness Assessments Fail
Before describing what to measure, it is worth being direct about why the current approach does not work.
Self-reported maturity is unreliable. In a 2025 randomised controlled trial by METR, experienced open-source developers using AI tools believed they were 20 per cent faster. They were actually 19 per cent slower: a nearly 40-percentage-point perception gap. This is not a reflection of AI's limitations; it is a reflection of how poorly people assess their own AI capability. Self-reported questionnaires capture this perception, not the reality.
Generic frameworks miss the investment context. A five-dimension maturity model that could apply to any company is not useful for an investor evaluating a specific portfolio company against a specific value creation plan. AI readiness is not an abstract quality; it is readiness to capture specific opportunities and defend against specific threats within a specific investment timeline.
The assessors lack practitioner experience. The people conducting most AI readiness reviews have theoretical knowledge, advisory experience, or a management consulting framework. They have not personally built or deployed production AI systems. This matters because AI capability is difficult to evaluate from the outside. A company that claims "we use AI across the business" might have a thin API wrapper that any competent team could replicate in a week, or it might have genuinely differentiated AI capability embedded in its core product. Distinguishing the two requires hands-on experience that most assessors do not have.
The output is not actionable. A spider chart showing "Data Maturity: 2.5 / 5" tells an operating partner nothing about what to do. What specific data problems exist? How long would they take to fix? How much would it cost? What is the commercial impact of not fixing them? The assessment needs to produce a roadmap, not a rating.
What Investors Should Actually Measure
An AI readiness assessment that serves investment decisions needs to cover seven dimensions. These are not arbitrary: they reflect the areas where we consistently find material gaps between what companies claim and what they can evidence.
1. Strategy and Use-Case Maturity
The first question is not "do you have an AI strategy?" It is "can you name a specific use case, the team responsible, the metric it will move, and the date it ships?"
The difference between those two questions is the difference between a company that is ready and one that is not. We see the same pattern repeatedly: a board-approved AI strategy document that describes ambitious goals in general terms, with no named owner, no measurable target, and no delivery date. Compare that with a company that says "we are deploying AI-assisted ticket routing by March, led by our platform team, targeting a 30 per cent reduction in average resolution time." The second company may or may not succeed, but it is operating with the specificity that makes success possible.
For investors evaluating a portfolio, the strategy dimension answers the most basic question: does this company know what it is doing with AI, or is it performing readiness?
2. Data Foundations
Data problems are among the most expensive findings in any AI readiness assessment because they are rarely quick to fix. A company cannot deploy effective AI on poor data, and most companies overestimate their data quality because they have never had to use their data for anything as demanding as machine learning.
What we assess: data quality and completeness, governance frameworks, lineage and cataloguing, privacy compliance (particularly under GDPR), and whether the data infrastructure could support production ML workloads or only experimental ones. A company with a centralised data warehouse but no formal governance framework, where data quality is manually validated before reporting cycles and there is no automated lineage tracking, is in a fundamentally different position from one with mature data operations, even if both describe themselves as "data-driven."
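The gap between manual, reporting-cycle validation and automated data operations can be made concrete with a small example. The sketch below is purely illustrative: the field names, records, and the 95 per cent threshold are hypothetical, and a real review would run checks like this across every dataset the ML workload depends on.

```python
# Illustrative completeness check of the kind an automated data-quality
# pipeline runs continuously. All field names and records are hypothetical.

def completeness(records: list[dict], fields: list[str]) -> dict[str, float]:
    """Share of records with a non-empty value for each field."""
    n = len(records)
    return {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / n
        for f in fields
    }

# A hypothetical extract from a CRM table.
crm_rows = [
    {"customer_id": "C1", "segment": "SMB", "renewal_date": "2026-03-01"},
    {"customer_id": "C2", "segment": None,  "renewal_date": "2026-05-14"},
    {"customer_id": "C3", "segment": "ENT", "renewal_date": None},
    {"customer_id": "C4", "segment": "",    "renewal_date": "2026-01-20"},
]

scores = completeness(crm_rows, ["customer_id", "segment", "renewal_date"])
for field, share in scores.items():
    # Flag any field below an (illustrative) 95 per cent completeness bar.
    flag = "OK" if share >= 0.95 else "GAP"
    print(f"{field:13} {share:.0%}  {flag}")
```

A company doing this automatically, on every dataset, every day, is in a different position from one doing it by hand before board reporting.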
3. Technology and Infrastructure
Cloud maturity, integration architecture, observability, and increasingly MLOps or LLMOps readiness. This is straightforward for experienced CTOs to assess, but there is a specific challenge with AI: the tooling landscape is evolving so rapidly that assessors need current, hands-on experience to distinguish genuine capability from vendor marketing.
A company running production ML workloads on a well-architected platform with proper monitoring, model versioning, and deployment automation is genuinely ready. A company that has installed an LLM integration plugin and calls it "AI infrastructure" is not. The difference is obvious to a practitioner and invisible to someone reading a self-assessment form.
4. Talent and Operating Model
This is where the AI development maturity spectrum becomes an assessment tool rather than just a framework. Where is the engineering team on the spectrum from reactive prompting to agentic engineering?
A team at Level 1 (what Andrej Karpathy called "vibe coding") describes AI in terms of tools: "we use Cursor" or "we have GitHub Copilot." A team at Level 3 or 4 describes AI in terms of practices: "we write specifications first," "we have shared context files that any team member can use," "our CI pipeline includes AI-assisted code review." The difference in productivity between these levels is not incremental; it is the difference between capturing 20 per cent of available value and capturing 80 per cent.
We also assess knowledge distribution. AI capability concentrated in one engineer is a key person risk that is qualitatively more severe than a traditional CTO dependency. If the person who built the AI pipeline leaves, the company does not just lose expertise; it loses the ability to maintain, update, or extend the systems that depend on that expertise.
5. Security and Privacy
Shadow AI adoption is near-universal. Employees are using ChatGPT, Claude, and other tools for work-related tasks, often sending company data, including customer information, proprietary code, and strategic documents, to third-party processors without any data processing agreement in place.
We have assessed companies where the security team had no visibility into AI tool usage, where customer support agents were pasting ticket contents into consumer AI tools, and where engineering teams were using AI coding assistants that sent code to external servers without the CTO's knowledge. In one healthcare company, a finalist platform for a major procurement would have sent all patient data to a third-party AI vendor's servers in the US, a breach of data sovereignty requirements that nobody in the evaluation process had identified.
Security assessment needs to cover the five levels of AI security, from Level 0 (trusting vendors as-is, likely in breach) through to Level 3 and 4 (private cloud or on-premise deployment with full data control). Most mid-market companies are operating at Level 0 or 1 and do not know it.
6. Financial Impact
This is the dimension where most assessments are weakest and where investors need the most rigour. "Quantify" means money, not vague productivity percentages.
An AI readiness assessment for an investor should produce specific numbers: "Implementing AI-assisted ticket routing will reduce average resolution time by 30 per cent, saving an estimated £420,000 annually in support costs with a payback period of four months." That requires baseline metrics from the company, assumption models for AI impact grounded in evidence, and financial arithmetic that an investment committee can scrutinise.
Vague claims like "we estimate 20 to 30 per cent productivity improvement" are not findings. They are guesses dressed as analysis. An investor reads financial models for a living; the assessment output needs to meet that standard.
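The arithmetic behind a finding like the ticket-routing example above is simple enough to show directly. Every input in this sketch is a hypothetical assumption (ticket volume, handling time, agent cost, the 30 per cent reduction, the build cost); the point is that each one is a number the company must evidence and the investment committee can challenge.

```python
# Illustrative payback arithmetic for an AI-assisted ticket-routing case.
# All inputs are hypothetical assumptions, not figures from any assessment.

def payback_months(implementation_cost: float, annual_saving: float) -> float:
    """Months until cumulative savings cover the implementation cost."""
    return implementation_cost / (annual_saving / 12)

# Baseline metrics the company must supply and evidence.
tickets_per_year = 120_000
minutes_per_ticket = 14.0          # average handling time today
fully_loaded_cost_per_hour = 25.0  # support-agent cost, GBP

# Assumption model: AI-assisted routing cuts handling time by 30 per cent.
reduction = 0.30
hours_saved = tickets_per_year * minutes_per_ticket * reduction / 60
annual_saving = hours_saved * fully_loaded_cost_per_hour

implementation_cost = 140_000.0    # build plus integration, GBP

print(f"Annual saving: £{annual_saving:,.0f}")
print(f"Payback: {payback_months(implementation_cost, annual_saving):.1f} months")
```

With these inputs the model yields £210,000 in annual savings and an eight-month payback; change any assumption and the conclusion moves with it, which is exactly the scrutiny an investment committee should apply.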
7. Competitive Vulnerability
This is the dimension that most AI readiness models omit entirely, and it is arguably the most important for investors.
AI is not just an internal capability question; it is a competitive dynamics question. Is the company's market being disrupted by AI-native entrants? Is its pricing model (particularly seat-based SaaS) vulnerable to outcome-based competitors? Are incumbents embedding AI in ways that erode the company's differentiation?
Intercom's transformation is the worked example: a company that bet £100 million on AI, accepted 40 per cent employee turnover, shipped a working product within six weeks of ChatGPT's launch, and went from declining revenue to approaching £100 million in AI-derived ARR. Every portfolio company in a comparable market needs to be assessed against the possibility that a competitor will make the same move.
The vulnerability assessment asks: if an AI-native competitor entered this market tomorrow, what would they do differently? How quickly could this company respond? And does the management team understand the threat, or are they focused exclusively on internal efficiency gains?
The Portfolio View
For PE and VC firms, the value of an AI readiness assessment extends beyond individual companies. Assessed consistently across a portfolio, the seven dimensions produce a readiness-versus-vulnerability map, a view that answers a different question: where should the fund invest in AI enablement to create the most value?
A company with high readiness and low vulnerability is in a strong position: it has the capability and the market is not forcing urgency. A company with low readiness and high vulnerability is the priority: it lacks capability and faces competitive pressure. The portfolio view makes these trade-offs visible in a way that individual company assessments do not.
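The quadrant logic described above can be sketched in a few lines. The company names, scores, and the midpoint threshold below are hypothetical; in practice the readiness score aggregates the seven dimensions and the vulnerability score comes from the competitive assessment.

```python
# Minimal sketch of a readiness-versus-vulnerability map.
# Company names, scores, and the threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Company:
    name: str
    readiness: float      # 0-10, aggregated across the seven dimensions
    vulnerability: float  # 0-10, exposure to AI-native competition

def quadrant(c: Company, threshold: float = 5.0) -> str:
    """Place a company in one of the four readiness/vulnerability quadrants."""
    high_r = c.readiness >= threshold
    high_v = c.vulnerability >= threshold
    if high_r and high_v:
        return "defend: capable, but the market is forcing urgency"
    if high_r:
        return "strong position: capable, low urgency"
    if high_v:
        return "priority: low capability, high competitive pressure"
    return "monitor: low capability, low urgency"

portfolio = [
    Company("Alpha SaaS", readiness=7.5, vulnerability=8.0),
    Company("Beta Services", readiness=3.0, vulnerability=7.0),
    Company("Gamma Tools", readiness=8.0, vulnerability=2.5),
]

# Surface the biggest readiness shortfalls first.
for c in sorted(portfolio, key=lambda c: c.vulnerability - c.readiness, reverse=True):
    print(f"{c.name:14} -> {quadrant(c)}")
```

Sorting by the gap between vulnerability and readiness puts the fund's enablement priorities at the top of the list.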
Cross-portfolio insights compound the value further. A pattern discovered in one company, whether an effective AI integration approach, a vendor that delivers, a governance framework that satisfies regulators, or a training programme that measurably improves productivity, can be deployed across the portfolio in weeks rather than months. The fund is not paying for the same learning twenty times.
The Incentive Problem Nobody Talks About
There is a structural reason why AI strategies exist on paper but rarely reach production, and it has nothing to do with technology.
If you are a CTO in a mid-market company, what is your incentive to drive AI aggressively into your business? Your engineering team might shrink. Your budget might get cut. Your role might change. The more effective AI becomes at augmenting software development, the less obvious it is that you need the same headcount, and the CTO is the person most exposed to that conclusion.
This is not cynicism. It is a rational response to incentive structures that most boards have not thought about. A board that mandates "an AI strategy" without understanding these dynamics will get impressive presentations and very little deployment. An AI readiness assessment that does not evaluate the incentive environment (whether the leadership team has genuine motivation to drive adoption, or whether the incentives work against it) is missing the most common reason that AI strategies fail.
We sit outside the political structure, with no stake in team size or budget allocation. In our experience, engineers who wield AI well become more valuable, not less. But that message needs to come from someone who is not threatened by it.
What a Useful Assessment Produces
The output of an AI readiness assessment should be a document that an investment committee can act on, not a spider chart that gets filed.
Per company:
- A scored assessment across all seven dimensions, substantiated by direct technical evaluation, not self-reported questionnaires
- A prioritised 12-month AI impact plan tied to specific business outcomes, with quantified ROI and payback windows
- A 90-day execution plan with named owners, acceptance criteria, and measurable milestones
- A risk register covering AI-specific risks: vendor concentration, data privacy, regulatory exposure, key person dependency
- Staffing recommendations: what capability exists, what needs to be hired, what can be trained
- A board-ready summary that a non-technical operating partner can present to the investment committee
Across the portfolio:
- A readiness-versus-vulnerability heatmap showing where to prioritise investment
- Cross-portfolio themes and shared opportunities
- Recommendations for shared platforms, vendors, or training programmes that create portfolio-level efficiency
The assessment is not the end point. It is the foundation for an AI enablement programme that moves from assessment to strategy to execution, delivered by the same practitioners who conducted the assessment, not handed off to a different firm.
Related Reading
- What We Actually Assess in an AI Readiness Review - a practitioner's view of how we evaluate each dimension
- AI Due Diligence: What You Need to Know - AI-specific assessment within technology due diligence
- What Is Vibe Coding? And Why It's Not Enough - the AI development maturity spectrum
- How Mid-Market CEOs Can Deploy AI at Speed - the five-level AI security framework
- The AI Diligence Gap - why PE firms need systematic AI assessment
- AI for Boards - how investors should evaluate portfolio company readiness
- AI Enablement Services - strategy, assessment, and training
- AI Bootcamp - intensive AI training for engineering teams
References
- METR. "Measuring the Impact of Early 2025 AI on Experienced Open-Source Developer Productivity." METR, 2025.
- Peng, S., Kalliamvakou, E., Cihon, P., and Demirer, M. "The Impact of AI on Developer Productivity: Evidence from GitHub Copilot." MIT / Microsoft Research, 2023.
- Gartner. "Gartner Predicts 60% of Code Will Be AI-Generated by End of 2026." Gartner, 2025.
- McKinsey & Company. "The State of AI in 2025." McKinsey Global Survey, 2025.
Need an AI readiness assessment you can trust?
We assess AI readiness across PE and VC portfolios - scored, quantified, and delivered by practitioners who build with AI daily. Not a maturity questionnaire. A technical evaluation that produces an actionable roadmap.