Rational Partners

The 5P Framework

Five pillars that determine whether a technology organisation can scale.

People

Team capability, structure, key person risk, and leadership evaluation.

Process

Development practices, delivery capability, and team effectiveness.

Product

Technology architecture, code quality, and technical debt assessment.

Protection

Security posture, compliance, and risk management assessment.

Platform

Infrastructure, scalability, and operational readiness evaluation.

A Framework Built on Operational Experience

This is not a checklist. Checklists miss the things that matter. Our framework was built by people who have scaled technology organisations themselves, because distinguishing a concerning finding from a stage-appropriate reality requires judgement that only comes from doing the job. For a practical overview of what we look for, see our technology due diligence checklist and the detailed account of what we assess in tech DD.

The five pillars interact in ways more revealing than any single assessment. A struggling architecture almost always has roots in team structure or deployment practices. Security gaps are frequently symptoms of infrastructure immaturity. Understanding these connections turns findings into strategy.

People

What We Assess

Team structure and whether it supports or blocks delivery. Leadership depth across the organisation, not just the CTO. Key person risk: where knowledge sits, and what happens if those people leave. Hiring capability, team composition, and whether the structure can support planned growth.

Where We Stand

Cross-functional squads with clear ownership. Long-lived product squads that own a domain end-to-end build deeper expertise and deliver faster than teams organised by function.

Strong technical leadership is non-negotiable. A non-technical CTO creates a void filled by whoever shouts loudest or codes fastest. Founder-driven technical decisions create a bottleneck that worsens as the organisation grows.

Key person risk is a valuation risk. In roughly forty percent of our assessments, critical knowledge sits with one or two people. An acquirer will find this and price it into the deal. Knowledge must be distributed through documentation, pairing, and deliberate rotation.

Niche technology choices compound hiring problems. Languages like Elixir or Scala mean premium salaries, smaller candidate pools, and longer time-to-fill. These costs grow every year.

What We Find

The most common finding is key person dependency — one engineer who built the platform and is the only person who understands it. The second is a mismatch between team structure and delivery ambition: three simultaneous product initiatives with a single team, or thirty engineers still operating as one undifferentiated group.

"Key person risk is not merely an operational risk — it is a valuation risk. An acquirer or investor will identify this dependency and price it into the deal."

What concerns us: no engineering management layer between CTO and individual contributors, high attrition with no clear explanation, and cultures where weekend work is normalised rather than treated as a process failure.

What impresses us: CTOs who can articulate the growth plan for every engineer, systematic knowledge sharing, and progression frameworks built before they were strictly needed.

Process

What We Assess

How code moves from an engineer's laptop to production. Whether agile practices drive genuine iteration or just ceremony. Deployment frequency and automation. Technical debt management. Roadmap planning. Incident management, architectural decision-making, and how product and engineering collaborate on priorities.

Where We Stand

Trunk-based development with feature flags. Long-lived feature branches increase integration conflicts, delay feedback, and make releases riskier. Trunk-based development separates deployment from release. This is standard practice in well-functioning teams, not aspirational.
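The separation of deployment from release can be sketched in a few lines. This is a minimal illustration, not any particular flag service's API; the flag name, environment variable, and checkout functions are all hypothetical.

```python
import os

# Deployment vs release: this code can ship to production at any time,
# because the new path stays dark until the flag is switched on.
# "new_checkout" and FLAG_NEW_CHECKOUT are illustrative names.

def is_enabled(flag: str) -> bool:
    """Read a flag from the environment; default to off."""
    return os.environ.get(f"FLAG_{flag.upper()}", "off") == "on"

def legacy_checkout_flow(cart: list) -> dict:
    return {"total": sum(cart), "flow": "legacy"}

def new_checkout_flow(cart: list) -> dict:
    return {"total": sum(cart), "flow": "new"}

def checkout(cart: list) -> dict:
    # The release decision lives here, not in the deploy pipeline.
    if is_enabled("new_checkout"):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)
```

Flipping the flag on releases the feature without a deploy; flipping it off is the rollback.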

Manual deployment is the strongest signal of process immaturity. It does not scale, it introduces human error, and it constrains iteration speed. Teams that deploy manually almost always have gaps in testing, inconsistent environments, and limited observability. It is the canary in the coal mine.

Technical debt needs continuous attention, not annual cleanup. "Tech debt sprints" create a false sense of progress while debt accumulates the rest of the year. Allocate ten to twenty percent of engineering time to technical improvement every sprint, every quarter. Do not make it compete with features for prioritisation.

Now/Next/Later over detailed long-range estimates. Six-month project estimates are false precision. Focus on MVP thinking and iterative releases.

What We Find

After manual deployment itself, the most common Process finding is "agile theatre" — teams that run Scrum ceremonies without substance. Standups where nothing changes. Retrospectives that produce action items that are never completed. Sprint planning as formality rather than genuine prioritisation.

We also frequently find disconnected product and engineering roadmaps. When the engineering team maintains a separate technical roadmap, technical debt is systematically deprioritised because it has no product sponsor.

"Show us how you deploy, and we can predict what we will find across every other pillar. Process maturity radiates outward from the deployment pipeline."

What concerns us: deployments that require specific individuals, teams that cannot deploy on Fridays, and QA functions that act as release gates rather than quality enablers.

What impresses us: teams that deploy to production multiple times a day without fanfare, and CI/CD pipelines treated as products with their own owner and roadmap.

Product

What We Assess

Architecture and whether it fits the current scale and planned growth. Code quality patterns that reveal how the team thinks about maintainability and testability. Technical debt: how much exists, how well the team understands it, and whether there is a credible plan. Technology choices and their alignment with business direction.

"Architecture debt is the most common silent killer of scaling companies. In roughly sixty percent of our assessments, we find critical architectural decisions that will block scaling within eighteen months."

Where We Stand

Microservices are often premature. The ratio of services to engineers matters. When an organisation has more services than engineers, the operational overhead exceeds the benefit. Most teams under fifty engineers lack the platform capability to run a distributed system well. The modular monolith isolates functionality without the complexity of network latency, eventual consistency, and distributed tracing. Architecture should match team size, not engineering aspiration.

Default to buy, build only when necessary. Custom authentication systems are almost always inadequate for enterprise requirements. The right question is not "can we build this?" but "should we, given the lifetime cost of maintaining it?"

Mainstream technologies with large ecosystems. Language choices should be driven by talent availability and ecosystem maturity, not engineering preferences. Niche languages create compounding operational risk.

The test pyramid is non-negotiable. Unit tests at the base, written alongside all new code; integration tests in the middle; end-to-end tests at the top. Low coverage snowballs as the codebase grows, making it progressively harder to refactor with confidence.
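A sketch of what the base of that pyramid looks like in practice — the function and its tests are illustrative, not drawn from any real codebase. Pure business logic like this invites cheap, fast unit tests that run on every commit.

```python
def apply_discount(price: float, pct: float) -> float:
    """Pure business logic: the natural target for unit tests."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

# Base of the pyramid: many small, isolated tests written with new code.
def test_basic_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_zero_discount_is_identity():
    assert apply_discount(59.99, 0) == 59.99

def test_invalid_percentage_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Integration and end-to-end tests then cover the seams these tests cannot reach, in deliberately smaller numbers.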

What We Find

The most common Product finding is architectural decisions that will block scaling. These are rarely dramatic — a database schema that cannot support multi-tenancy, a synchronous pipeline that will not handle peak load, a monolith that has grown without internal boundaries.

The second most frequent finding is premature microservices. Eight engineers operating twenty-five services, each needing its own deployment pipeline, monitoring, and on-call. The team spends more time managing infrastructure than building features — the opposite of what the adoption was meant to achieve.

What concerns us: teams that describe their architecture as "microservices" when it is actually a distributed monolith — all the complexity with none of the benefits. Architecture diagrams that do not reflect reality.

What impresses us: deliberate, documented architecture decisions with clear trade-off analysis. Companies that resisted the temptation to over-engineer and built what they need for their current stage.

Platform

What We Assess

Cloud architecture, service selection, and deployment topology. Cost management and whether spend is understood at a granular level. Disaster recovery with defined and tested objectives. Monitoring and observability. CI/CD pipeline maturity. On-call arrangements, incident management, and the daily discipline of running production systems.

Where We Stand

Managed services over self-hosted. RDS over self-hosted PostgreSQL, managed Kubernetes over bare-metal. For most organisations the operational overhead of self-hosting is not justified. Multi-availability-zone deployment is the minimum for production resilience.

Infrastructure as Code is non-negotiable. Terraform, CDK, Bicep — the tool matters less than the principle. Infrastructure defined in version-controlled code, deployable reproducibly, managed through the same review processes as application code.
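The principle behind those tools can be illustrated with a toy reconciler: infrastructure is declared as data in version control, and the tool computes a reviewable plan from the diff between desired and actual state. Terraform, CDK, and Bicep implement this at vastly greater depth; the resources below are made up.

```python
# Desired state lives in version control; actual state is what exists
# in the cloud account. The plan is the reviewable diff between them.

def plan(desired: dict, actual: dict) -> dict:
    return {
        "create": sorted(k for k in desired if k not in actual),
        "update": sorted(k for k in desired
                         if k in actual and desired[k] != actual[k]),
        "delete": sorted(k for k in actual if k not in desired),
    }

desired = {
    "vpc-main": {"type": "vpc", "cidr": "10.0.0.0/16"},
    "db-primary": {"type": "database", "engine": "postgres", "multi_az": True},
}
actual = {
    "vpc-main": {"type": "vpc", "cidr": "10.0.0.0/16"},
}
```

Because the plan is derived from code, it can pass through the same pull-request review as any application change — which is exactly the point.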

Cloud costs are almost universally neglected until they become a board-level problem. Manage costs from the start, not retrospectively when the AWS bill triples. Developers should know the actual execution cost of the services they build.

Tested DR plans, not theoretical ones. A plan that has never been executed is not a plan. Recovery testing should happen at least annually.

Alerts must be actionable. Eighty alerts a week creates fatigue worse than no alerts at all. If an alert does not require someone to act, it should not be an alert.

"Cloud cost management is almost universally neglected until it becomes a board-level problem. Infrastructure cost should be actively managed from the start, not discovered retrospectively when the AWS bill triples."

What We Find

Cloud cost waste is the most consistent Platform finding. Development environments sized identically to production. Reserved instances not purchased for stable workloads. Storage not lifecycle-managed. The pattern is consistent: costs go unreviewed until they become alarming.

The second most common finding is the absence of infrastructure as code. Environments built by hand through console clicks. No reproducibility, no audit trail, and disaster recovery that depends on someone remembering what they did six months ago.

What concerns us: no DR testing, teams that cannot state their recovery time objective, and infrastructure that has never been rebuilt from scratch.

What impresses us: infrastructure treated as a first-class engineering concern with its own roadmap. Cost dashboards reviewed regularly by engineering leadership. Platform teams that enable rather than gatekeep.

Protection

What We Assess

Security testing practices and whether they are integrated into the development pipeline. Access control and whether least privilege is applied and audited. Data protection: encryption, GDPR compliance, handling of sensitive data across environments. Compliance posture and certifications. Incident response capability and whether procedures have been tested.

Where We Stand

Annual penetration testing is our minimum expectation. No pen testing is the single most common critical gap we find. Both black-box and white-box testing are necessary. SAST and DAST should be standard in the CI/CD pipeline.

Least privilege, always. All developers having production access is insecure at scale. Two-factor authentication is mandatory for all sensitive accounts. Production database access should be restricted, audited, and justified.

Production data must be anonymised before copying to lower environments. This is violated more often than it should be, exposing customer data to development environments with weaker controls.
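A sketch of what a compliant refresh step might do, assuming a simple record shape — the field names and salt handling are illustrative. Deterministic hashing keeps join keys stable across tables while stripping the personal data.

```python
import hashlib

def pseudonym(value: str, salt: str) -> str:
    """Deterministic, irreversible replacement that preserves joins."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def anonymise(record: dict, salt: str = "rotate-per-refresh") -> dict:
    # Hypothetical record shape: id, email, name, plan.
    return {
        "id": pseudonym(record["id"], salt),  # same input -> same key everywhere
        "email": pseudonym(record["email"], salt) + "@example.invalid",
        "name": "REDACTED",
        "plan": record["plan"],  # non-personal fields pass through unchanged
    }
```

Run as part of every environment refresh, this keeps staging realistic enough to test against without copying customer identities into a weaker security perimeter.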

ISO 27001 demonstrates serious enterprise commitment. SOC 2 suits maturing companies targeting the US market. Even where certification is not yet justified, working toward the standards provides valuable structure.

"The absence of pen testing is not merely a gap; it is an indicator of a broader security culture that treats security as an afterthought rather than a foundational discipline."

What We Find

No penetration testing is our most frequent critical finding. It is remarkable how many organisations handling sensitive data have never had an independent security assessment. The absence signals a broader culture that treats security as an afterthought.

The second most common finding is excessive production access. Every developer can reach every database, every customer record, with no audit trail. Understandable at five people. A material risk at thirty.

We also frequently find production data in non-production environments — customer records copied to staging without anonymisation. This is a GDPR violation and a data breach waiting to happen.

What concerns us: secrets committed to source control, no security training, and incident response plans that have never been tested.

What impresses us: developers who routinely flag security concerns in code review, and organisations that have been through a security incident, conducted a thorough post-mortem, and made systematic changes as a result.

How the Pillars Connect

The five pillars interact in ways more revealing than any individual finding.

Architecture problems almost always have roots in People. Microservices in an eight-person team is not an architecture problem — it is a leadership problem. Poor Process creates Protection gaps: teams that deploy manually rarely have automated security scanning. Platform debt limits Product capability: teams that cannot deploy quickly cannot iterate quickly, and platform limitations cascade upward.

People problems manifest everywhere. A team without senior technical leadership will make poor architecture decisions, skip security practices, tolerate manual processes, and neglect infrastructure. When we find a gap, we investigate why it exists. The root cause determines the recommendation.

Stage-Appropriate Maturity

"Good" looks different at different stages. We do not compare a five-person startup to a hundred-person enterprise.

At seed stage, technical debt is acceptable if the team understands it. The question is not "do they have ISO 27001?" but "do they have foundations to build toward it when they need to?" At Series A, we expect deliberate hiring, processes that support rather than constrain, and architecture designed with the next twelve months in mind. At Series B and beyond, team structure should support autonomous delivery, processes should be mature, and infrastructure fully automated.

The companies that perform best are not the ones that have solved every problem. They are the ones that understand where they are, know what to address next, and have a credible plan.

Ready to get started?

Whether you are evaluating an investment or assessing a portfolio company, the 5P Framework provides clarity.