
“score board” by jm3 is licensed under CC BY-SA 2.0.
Is Your Tech Team Any Good? 10 Signs That Tell You.
You suspect something is not right with your technology team, but you cannot quite put your finger on what. Everything takes longer than promised. The same problems seem to recur. When you ask whether things are on track, you get reassuring words but a nagging feeling that the reality is different.
You are not alone. This is the most common reason non-technical CEOs contact us. They know something is wrong. They just cannot diagnose it, because the language and the processes are opaque to anyone without a technical background.
Having assessed technology teams across every sector from healthcare to construction, fintech to logistics, we have identified ten signs that reliably indicate whether your engineering team is performing or falling behind. You do not need to understand code to spot these. You need to know what questions to ask and what the answers mean.
“In the UK market, annual turnover above fifteen per cent in an engineering team should trigger concern. Above twenty-five per cent, something is seriously wrong.”
People
1. The Same Bugs Keep Reappearing
What it means: Your team is not learning from its mistakes. Either the root causes of problems are not being investigated, or the fixes are superficial — patching symptoms rather than addressing underlying issues. This is one of the clearest indicators of engineering maturity. A team that repeatedly fixes the same class of problem is spending engineering time on rework rather than progress — and the cumulative cost is substantial. In our experience, teams without a post-mortem discipline spend twenty to thirty per cent of their capacity on rework that could have been prevented.
The diagnostic question: "Can you show me the last three bugs that affected customers, and what we did to prevent each one from happening again?"
A good team will point you to post-mortem documents with root cause analysis, specific preventative actions, and evidence that those actions were implemented. A struggling team will give you vague answers about "fixing the bug" without reference to systemic improvement.
2. High Turnover in the Engineering Team
What it means: Something is driving people away. It might be poor leadership, lack of career progression, frustrating processes, or a codebase that nobody enjoys working on. Whatever the cause, high attrition is both a symptom and a compounding problem — each departure takes knowledge with it and puts more pressure on those who remain. The financial cost is significant: replacing a mid-level engineer in the UK typically costs between thirty and fifty thousand pounds when you factor in recruitment fees, onboarding time, and the productivity gap during the transition. Three departures in a year can easily cost more than a fractional CTO engagement.
The diagnostic question: "How many engineers have left in the past twelve months, and what reasons did they give in exit interviews?"
In the UK market, annual turnover above fifteen per cent in an engineering team should trigger concern. Above twenty-five per cent, something is seriously wrong. And if you are not conducting structured exit interviews, you are missing the most direct source of diagnostic information available to you.
Process
3. Every Release Takes Twice as Long as Planned
What it means: Delivery discipline is missing. The team may be over-promising, under-estimating complexity, or being disrupted mid-sprint by changing priorities. Whatever the cause, chronic over-run erodes trust between the technology team and the rest of the business. This trust deficit has consequences beyond the technology function — sales teams stop committing to feature timelines, product managers pad estimates with hidden buffers, and the board loses confidence in the technology roadmap. The business stops treating technology as a reliable capability and starts treating it as an unpredictable risk.
The diagnostic question: "What percentage of sprint commitments have been delivered on time over the past three months?"
A well-functioning team delivers eighty to ninety per cent of its sprint commitments consistently. If the number is below sixty per cent, the issue is not effort — it is planning discipline, scope management, or unrealistic expectations. And if half of sprint commitments are being disrupted mid-cycle by changing priorities, the process itself is immature.
4. New Developers Take Six Months or More to Be Productive
What it means: Onboarding is broken. The codebase is undocumented, the development environment is difficult to set up, or institutional knowledge is concentrated in a few individuals who are too busy to train new joiners. Long onboarding times are expensive — you are paying a full salary for months of reduced output — and they compound the attrition problem, because new engineers who cannot get productive become frustrated and leave. This is also a leading indicator of documentation quality across the entire engineering function. If onboarding is slow, it is almost certain that other critical knowledge — architecture decisions, operational procedures, incident response — is equally poorly documented.
The diagnostic question: "How long does it take a new engineer to make their first meaningful contribution, and what do we do to accelerate that?"
Industry best practice is four to six weeks to first meaningful contribution for a well-structured team with reasonable documentation. If your team says "a few months," the documentation, tooling, and mentoring practices are inadequate.
“Process problems are the most fixable issues on this list. Delivery discipline — sprint planning, scope management, honest estimation — can be rebuilt in four to six weeks with the right leadership focus.”
Product
5. "Tech Debt" Is the Excuse for Everything
What it means: Technical debt is real, but when it becomes the universal explanation for slow delivery, it often masks a deeper problem. Either the architecture is genuinely limiting — every new feature requires working around accumulated shortcuts — or the team is using "tech debt" as a catch-all excuse to avoid accountability for delivery timelines. The distinction matters. Genuine architectural debt requires investment to resolve. A culture of using "tech debt" as an excuse requires a different intervention entirely — better estimation practices, clearer accountability, and honest conversations about why delivery is slow. When we assess teams, we ask to see the specific technical debt items and their business impact. Teams that can articulate this clearly have a real problem they understand. Teams that cannot are often using the term as a shield.
The diagnostic question: "What percentage of engineering time is dedicated to technical debt reduction, and what specific improvements have been completed in the past quarter?"
A healthy team allocates ten to twenty per cent of its engineering capacity to continuous technical improvement. If the answer is "we have a tech debt sprint planned for next quarter," that is the wrong approach — occasional big cleanups are insufficient and suggest that debt is being ignored until it becomes unbearable.
6. You Cannot Pivot When Business Needs Change
What it means: The system architecture is too rigid. Changes that should take weeks take months. Adding a new feature requires modifying code across multiple systems. The technology has become a constraint on the business rather than an enabler. This is particularly dangerous in competitive markets where the ability to respond quickly to customer needs or market shifts is a genuine differentiator. We have seen companies lose significant commercial opportunities because their technology could not adapt quickly enough — a new partnership that required an integration, a regulatory change that demanded rapid compliance, or a customer request that should have been straightforward but required months of architectural rework.
The diagnostic question: "If we needed to add a significant new capability — a new payment method, a new customer segment, a new integration — how long would it take, and what would need to change?"
The answer reveals the flexibility of the architecture. A well-designed system can accommodate new requirements in weeks. A brittle system requires months of refactoring before the new work can even begin.
Protection
7. You Cannot Answer "Are We Secure?"
What it means: If you do not know your security posture, you probably do not have one. Security is a continuous practice of assessment, improvement, and vigilance — not a box you tick once. The absence of that practice is itself a risk. The average cost of a data breach in the UK is measured in millions of pounds, and for mid-market businesses the reputational damage can be existential. Beyond the direct financial impact, security failures increasingly attract regulatory attention — the ICO has become significantly more active in enforcement, and the penalties for negligent data handling are substantial.
The diagnostic question: "When was our last penetration test, and what did it find?"
Annual penetration testing is the minimum expectation for any company handling customer data. If the answer is "we have never had one" or "I am not sure," your security posture is an unknown — and unknowns in security are always bad news.
8. There Is No Disaster Recovery Plan
What it means: One incident — a server failure, a ransomware attack, a cloud provider outage — could take your business offline for days. Without a tested disaster recovery plan, you are relying on luck and improvisation. The companies that recover quickly from incidents are the ones that have practised. The ones that lose days or weeks are almost always the ones who assumed their backups worked, assumed their team knew what to do, and discovered in the middle of a crisis that neither assumption was correct. We have seen companies lose access to their own production data because nobody had verified that backups were actually completing successfully.
The diagnostic question: "If our primary systems went offline right now, how long would it take to restore service, and when did we last test that?"
Every company needs a tested disaster recovery plan with defined recovery time objectives. "Tested" is the operative word — a plan that exists only in a document and has never been rehearsed is barely better than no plan at all. If your team cannot answer this question with specific numbers, the plan either does not exist or has never been validated.
Platform
9. System Downtime Is Affecting Revenue
What it means: Reliability has not been treated as a first-class engineering concern. Downtime that impacts revenue — whether through lost transactions, frustrated customers, or SLA penalties — indicates that the platform infrastructure is not designed for the availability the business requires. The cumulative impact is often larger than individual incidents suggest. Each outage erodes customer confidence, and in B2B relationships a pattern of unreliability can trigger contract exit clauses or prevent renewals. We frequently find that the true revenue impact of downtime is three to five times what the engineering team estimates, because they measure only the direct transaction loss, not the downstream commercial consequences.
The diagnostic question: "How much unplanned downtime have we had in the past twelve months, and what was the estimated revenue impact?"
If nobody can answer this question precisely, that is itself a finding. A mature technology operation tracks availability metrics, understands the financial impact of outages, and has specific targets for improvement.
10. Cloud Costs Keep Rising Without Explanation
What it means: Nobody is managing infrastructure spend. Cloud computing is powerful, but it makes overspending effortless. Without active cost management — understanding what each service costs, why costs are growing, and whether the spending is proportionate to the value delivered — cloud bills become a silent drain on profitability. Unmanaged environments accumulate waste quickly: unused instances left running, oversized databases, redundant services that nobody has decommissioned. The savings from a single optimisation exercise regularly cover its own cost several times over.
The diagnostic question: "What is our monthly cloud spend, how has it changed over the past year, and what is driving the change?"
Infrastructure cost should be actively managed. Developers should be aware of the actual execution costs of the services they build. If your cloud bill has doubled in a year and nobody can explain why, the cloud provider is not to blame — a cost-conscious engineering culture is missing.
“We routinely find twenty to forty per cent waste in cloud environments that have never been actively optimised. For a company spending fifty thousand pounds a month on infrastructure, that represents a potential saving of ten to twenty thousand pounds monthly with no impact on performance.”
What to Do With This Information
Count how many of these signs apply to your organisation.
The encouraging reality is that most of these problems are fixable within three to six months with the right leadership and the right priorities. The harder part is knowing which ones matter most for your specific business — which issues are urgent, which are important, and which can wait. That prioritisation is what experienced technology leaders bring to the table.
Your Score
0–2 signs: Your team is likely performing well. Any signs you have identified are common and fixable with focused attention.
3–5 signs: There are meaningful gaps that are probably costing you money and slowing your growth. An independent assessment would identify the highest-priority issues and the most effective interventions.
6 or more: The technology organisation needs significant attention. The problems are likely interconnected — poor process leads to poor quality leads to attrition leads to knowledge loss leads to slower delivery. Addressing this requires senior technology leadership, either internal or external. If you want to understand the root causes in more depth, our guide on why tech teams do not deliver breaks down the five most common patterns.
Related Reading
- Technology Audit — independent assessment of your technology team and systems
- Fractional CTO Services — ongoing senior technology leadership for your business
- Why Your Tech Team Isn't Delivering — the five root causes behind slow engineering delivery
Need an independent view of your technology team?
Our technology audits give CEOs and boards the clarity they need to make informed decisions about their engineering organisation.