AI doesn’t fix bad software engineering… it amplifies it.

AI is almost certainly the greatest gamechanger of our age. Conceivably, of any age. AI’s impact is already pervasive and we have only started to scratch the surface of its transformative potential.

But – sorry to be the party-pooper – AI is also the subject of the most distorted and delusional claims in technological history…

Sincere apologies for this sudden reality check, but getting a true perspective is vital. Right now, companies are signing up to all kinds of snake-oil promises. AI will solve your decades-old legacy system problems… business users will be empowered to write their own applications… security vulnerabilities will disappear… development costs will plummet. It’s palpable – costly – nonsense. AI is not some magical alchemy for turning broken processes into digital gold. So, let’s cut through the hype and hallucination to focus on the truth…

The uncomfortable truth about AI and code

David Elliman, Chief of Software Engineering at Zuhlke, recently delivered a presentation that should make every executive pause before banking their software future on AI. His central thesis? AI doesn’t fix bad software engineering – it amplifies it.

Think of AI as a high-performance sports car. Hand it to a Formula 1 driver on the Silverstone race circuit, and you will witness breathtaking performance. Give it to a teenager in a shopping centre car park, and you’re heading for disaster. The difference isn’t the car – it’s the skill and capability behind the wheel.

Here’s what most executives don’t appreciate: Large Language Models (LLMs) don’t actually ‘understand’ code any more than a pianola ‘understands’ Chopin. They’re sophisticated pattern-matching engines, trained on billions of lines of text and code. They create probabilistic associations through mathematical relationships rather than genuine comprehension.

Why does this matter? Because when your AI coding assistant confidently suggests using a non-existent API or introduces a security vulnerability (which happens in 40% of AI-generated code), it is blissfully unaware of its error. It is not being deceitful – it simply lacks the capacity for self-awareness that human expertise provides.

The great democratisation myth

Perhaps the most seductive – and dangerous – narrative circulating in boardrooms today is that AI will ‘democratise’ software development. The vision is intoxicating… Business users will directly create applications. We will eliminate the bottleneck of scarce technical talent. And we will finally achieve the holy grail of business-IT alignment.

But does this all sound strangely familiar? It should. We’ve heard this song before…

Twenty years ago, we were promised that workflow engines and low-code platforms would eliminate the need for developers. Fifteen years ago, it was service-oriented architecture. Ten years ago, it was the cloud. Each time, the promise was the same: technology would finally bridge the gap between business intent and software reality.

So, what actually happened? These tools didn’t eliminate complexity. They relocated it. Instead of needing fewer specialists, organisations discovered they needed different specialists, often with even scarcer (and more expensive) skills. The fundamental challenge of translating messy business requirements into precise logical instructions remained unchanged.

AI follows the same pattern, but with a twist that makes it profoundly more problematic: it creates an illusion of understanding that can be more dangerous than obvious ignorance.

Netflix versus everyone else: a tale of two strategies

While most companies chase the democratisation mirage, the truly successful adopters of AI are taking a radically different approach.

Netflix, for instance, isn’t using AI to replace their world-class engineering team – they are using it to make that team superhuman. Their AI agents handle auto-scaling and self-healing operations, working as sophisticated assistants to expert engineers who understand the broader system context.

Similarly, the US Air Force is attempting to modernise 40-year-old COBOL systems with AI assistance. But notably, they’re doing so with teams of experts who understand both legacy architectures and modern patterns.

The pattern and the message are clear: AI amplifies existing capabilities – it doesn’t create new ones from scratch.

The prerequisites nobody talks about…

Most AI software initiatives fail before they begin. Elliman identified five critical prerequisites that separate AI success stories from expensive lessons:

  • Mature development processes: You need well-defined software development lifecycles. If your teams are still figuring out basic project management, AI won’t solve that chaos – it will accelerate it.
  • Quality documentation: AI’s Retrieval Augmented Generation (RAG) capabilities are only as good as the knowledge bases from which they draw. Garbage documentation produces garbage results – exponentially faster.
  • Platform foundations: Internal developer platforms with automated deployment pipelines aren’t just nice-to-have, they are essential for AI to operate safely at scale.
  • Continuous delivery: Automated testing and monitoring become critical when AI is generating code that needs human verification.
  • Security and governance: Perhaps most importantly, you need robust guard rails and verification processes. Why? Because AI-generated code fails in subtle, sophisticated ways.

The security paradox

Here’s another uncomfortable reality check: AI promises to improve software security, but early evidence suggests it’s creating new categories of vulnerabilities. Elliman estimates that around 40% of AI-generated code contains security flaws, and these aren’t obvious mistakes. They are sophisticated vulnerabilities that traditional scanning tools miss.

Moreover, AI systems themselves become attack vectors through prompt injection techniques that can manipulate the AI into generating malicious code or exposing sensitive information. We are not just automating code generation – we are introducing automated vulnerability.

What is the implication? Organisations need to invest more heavily in security expertise, not less. The ‘trust but verify’ principle becomes absolutely critical when dealing with AI-generated code that looks correct but behaves dangerously.

The agent revolution (and why it matters)

While the democratisation narrative crumbles under scrutiny, a more sophisticated evolution is emerging: agentic systems. These are not simple code generators – they are orchestrated networks of specialised AI personas working together on complex problems.

Imagine a coding agent collaborating with a testing expert and a requirements analyst – each brings different perspectives to bear on a development challenge. Agentic systems can work asynchronously, handling routine tasks while human experts focus on architectural decisions and creative problem-solving.

The key insight? These agents aren’t replacing human judgment – they are augmenting it by extending human capacity. They handle the routine, pattern-matching work that consumes developer time. This frees experts to focus on the complex systems-thinking that AI cannot replicate.

The maturity correlation

Perhaps the most sobering insight from Elliman’s research is the direct correlation between organisational software engineering maturity and AI benefits. Companies with excellent development practices are seeing dramatic productivity gains. Conversely, those with poor processes are, at best, seeing marginal improvements.

This creates a widening gap in the software development world. Well-run engineering organisations are becoming exponentially more effective, while poorly managed teams struggle to extract meaningful value from AI tools. The technology isn’t equalising – it’s amplifying endemic differences.

What this means for your strategy

If you’re a senior executive planning your organisation’s AI software strategy, these insights demand a fundamental reframing:

  • Stop chasing the democratisation dream. AI won’t eliminate the need for software engineering expertise. But it will certainly amplify the value of really good engineers whilst brutally exposing weak engineering organisations.
  • Invest in foundations first. Before deploying AI coding tools, ensure you have mature development processes, quality documentation, robust security practices, and automated testing pipelines.
  • Focus on augmentation, not replacement. Look upon AI applications not as a substitute for human expertise but as an amplifier of your highest value talents.
  • Prepare for new security challenges. AI-generated code requires new forms of verification and security scanning that your current tools probably don’t provide.
  • Embrace the complexity. Legacy system modernisation and business process automation remain complex challenges that require human insight, regardless of AI assistance.

The real revolution

The real AI revolution in software development isn’t about replacing human intelligence. Instead, it’s all about creating human-AI partnerships that leverage the strengths of both. AI excels at pattern recognition, code generation for well-defined problems, and handling routine tasks. Humans excel at systems thinking, architectural decision-making, and understanding complex business contexts.

Organisations that recognise this complementary relationship and invest in the foundations to support it will gain significant competitive advantages. Those that chase the mirage of AI-powered democratisation are likely to find themselves with expensive tools that amplify their existing problems rather than solve them.

The question isn’t whether AI will transform software development – it already has. The real question is whether your organisation will use it to amplify excellence or accelerate mediocrity. The choice, and the outcome, remain entirely in human hands.

__________________________

Ian Spencer is a founding partner of Clustre, The Solution Brokers.

Our special thanks to David Elliman – Chief of Software Engineering at Zuhlke – for his inspirational contribution to this article. If you would like to discuss any of the thoughts and messages in this article, David Elliman would be happy to help. Contact: robert.baldock@clustre.net

MORE INFO
FOLLOW
IN TOUCH
© 2026 Clustre, The Solution Brokers All rights reserved.
  • This field is for validation purposes and should be left unchanged.
  • We will use the data you submit to fulfil your request. Privacy Policy.