Saint or sinner – the state of AI in 2025

Navigating hype, reality and regulation.

Artificial intelligence now dominates headlines and many boardroom discussions. For senior leaders, understanding where we stand with AI technology – and where we’re heading – has never been more critical. At a recent industry briefing, experts from the advanced technology, law and defence sectors came together to share insights on AI’s current state, regulatory landscape, and practical applications. Their findings make very interesting reading.

In the beginning – the AI hype cycle.

From Alan Turing’s famous test in 1950 to today’s AI revolution, the path of artificial intelligence reads like a rollercoaster of human ambition and technological drama.

The story really begins in 1966 with ELIZA – imagine a digital therapist armed with nothing but simple pattern matching. It was more party trick than practical tool, but revolutionary for its time… a beguiling glimpse of how humans might interact with machines. Then came the AI winters.

The first winter (1973-1980) struck when the grand promises of rule-based AI crashed against the hard reality of limited computing power. Scientists were writing cheques their machines couldn’t cash. So, funding dried up faster than a desert mirage.

The second winter in the 1980s repeated the pattern, as expensive expert systems proved too rigid for real-world messiness. But like any good story, AI had its comeback moments.

In 1997, IBM’s Deep Blue delivered a shocking ‘checkmate’ by defeating chess champion Garry Kasparov. At the time, many viewed it as an ominous sign for humanity – today, though, we barely blink when computers beat us (yet again) at chess.

By 2011, IBM was back with Watson winning at ‘Jeopardy!’. This showcased AI’s growing mastery of language. But the real game-changer came in 2017 with the transformer architecture… the technological breakthrough that would eventually give us chatbots.

The arrival of ChatGPT, in 2022, changed everything. The ‘ChatGPT moment’ sparked unprecedented public interest. But was this phenomenon different from the usual hype cycle rollercoaster… high excitement plunging into deep disillusionment?

James Duez, CEO of Rainbird AI, thinks so – and here’s why…

“What makes this moment so different is the scale of investment and the potential market impact. The numbers are truly staggering. With 70 million professionals in the US alone earning an average of $100,000, even capturing a small percentage of this market through AI automation could create trillions in market value”. This explains why investors are pouring unprecedented resources into AI development.

The Productivity Phase: where exactly are we?

Despite the excitement around Artificial General Intelligence (AGI), we are currently in what Duez calls the ‘productivity phase’. Organisations are finding real value in:

  • Document summarisation and research assistance
  • Creative and planning support
  • Sales prediction
  • Regulatory compliance searching
  • Industry-specific applications.

This assessment is supported by data showing that 82% of senior executives report tangible value from AI implementations. However, we are still far from the autonomous, self-determining AI systems that capture public imagination.

Key challenges holding back AI adoption.

Several significant barriers are preventing broader AI adoption:

  1. Explainability and Trust.

Nearly half of surveyed executives cite explainability as a major concern. The ‘black box’ nature of many AI systems, combined with questions about training, data quality and potential bias, creates considerable hesitation among decision-makers.

  2. Data Quality Issues.

Over a quarter of organisations report having insufficient or untrustworthy data. Industry estimates suggest that up to 98% of available data contains some form of bias. Increasingly, AI systems are being trained on synthetic data rather than real-world information.

  3. Precision and Reliability.

Current generative AI systems are fundamentally pattern-matching machines, not logical reasoning engines. They can simulate explanations but cannot provide true causal reasoning. Inevitably, this leads to inconsistent results and potential hallucinations.

  4. Scaling Limitations.

The computational resources required for next-generation AI models are enormous. GPT-5 is estimated to need 30-100 times more computing power than its predecessor. This raises serious questions about the sustainability of current development approaches.

The Regulatory Landscape: A Complex Picture.

Richard Kemp, founder of Kemp IT Law, describes the current regulatory environment as “uncomfortably poised,” particularly in the UK, which sits between two contrasting approaches:

The EU Approach:

  • The AI Act entered into force in August 2024 and is being applied in phases
  • Detailed, specific regulations with clear categories of risk
  • Prohibited practices have been outlawed since February 2025
  • Comprehensive rules for high-risk AI systems

The US Approach:

  • No overarching federal regulation
  • Mix of state, federal, and industry measures
  • Focus on investment rules rather than direct regulation

The UK must navigate between these two models while maintaining its competitiveness in AI development. This balance is particularly crucial in areas like copyright law, where the government is consulting on new exceptions for AI training data.

Emerging Technologies to Watch.

Industry experts identified several key technologies gaining momentum:

  1. Retrieval Augmented Generation (RAG). This is currently showing strong practical value. RAG allows AI systems to search across organisational documentation while maintaining accuracy through citation requirements.
  2. Agentic AI. The concept of multiple AI agents self-orchestrating to complete tasks is gaining significant attention. However, experts caution this may be more rebranding of existing concepts than genuine innovation.
  3. Neurosymbolic AI. Combining traditional rule-based systems with modern machine learning approaches, neurosymbolic AI offers potential solutions to current limitations in precision and reliability.
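To ground the RAG description above, here is a minimal, illustrative sketch in Python – a toy bag-of-words retriever over a hypothetical document store, not any vendor’s actual API. A production system would use learned embeddings and a vector database, but the shape of the pipeline (retrieve the relevant documents, then require the model to cite them) is the same:

```python
import math
import re
from collections import Counter

# Toy document store standing in for organisational documentation.
# (Hypothetical content, for illustration only.)
DOCS = {
    "policy-12": "Expense claims above 500 GBP require director approval.",
    "policy-07": "All customer data must be stored in EU data centres.",
    "memo-03":   "The office closes early on the last Friday of each month.",
}

def bow(text):
    """Bag-of-words vector; a real RAG system would use learned embeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k most similar documents, with their IDs for citation."""
    q = bow(query)
    ranked = sorted(DOCS.items(), key=lambda kv: cosine(q, bow(kv[1])), reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Ground the model's answer in retrieved text and require citations."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return (f"Answer using ONLY the sources below, citing their IDs.\n"
            f"{context}\nQuestion: {query}")

prompt = build_prompt("Where must customer data be stored?")
print(prompt)
```

The citation requirement – forcing answers to reference retrieved document IDs – is what lets reviewers verify accuracy rather than trust the model.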

Case Study: AI in Defence Communications.

A practical example (cited by Origami Labs) demonstrates how combining different AI approaches can solve real-world problems. This company developed a system to convert battlefield radio communications into digital data, addressing several complex challenges:

  • High-noise environments affecting speech recognition
  • Specialised military vocabulary and syntax
  • Need for edge computing capability
  • Critical requirement for accuracy

The solution combined machine learning (ML) for initial transcription with traditional rule-based systems (RBS) for error correction and context understanding. This hybrid approach achieved better results than either ML or RBS could achieve alone.
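The hybrid pattern can be sketched in a few lines of Python. Everything below is illustrative – the lexicon, the ASR stand-in and the edit-distance correction rule are assumptions for demonstration, not Origami Labs’ actual system:

```python
import difflib

# Hypothetical military vocabulary standing in for the specialised lexicon
# a real system would load.
LEXICON = {"wilco", "roger", "over", "out", "grid", "alpha", "bravo", "charlie"}

def ml_transcribe(audio):
    """Stand-in for the ML stage: a speech model returning a noisy transcript."""
    return audio  # a real pipeline would run an ASR model on the audio here

def rule_correct(transcript):
    """Rule-based stage: snap out-of-vocabulary words to the closest lexicon term."""
    corrected = []
    for word in transcript.lower().split():
        if word in LEXICON:
            corrected.append(word)
        else:
            # Accept a correction only if it is a close match (cutoff=0.6);
            # otherwise keep the original word unchanged.
            match = difflib.get_close_matches(word, LEXICON, n=1, cutoff=0.6)
            corrected.append(match[0] if match else word)
    return " ".join(corrected)

noisy = ml_transcribe("rodger grid alfa bravo over")
print(rule_correct(noisy))  # snaps 'rodger' to 'roger' and 'alfa' to 'alpha'
```

The design point is the division of labour: the ML stage handles the messy acoustic signal, while the deterministic rules encode domain knowledge the model cannot be trusted to learn reliably.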

Key Lessons for Leaders:

  1. Focus on Specific Problems. Rather than chasing the latest AI trend, success comes from clearly defining specific business problems and selecting appropriate technologies to solve them.
  2. Consider Hybrid Approaches. Combining traditional AI methods with newer technologies often provides better results than relying solely on the latest innovations.
  3. Validate Technology Carefully. When implementing AI solutions, thoroughly examine all components, including third-party libraries and tools, to ensure security and reliability.

Case Study in Compliance and Risk Management.

A case study from the pharmaceutical industry (source: Kemp IT Law) demonstrates how AI can address complex regulatory challenges. The ‘Guard Rail’ system combines large language models with causal reasoning to ensure marketing materials comply with FDA regulations.

This application shows how AI can:

  • Automatically check content against approved claims
  • Provide specific feedback on compliance issues
  • Document decision-making processes
  • Reduce review cycles and resource requirements
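A toy sketch shows the shape of such a checker. The claim register, prohibited terms and `review` function below are hypothetical illustrations, not the actual ‘Guard Rail’ implementation:

```python
# Hypothetical approved-claims register and prohibited-terms list
# (illustrative stand-ins for a regulated claims database).
APPROVED_CLAIMS = {
    "reduces symptoms in adults",
    "once-daily dosing",
}
PROHIBITED_TERMS = {"cure", "guaranteed", "miracle"}

def review(sentence):
    """Return a compliance verdict plus the reasons, for audit documentation."""
    issues = []
    text = sentence.lower()
    for term in PROHIBITED_TERMS:
        if term in text:
            issues.append(f"prohibited term: '{term}'")
    if not any(claim in text for claim in APPROVED_CLAIMS):
        issues.append("no approved claim matched")
    return {"sentence": sentence, "compliant": not issues, "issues": issues}

print(review("Our treatment offers once-daily dosing."))
print(review("A guaranteed cure for everyone!"))
```

Returning the reasons alongside the verdict is what documents the decision-making process and shortens review cycles: a human reviewer sees exactly which rule fired, rather than a bare pass/fail.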

Looking Ahead.

As we move through 2025, several key trends are emerging:

  1. The focus is shifting from general-purpose AI to domain-specific applications that solve concrete business problems.
  2. Hybrid approaches combining different AI technologies are proving more effective than single-technology solutions.
  3. Regulatory frameworks are evolving rapidly, requiring organisations to stay agile in their compliance approaches.
  4. The limitations of current AI technologies are becoming clearer, leading to more realistic expectations and implementation strategies.

Recommendations for Senior Leaders.

  1. Focus on Narrow Applications. Rather than pursuing broad AI initiatives, identify specific business processes where AI can provide immediate value.
  2. Build Trust through Transparency. Implement AI systems that can explain their decisions, particularly in high-stakes applications.
  3. Invest in Data Quality. Address data quality issues before expanding AI implementations.
  4. Monitor Regulatory Developments. Stay informed about evolving AI regulations, particularly in your key markets.
  5. Consider Hybrid Solutions. Look for opportunities to combine different AI technologies to achieve better results.

The Clustre perspective.

The current state of AI presents major opportunities and challenges for organisations. While we may not be entering the age of artificial general intelligence just yet, significant value can be derived from current AI technologies when properly applied to specific business problems. Success requires a balanced approach that considers technological capabilities, regulatory requirements, and practical implementation challenges.

We were thinking of ending the article at this point. But that would only short-change readers. We must address the elephant in the room: the risk-versus-reward equation. Because for every avid apostle of AI, there are equally vocal detractors. And surprisingly, many of them are among the most influential thought-leaders in technology.

As you might expect, one of the global gurus of Artificial Intelligence is a fiercely loyal cheerleader for the cause:

“It is difficult to think of a major industry that AI will not transform. This includes healthcare, education, transportation, retail, communications, and agriculture. There are surprisingly clear paths for AI to make a big difference in all these industries”. Andrew Ng, Computer Scientist and Global Leader in AI

The leaders of Microsoft share the same positive view – but with a cautionary nod to corporate responsibility…

“This next generation of AI will reshape every software category and every business, including our own. Although this new era promises great opportunity, it demands even greater responsibility from companies like ours”. Satya Nadella, CEO of Microsoft

And AWS, as you might expect, is also caught up in the euphoria – or is it obsessive enthusiasm? – for AI…

“I have not seen this level of engagement and excitement (in generative AI) from customers, probably since the very, very early days of cloud computing”. Dr. Matt Wood, VP Artificial Intelligence at AWS

BUT – and it’s a big proviso – not everyone is a die-hard convert. From beyond the grave, Stephen Hawking’s words resonate with a sombre ring of warning…

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks”. Stephen Hawking

And where does one of the world’s greatest AI pioneers sit in the debate? Sam Altman certainly doesn’t mince his words…

“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies”. Sam Altman, CEO of OpenAI

For a final prophetic comment, we must turn to the most audacious entrepreneur of the tech era. Elon Musk is a self-proclaimed champion of free speech and uncensored social media. But his unfettered thinking doesn’t extend to AI. Musk readily admits that he is ‘increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish’. He expands on this concern with this dark warning:

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast. It is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most…

I mean with artificial intelligence we’re summoning the demon”. Elon Musk

Keir Starmer recently declared that Britain will become the global powerhouse in AI. But he conveniently omitted to mention that we are not alone. Rogue nations, state-sponsored terror cells and organised crime syndicates are also ‘in the game’. And they are extremely malign power-players who will blunt any regulatory attempt to control and constrain them.

Ultimately, the future of AI may be dictated by self-regulation. By the ethical boundaries agreed by commercial and industrial leaders across the world. It’s a daunting thought.

Ian Spencer is a founding partner of Clustre, The Solution Brokers

© 2025 Clustre, The Solution Brokers All rights reserved.