
Three world-class experts – Jason Ward, Richard Kemp, and Dan Klein – recently shared a stage to explore AI trust barriers, regulatory challenges, and practical solutions for full-scale deployment of AI. This article is indebted to their deep insights…
In the BBC series ‘The Traitors’, contestants confront one existential question: who can you trust? The faithful must identify the traitors; the traitors must deceive the faithful.
It’s a parody of business life. Replace the castle with a corporate boardroom. Swap the contestants for AI. And you have the precise dilemma facing every organisation. Can you trust AI?
The uncomfortable answer is that very few do. According to MIT, only 5% of AI projects ever reach production and deliver meaningful value. The remaining 95% chew through budgets and deliver nothing.
This isn’t a technology issue. It’s a trust problem. And here’s the irony…
While legitimate businesses are paralysed by questions of trustworthiness, organised crime is deploying AI with a casual confidence born out of indifference. They respect no regulations. No constraints. They’ve decided their AI systems are faithful enough, so they’re off and running. Ironically, organised crime is displaying a pragmatic honesty missing from most legitimate organisations.
AI and data guru Jason Ward delivered some profoundly uncomfortable truths. He identified four common traitors and revealed why we struggle to know which AI to trust (because there is much more to AI than mere LLMs).
Remember, not all AI is well developed and proven… so be discerning, very discerning.
Computer vision – the use of machine and deep learning to interpret and understand information from images and videos – has spent six years maturing. Thanks to published white papers, independent verification, and performance evaluations, it is now production-ready. 2025 is the year it really ‘came of age’, with enterprises deploying it at scale. This is a true Faithful.
Large language models are still at the enthusiastic undergraduate stage – brilliant one moment, spectacularly wrong the next, and overly confident about almost everything. According to the METR study, true autonomy requires 99% accuracy. LLMs won’t achieve mass enterprise deployment until at least 2028–2030. In areas such as legal discovery – where errors have serious consequences – it may well take much longer.
The practical implication is stark: organisations are throwing money at chatbots (the uncertain teenagers) while ignoring computer vision (the reliable adults) that could solve many of their problems today.
Dan Klein – partner in Data and AI at AlixPartners – delivered the most uncomfortable truth: organisations claim 100% accuracy in financial reconciliation but actually operate on probabilities. The Bank of New York Mellon, for example, accepts transaction deltas under $50 – in effect, it operates on probabilities, not certainties.
Paradoxically, these same organisations refuse to trust AI because it cannot guarantee 100% accuracy. Major institutions have built their entire business infrastructure on ‘acceptable margins of error’ – margins they have never documented. But that doesn’t stop them demanding perfection from their algorithms.
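The tolerance-based matching Klein describes can be sketched in a few lines. This is a hypothetical illustration, not any institution’s actual process; the threshold, transaction IDs, and figures are assumptions borrowed from the $50-delta example above.

```python
# Hypothetical sketch of tolerance-based reconciliation: two ledgers are
# compared, and small differences are accepted rather than investigated.
# All names and values here are illustrative assumptions.

TOLERANCE = 50.00  # accepted per-transaction delta, in dollars (assumed)

def reconcile(ledger_a: dict[str, float], ledger_b: dict[str, float]) -> dict[str, list[str]]:
    """Classify shared transaction IDs as matched, within tolerance, or breaks."""
    result: dict[str, list[str]] = {"matched": [], "within_tolerance": [], "breaks": []}
    for txn_id in ledger_a.keys() & ledger_b.keys():
        delta = abs(ledger_a[txn_id] - ledger_b[txn_id])
        if delta == 0:
            result["matched"].append(txn_id)
        elif delta < TOLERANCE:
            # Accepted as 'good enough' – a probabilistic judgement, not certainty
            result["within_tolerance"].append(txn_id)
        else:
            result["breaks"].append(txn_id)  # escalate for investigation
    return result

report = reconcile(
    {"T1": 1000.00, "T2": 250.10, "T3": 99.99},
    {"T1": 1000.00, "T2": 250.00, "T3": 500.00},
)
```

Note that even this toy version makes the accepted margin explicit in one named constant – precisely the documentation Klein says is usually missing. (A production version would use `decimal.Decimal` rather than floats for currency.)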
Klein argues that businesses must be transparent about their probability-based operations. Workflow documentation is very often flawed: it lacks any measurement of error at critical stages. Organisations must document error rates – and understand what drives uncertainty in their processes – before they are ready for AI.
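As a minimal sketch of what that recommendation might look like in practice – every stage name and count below is invented for illustration – error rates could be recorded per workflow stage rather than assumed away:

```python
# Hypothetical per-stage error-rate record for a month-end process.
# Stage names and counts are illustrative assumptions, not real data.
from dataclasses import dataclass

@dataclass
class StageErrorRecord:
    stage: str
    items_processed: int
    errors_observed: int

    @property
    def error_rate(self) -> float:
        """Observed error rate at this stage."""
        return self.errors_observed / self.items_processed

records = [
    StageErrorRecord("data_ingest", 10_000, 42),
    StageErrorRecord("matching", 9_958, 117),
    StageErrorRecord("manual_review", 117, 3),
]

for r in records:
    print(f"{r.stage}: {r.error_rate:.2%}")
```

Once such a baseline exists, an AI system only has to beat the documented human error rate at each stage – a far more honest benchmark than demanding perfection.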
Klein likens it to discovering your house has been standing on shaky foundations for decades while you’ve been criticising the architect’s plans for a new porch.
Richard Kemp – technology law expert and CEO of Kemp Law IT – outlined a regulatory landscape that makes the convoluted ‘Traitors’ plot look straightforward.
GDPR casts its shadow over everything. Then there’s the AI Act, cybersecurity rules, operational resilience requirements, and geographic fragmentation. Nearly every US state has its own draft AI legislation. China has its own – strictly national – framework. And, of course, there’s the UK – a nation which, despite its AI superpower aspirations, remains a regulatory laggard.
Our legal systems are ponderous. They struggle with the balance of probability when dealing with black box AI. By the time court cases are resolved – and this often takes three years or longer – the technology has moved on. It’s judicial archaeology.
Critical processes like month-end reconciliation often run on undocumented Excel workflows. Three people understand them. One person maintains them. When that person goes on holiday, everyone holds their breath.
This is ‘Shadow IT’ – unofficial, undocumented, absolutely essential systems.
When data governance is largely absent… vital infrastructures are so frail… stretched staff have no formal AI training… is this really the right moment to introduce a radical and disruptive new element to the mix?
Time for some practical guidance. Let’s start with 10 imperatives specifically designed to foster trust and confidence:
What boxes must you tick to build an AI trust framework? Here are your expert takeaways…
The greatest barrier to AI adoption isn’t the technology. It’s not even the regulations, chaotic though they are. The real hurdle is our hard-wired failure to recognise how business actually operates.
We’ve been playing ‘Traitors’ – but, in truth, we were the traitors all along. We have lied to ourselves about our own reliability while demanding perfect faithfulness from AI systems.
The core question is not whether you should deploy AI. It’s whether you will be honest about your current operation – and discerning enough about AI maturity – to deploy it successfully.
Rogue states and criminals have already decided. They operate on balanced probabilities. They invest in research, investigate models continuously, and build clever reverse-engineering capabilities. They are agile, adaptive, and pragmatic.
Right now, a tiny minority of organisations are running successful projects. The vast majority are so frozen with uncertainty, they cannot decide who or what to trust.
Which category are you in?

Ian Spencer is a founding partner of Clustre, The Solution Brokers