Faithful or Traitor…

Is AI to be trusted?

Three world-class experts – Jason Ward, Richard Kemp, and Dan Klein – recently shared a stage to explore AI trust barriers, regulatory challenges, and practical solutions for full-scale deployment of AI. This article is indebted to their deep insights…

In the BBC series ‘The Traitors’, contestants confront one existential question: who can you trust? The faithful must identify the traitors; the traitors must deceive the faithful.

It’s a parody of business life. Replace the castle with a corporate boardroom. Swap the contestants for AI. And you have the precise dilemma facing every organisation. Can you trust AI?

The uncomfortable answer is that very few do. According to MIT research, only 5% of AI projects ever reach production and deliver meaningful value. The other 95% chew through budgets and deliver nothing.

This isn’t a technology issue. It’s a trust problem. And here’s the irony…

While legitimate businesses are paralysed by questions of trustworthiness, organised crime is deploying AI with a casual confidence born of indifference. They respect no regulations. No constraints. They’ve decided their AI systems are faithful enough, so they’re off and running. Ironically, organised crime is displaying a pragmatic honesty missing from most legitimate organisations.

The Traitors Among Us

AI and data guru Jason Ward delivered some profoundly uncomfortable truths. He identified four common traitors and revealed why we struggle to know which AI to trust (because there is much more to AI than mere LLMs).

  • The Black Box Traitor: Large language models are the perfect traitors. They respond with unshakeable certainty while lacking any transparency in their decision-making. When you ask how they arrived at a decision, they will offer a dismissive: ‘we processed 175 billion parameters’. Although technically accurate, this response will not satisfy regulators who want to know exactly why you sanctioned a substantial loan or denied someone’s mortgage application.
  • The Hidden Bias Traitor: Models contain biases from their training data. They readily absorb prejudices, assumptions, and blind spots. They sound plausible, but their foundations are often rotten to the core.
  • The False Confidence Traitor: AI systems present incorrect answers with stunning confidence and speed. This is creating an ‘integrity crisis’ in, for instance, academic publishing. AI-generated papers flood journals with content that reads beautifully, cites impressively, but lacks any intellectual honesty and integrity.
  • The Governance Traitor: Even when you have identified a faithful AI system, getting it safely into production – without creating governance nightmares – can prove extraordinarily difficult.

Who Can You Trust?

Remember, not all AI is well developed and proven… so be discerning, very discerning.

  • The Faithful: Computer Vision

Computer vision – the use of machine and deep learning to interpret and understand information from images and videos – has spent six years maturing. Thanks to published white papers, independent verification, and performance evaluations, it is now production-ready. 2025 is the year it really ‘came of age’, with enterprises deploying it at scale. This is a true Faithful.

  • The Uncertain: Generative AI

Large language models are still at the enthusiastic undergraduate stage – brilliant one moment, spectacularly wrong the next, and overly confident about almost everything. According to the METR study, true autonomy requires 99% accuracy. LLMs won’t achieve mass enterprise deployment until at least 2028–2030. In areas such as legal discovery – where errors have serious consequences – it may well take much longer.

The practical implication is stark: organisations are throwing money at chatbots (the uncertain teenagers) while ignoring computer vision (the reliable adults) that could solve many of their problems today.

The Trust Paradox: We’re All Traitors Here

Dan Klein – partner in Data and AI at AlixPartners – delivered the most uncomfortable truth: organisations claim 100% accuracy in financial reconciliation but actually operate on probabilities. The Bank of New York Mellon, for example, accepts transaction deltas under $50 – probabilities, not certainties.

Paradoxically, these same organisations refuse to trust AI because it cannot guarantee 100% accuracy. Major institutions have built their entire business infrastructure on ‘acceptable margins of error’ – errors they have largely failed to document. Yet that doesn’t stop them demanding perfection from their algorithms.
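To make Klein’s point concrete, here is a toy Python sketch of tolerance-based reconciliation. All names, amounts, and the matching logic are invented for illustration; only the $50 figure comes from the example above. The point is that any delta under the tolerance is silently absorbed – the ‘100% reconciled’ ledger is really a probabilistic one.

```python
TOLERANCE = 50.00  # hypothetical acceptable delta per transaction, in dollars

def reconcile(internal, counterparty, tolerance=TOLERANCE):
    """Return (matched, exceptions): pairs within tolerance vs. flagged deltas."""
    matched, exceptions = [], []
    for ref, amount in internal.items():
        other = counterparty.get(ref)
        if other is None:
            exceptions.append((ref, amount, None))       # no counterpart at all
        elif abs(amount - other) < tolerance:
            matched.append((ref, amount - other))        # delta silently absorbed
        else:
            exceptions.append((ref, amount, other))      # flagged for investigation
    return matched, exceptions

internal = {"TX1": 10_000.00, "TX2": 2_500.30, "TX3": 780.00}
counterparty = {"TX1": 10_000.00, "TX2": 2_440.30, "TX3": 779.99}

matched, exceptions = reconcile(internal, counterparty)
# TX2 differs by $60 and is flagged; TX3's $0.01 delta is silently absorbed.
```

The undocumented part is exactly what Klein highlights: nobody records how much error the `matched` pile quietly contains.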

Klein argues that businesses must be transparent about their probability-based operations. Workflow documentation is very often flawed: it rarely captures how error is measured at critical stages. Organisations must document error rates and understand what drives uncertainty in their processes before they are ready for AI.

Klein likens it to discovering your house has been standing on shaky foundations for decades while you’ve been criticising the architect’s plans for a new porch.

The Regulatory Maze

Richard Kemp – technology law expert and CEO of Kemp IT Law – outlined a regulatory landscape that makes the convoluted ‘Traitors’ plot look straightforward.

GDPR casts its shadow over everything. Then there’s the AI Act, cybersecurity rules, operational resilience requirements, and geographic fragmentation. Nearly every US state has its own draft AI legislation. China has its own – strictly national – framework. And, of course, there’s the UK – a nation which, despite its AI superpower aspirations, remains a regulatory laggard.

Our legal systems are ponderous. They struggle with the balance of probabilities when dealing with black box AI. By the time court cases are resolved – and this often takes three years or longer – the technology has moved on. It’s judicial archaeology.

The ‘Shadow IT’ Problem: The Invisible Traitors

Critical processes like month-end reconciliation often run on undocumented Excel workflows. Three people understand them. One person maintains them. When that person goes on holiday, everyone holds their breath.

This is ‘Shadow IT’ – unofficial, undocumented, absolutely essential systems.

When data governance is largely absent… vital infrastructures are so frail… stretched staff have no formal AI training… is this really the right moment to introduce a radical and disruptive new element to the mix?

Building Trust: Practical Solutions

Time for some practical guidance. Let’s start with 10 imperatives specifically designed to foster trust and confidence:

  1. Keep Humans in the Loop: Implement systems with explicit editorial control and human oversight. Have trusted humans verify outputs before critical decisions are made.
  2. Document Everything: Understand and document workflows all the way back to data processing. Include measurement confidence and error where appropriate. You can’t identify traitors if you don’t know who’s in the room.
  3. Deploy the Faithful: Computer vision is ready for enterprise deployment today. Stop obsessing about chatbots and deploy proven technology.
  4. Win Through Transparency: Be open about AI use, especially for health and safety monitoring. Strategic honesty builds trust.
  5. Put Data Security First: Implement governance tools before widespread AI adoption.
  6. Maintain Research Capability: Investigate models continuously. Hedge funds have had research divisions for decades. Lean into uncertainty and learn from it. Understanding errors will help you to select the right models.
  7. Red Team Your Systems: Have someone play ‘bad actor’ to identify weaknesses.
  8. Seek Multiple Sources: Never rely on one data source. Multiple sources provide verification, especially for critical decisions.
  9. Negotiate While You Can: Competition among AI suppliers means favourable terms are available – now. Seek unlimited liability for certain risks. This window won’t last forever.
  10. Consider Extra Insurance: Evaluate AI-specific coverage. Sometimes the smartest move is to insure against getting it wrong.

Critical Takeaways: Your Trust Framework

What boxes must you tick to build an AI trust framework? Here are your expert takeaways…

  • Distinguish AI types: Computer vision is production-ready but generative AI needs more time. Hybrid AI solutions can help overcome the limitations of specific technologies such as LLMs.
  • Accept imperfection: No system is 100% accurate. Always document and communicate confidence and probability, not fictional certainty.
  • Document everything: Workflows must be understandable and reproducible, with a view of measurement confidence and error where appropriate.
  • Win through transparency: Build organisational capability through understanding. Be open with staff about AI use for health, safety, and compliance.
  • Prepare, don’t panic: Build capability now – mature AI technologies offer immediate value. Prepare pipelines and workflows for future LLM deployment.
  • Accept that regulation lags technology: Create practical compliance frameworks rather than waiting for perfect clarity.
  • Learn from audit practice: Auditors accept statistical sampling to establish probability – so should you. Use this precedent to justify AI adoption.

The Final Round: Who Wins?

The greatest barrier to AI adoption isn’t the technology. It’s not even the regulations, chaotic though they are. The real hurdle is our hard-wired failure to recognise how business actually operates.

We’ve been playing ‘Traitors’ – but, in truth, we were the traitors all along. We have lied to ourselves about our own reliability while demanding perfect faithfulness from AI systems.

The core question is not whether you should deploy AI. It’s whether you will be honest about your current operation – and discerning enough about AI maturity – to deploy it successfully.

Rogue states and criminals have already decided. They operate on balanced probabilities. They invest in research, investigate models continuously, and build clever reverse engineering capabilities. They are agile, adaptive, and pragmatic.

Right now, a tiny minority of organisations are running successful projects. The vast majority are so frozen with uncertainty, they cannot decide who or what to trust.

Which category are you in?

Ian Spencer is a founding partner of Clustre, The Solution Brokers

© 2025 Clustre, The Solution Brokers All rights reserved.