“AI governance is a chess game, played on multiple tables, each to totally different rules, all at the same time…

Boards should be very worried!”

Your company has invested years and millions of pounds achieving GDPR compliance. Your legal team deserves recognition for flawless execution. But why do so many customers feel deeply frustrated? Why are security protocols blocking their access to personal data? Why is trust in brands ebbing, not rising?

The disconnect between legal compliance and stakeholder satisfaction is a fundamental governance challenge. But why has this become such a global problem? In a recent Clustre Briefing for business leaders, AI and data specialist Dan Klein and technology lawyer Richard Kemp explained why traditional governance approaches are failing – and what savvy business leaders are doing about it.

The compliance trap that’s costing customers

“Legal teams excel at identifying prohibitions but utterly fail at delivering customer expectations”. Klein’s blunt assessment cuts straight to the heart of the GDPR implementation problem. While organisations obsessed over privacy notices and consent mechanisms, they missed the transformational opportunity to give customers meaningful control over their own data. The inevitable result was technical compliance with precious little customer trust or competitive differentiation.

This legal-centric mindset becomes particularly problematic in sectors where stakeholder engagement is paramount. Healthcare is a case in point – and Klein can speak from personal experience. After a recent operation, he was denied access to his own medical scans. And he is not alone. Patients routinely discover they have no ownership of their medical scans, nor any meaningful access to their health data. It is a painful reality that undermines the very tenets of the unspoken patient/doctor contract.

Research consistently demonstrates patient willingness to share data for medical research… providing they receive transparency and access to their own information. Yet regulatory complexity and risk-averse legal frameworks create barriers to the very outcomes stakeholders expect and demand.

A self-regulatory ‘Wild West’

The challenge intensifies exponentially with AI systems. The EU’s AI Act – effective from August 2025 – represents the latest regulatory response to rapidly evolving technology. However, companies are already engineering ‘workaround’ classifications. To minimise the compliance burden, they are positioning their AI systems as ‘limited risk’ rather than ‘high risk’. Big mistake.

Kemp has identified a fundamental flaw in the current AI governance architecture: Self-Regulation. Unlike established safety frameworks where independent experts assess risks, AI developers essentially self-certify their own systems. This contrasts sharply with proven safety protocols – such as ALARP (As Low As Reasonably Practicable) – where independent oversight ensures constant accountability.

And the risks of self-regulation are already self-evident…

The Boeing 737 MAX crisis provides a sobering precedent. When the expert community developing technology also regulates it, catastrophic blind spots emerge. This systemic flaw is self-replicating across AI development, creating risks that traditional governance frameworks simply cannot address.

Navigating the global regulatory maze

“AI governance is a chess game, played on multiple tables, each to totally different rules, all at the same time”. Kemp’s headlining metaphor is a worrying reality for multinationals. The regulatory landscape resembles a complex strategic game played across multiple jurisdictions – simultaneously. EU regulations extend globally, affecting any organisation touching European data or customers. Meanwhile, fragmented implementation across countries creates a compliance complexity that challenges even the most sophisticated legal operations.

Caught between EU regulatory frameworks, US market-based approaches, and Chinese state control, post-Brexit Britain now has diminished influence at every negotiating table. Interestingly, though, the direction of travel for global regulation may be changing. China’s car industry offers a glimpse into an alternative regulatory future. Rather than legal frameworks, Chinese authorities now require cars to transmit 300 data points per second, achieving surveillance objectives through technical mandates. Watch this space.

Digital speed collides with shuffling legal systems

Perhaps the most fundamental challenge is temporal misalignment…

Technology accelerates at warp speed while legal systems shuffle at institutional pace. Kemp calculates that – from initial court challenge to judicial ruling – our ponderous legal process is already 5 to 10 years behind tech progress. And it's getting worse. Digital is evolving faster than our legal system's capacity to adapt.

Regulation is in perpetual catch-up. This leaves businesses in a limbo of uncertainty and stakeholders dangerously unprotected. But it could be so different. The COVID-19 pandemic showed us the art of the possible. When the urgency is critical and real, we can sidestep legal barriers… deliver life-saving innovation… and balance the needs of multiple stakeholders. If the stakes are high enough, anything is possible.

The collaborative solution

Companies are getting the message. Many successful organisations have removed the absolute veto over innovation that legal teams have long held. Instead, they're implementing what Klein calls "stakeholder integration from day one".

Legal, compliance, security, and – most critically of all – end users are brought into the team right from project inception. No single discipline dominates decision-making. User research, backed by extensive video interviews, can then reveal actual customer needs, not assumed requirements. And risk management can evolve continuously, guided by best practice rather than rigid compliance checklists.

The results speak for themselves. During the pandemic, research organisations that embraced this collaborative approach achieved remarkable and rapid innovation while maintaining appropriate safeguards. They discovered that, when you truly understand user needs and integrate diverse expertise, you can often exceed both legal requirements and customer expectations.

The educational divide

Let’s now pause the narrative. We need a reality check.

Earlier we quoted Klein criticising the shortcomings of legal teams: "(they) excel at identifying prohibitions but utterly fail at delivering customer expectations". His harsh judgement is correct. But it is also understandable. A basic skills gap is plaguing regulatory progress: engineers lack commercial and legal awareness, while lawyers lack technical understanding. This educational divide compounds and amplifies governance challenges. Decisions are being made by people who cannot fully grasp either the technology or its business implications.

But there are grounds for optimism. Leading organisations are now tackling this issue through cross-functional education and embedded expertise. Siloes of narrow excellence are being replaced by new hybrid roles. Multi-disciplinary teams now include members who work seamlessly across technical, legal and business areas.

The path forward

The message for senior leaders is clear: traditional legal-first approaches to data and AI governance are not working. They are, in fact, counterproductive. Success requires three fundamental shifts:

  • First. Conduct extensive user research. Understand what customers actually want from your data practices, not what you assume they want or what regulations demand. This often reveals opportunities to exceed both legal requirements and customer expectations.
  • Second. Integrate all stakeholders from project inception. Legal, compliance, security, and user representatives should be embedded in teams from day one, with no single group holding veto power over innovation.
  • Third. Adopt adaptive risk management. Rather than rigid compliance checklists, implement frameworks that evolve with best practices and technological capabilities. Think ALARP for the AI age.

Business leaders are right to be worried. We are playing an increasingly complex game where the global rules are often conflicting and even contradictory. But the companies that manage to balance the priorities will emerge more agile, sustainable, compliant, and competitive.

The key question isn’t whether your current approach is legally compliant. It’s whether it’s actually working for the people you serve. If the answer is no, it’s time for a fundamental rethink… before your competitors figure it out first.

Ian Spencer is a founding partner of Clustre, The Solution Brokers.

Our special thanks to Dan Klein – Partner in Data and AI at AlixPartners – and Richard Kemp – CEO of Kemp Law IT – for their inspirational contribution to this article. If you would like to discuss any of the thoughts and messages in this article, please contact Clustre’s founder and MD: robert.baldock@clustre.net

© 2025 Clustre, The Solution Brokers All rights reserved.