Oops!

Why the foundations of security and privacy are being sacrificed on the altar of progress.

In a world obsessed with AI innovation, the foundations of security and privacy are often sacrificed on the altar of progress. But what happens when the regulators come knocking with a €35 million fine? Or when your carefully constructed AI system collapses because of a single point of failure that no one mapped?

These sobering questions formed the backdrop of the recent Clustre Briefings Q2 session… a masterclass in the hidden risks that could derail even the most promising AI initiatives.

The regulatory tsunami is coming

As August 2025 approaches, the EU AI Act looms large on the horizon. With possible penalties reaching €35 million or 7% of global annual turnover, non-compliance isn’t just a risk – it’s an existential threat. And according to data expert Dan Klein, even if you’re operating solely in the UK – where post-Trumpian regulatory appetites have somewhat diminished – there’s no ‘get out of jail free’ card: “Organisations trading with Europe will need to comply with the EU AI Act regardless.”

What’s particularly concerning is the Act’s self-policing approach. Without a dedicated compliance body, organisations are left navigating murky waters alone. This breeds extreme caution that often stifles innovation. Many companies might simply decide that implementing AI in production environments isn’t worth the regulatory gamble.

But regulation is just one piece of the complex puzzle.

When your data gets poisoned

Imagine investing millions in an AI diagnostic system for your healthcare network, only to discover it’s been systematically misdiagnosing patients with specific demographic characteristics. This nightmare scenario isn’t hypothetical – it happened in 2024 when medical images were surreptitiously altered to manipulate AI outcomes.

“Poisoning the pool” is emerging as a critical threat vector, where malicious actors deliberately corrupt training data to sabotage AI systems. And it’s not just about data poisoning. The EU AI Act highlights additional security imperatives:

  • Preventing inference attacks that could reveal personal data
  • Defending against prompt engineering exploits
  • Maintaining comprehensive risk management frameworks.

These requirements aren’t merely compliance checkboxes; they are essential safeguards against potentially catastrophic failures.
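The Act doesn’t prescribe how to meet these safeguards, and no implementation was shared at the briefing, but a tamper-evident manifest of training records is a common first line of defence against data poisoning. A minimal, hypothetical sketch in Python (the record structure and file names are invented for illustration):

```python
import hashlib
import json

def fingerprint(records):
    """Return one SHA-256 digest per training record - a tamper-evident manifest."""
    return [
        hashlib.sha256(json.dumps(r, sort_keys=True).encode()).hexdigest()
        for r in records
    ]

# Hypothetical medical-imaging labels: snapshot the manifest at ingestion time...
baseline = fingerprint([{"image": "scan_001", "label": "benign"}])
# ...and recompute it immediately before each training run.
current = fingerprint([{"image": "scan_001", "label": "malignant"}])

if baseline != current:
    print("Training data changed since ingestion - investigate before training.")
```

Any silent relabelling or pixel-level alteration changes the digest, so poisoned records are caught before they reach the model rather than after a misdiagnosis.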

Beyond failure: mapping the path to success

Traditional risk management focuses on how systems fail. But security specialist Andy Clark offers a paradigm shift: dependency mapping that examines how systems succeed.

Unlike conventional fault tree analysis, dependency mapping:

  • Measures the probability of success rather than failure
  • Takes a business-driven rather than technology-driven approach
  • Employs Bayesian networks to analyse dependencies between system components
  • Identifies critical sensitivity points throughout the system.

In one compelling example, Clark demonstrated how replacing a single point of failure (central staff) with redundant resources improved system availability from 63% to 74% during the Covid lockdown period – a significant enhancement achieved entirely through smarter resource allocation, with no additional investment.
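Clark’s Bayesian model and exact figures aren’t reproduced here, but the arithmetic behind his redundancy point can be sketched with a toy serial-dependency model (all probabilities are hypothetical, and the components are assumed independent):

```python
def series_success(probs):
    """A chain of dependencies succeeds only if every link succeeds."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def with_redundancy(p, copies=2):
    """At least one of `copies` independent replicas succeeds."""
    return 1.0 - (1.0 - p) ** copies

# Hypothetical: two solid components plus one fragile single point of failure.
baseline = series_success([0.9, 0.9, 0.8])                   # ~0.65
improved = series_success([0.9, 0.9, with_redundancy(0.8)])  # ~0.78

print(f"{baseline:.0%} -> {improved:.0%}")
```

Duplicating only the weakest link lifts the whole chain’s success probability; a full Bayesian network generalises this by handling dependencies that aren’t independent.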

This approach helps leadership teams understand dependencies across all business components, not just IT infrastructure. When executives are able to visualise how success in one area depends on seemingly unrelated elements, they can make far more astute strategic resource decisions.

The ticking clock: why time is your most critical variable

The 2024 Heathrow Airport IT failure serves as a stark reminder that sometimes it’s not the failure itself that causes catastrophe – it’s the recovery time. As Klein noted, “The issue wasn’t just the failure, but the 24-hour recovery period that could have been anticipated with proper time-based dependency mapping.”

During his tenure at BT, Klein translated “five nines” availability (99.999%) into a concrete metric everyone could understand: barely five minutes of downtime per year. This time-based approach cuts through technical jargon, providing executives with practical planning frameworks: “What will we do if AWS is down for five minutes? Five hours? Five days?”
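The translation is simple arithmetic – an availability percentage converted into permitted annual downtime – and is easy to reproduce (a 365.25-day year is assumed):

```python
# Convert an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

def downtime_minutes(availability_pct):
    """Annual downtime (in minutes) permitted at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for nines in (99.9, 99.99, 99.999):
    # 99.999% works out to roughly five minutes per year.
    print(f"{nines}% -> {downtime_minutes(nines):.1f} min/year")
```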

Without this time dimension, dependency mapping remains an academic exercise rather than a practical business tool.

Breaking down data silos without breaking privacy

Perhaps the most intriguing development presented at the Clustre Briefing came from Freya De Mink of Roseman Labs, who showcased encrypted computing technology that allows organisations to analyse sensitive data owned by multiple parties without ever decrypting it.

This breakthrough addresses a fundamental paradox: the most valuable data for AI advancement often remains locked away due to regulatory constraints or commercial sensitivity. De Mink illustrated this with a case study where a plastics manufacturer and component maker collaborated on quality optimisation without either party revealing proprietary information.
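Roseman Labs’ actual technology isn’t detailed in the talk, but one building block behind “compute without decrypting” is additive secret sharing: each party splits its value into random-looking shares, and arithmetic is done on the shares alone. A toy Python illustration (values and party roles are invented; a real deployment would use a cryptographically secure random source and an MPC protocol, not this sketch):

```python
import random

PRIME = 2**61 - 1  # arithmetic is done modulo a large prime field

def share(secret, n=2):
    """Split a secret into n additive shares that sum to it mod PRIME."""
    # Toy only: production code would draw these from a CSPRNG (e.g. secrets).
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reveal(shares):
    """Recombine shares to recover the underlying value."""
    return sum(shares) % PRIME

# Each party shares its private figure; no single share reveals anything.
a_shares = share(120)  # e.g. the plastics manufacturer's measurement
b_shares = share(80)   # e.g. the component maker's measurement

# Parties add corresponding shares locally - computing on hidden data.
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print(reveal(sum_shares))  # 200, without either raw input being disclosed
```

Each share on its own is uniformly random, so neither party learns the other’s input; only the agreed joint result is ever reconstructed.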

With Gartner predicting that domain-specific data in generative AI models will grow from 1% today to over 50% by 2027, encrypted computing may be the key that unlocks these data treasures while maintaining privacy and security.

The executive imperative: four actions you can’t postpone

Waiting for government guidance is a luxury few organisations can afford. Forward-thinking executives should seize the initiative and focus on four immediate imperatives:

  1. Secure by design: Build security into AI systems right from inception – even during prototyping phases.
  2. Map dependencies that create success: Look beyond failure modes to understand the complete ecosystem that enables positive outcomes.
  3. Link data together safely: Leverage privacy-enhancing technologies and encryption when sharing sensitive information.
  4. Scrutinise foundation models: Understand the security underpinnings of any foundation models you incorporate, including potential training data biases and manipulation vectors.

From compliance burden to competitive advantage

Security and privacy are too precious to be sacrificed on the altar of progress. The organisations that thrive in this new landscape will not see them as compliance burdens; they will recognise them as strategic differentiators. By embedding these principles into their AI initiatives from the ground up, they will not only avoid regulatory penalties but also build more resilient, trustworthy systems – ones that can confidently withstand the inevitable challenges heading our way.

In a world where data is both an opportunity and a liability, the path to sustainable AI innovation isn’t through regulatory shortcuts or security afterthoughts. It’s through embracing the complex interdependencies that determine ultimate success.

As your competitors scramble to retrofit security into their AI systems before the regulatory hammer falls in August 2025, you could be capitalising on the robust foundation you’ve already built. The choice – and the competitive advantage it confers – is yours.

Ian Spencer is a founding partner of Clustre, The Solution Brokers.

© 2025 Clustre, The Solution Brokers All rights reserved.