The numbers are stark, and they tell a story of reckless optimism: 13% of organisations have reported breaches of AI models or applications, according to IBM’s 2025 Cost of a Data Breach Report. Even more alarming, 8% of organisations don’t even know if they’ve been compromised.
And here’s the kicker—97% of those that were breached didn’t have AI access controls in place. Read that again. Ninety-seven per cent.
It’s the corporate equivalent of leaving your front door wide open, posting your address online, and being shocked when someone walks in and takes your stuff.
Welcome to the AI security crisis of 2025, where innovation has lapped governance by about three years and counting.
The Shadow Economy Running Your Office
Let’s talk about what’s really happening in your organisation right now—whether you know it or not. Across the healthcare, manufacturing, and financial services sectors, shadow AI tool usage surged more than 200% year over year.
Shadow AI, for the uninitiated, is the unauthorised use of AI tools by employees who’ve decided they can’t wait for IT to catch up with 2025.
71% of UK employees have used an unsanctioned AI tool at least once, and over half continue to do so weekly. In the U.S., the picture is equally grim: 46% of office workers—including IT professionals who understand the risks—use AI tools their employers didn’t provide.
68% of employees use personal AI accounts at work, with 57% inputting sensitive data. That marketing analyst using ChatGPT to draft campaign copy?
The finance associate experimenting with an LLM to forecast revenue? The developer automating ticket updates through a private API? They’re all part of an invisible infrastructure that’s quietly bypassing your enterprise’s formal control structure.
And it’s not just happening at tech giants. Companies with just 11–50 employees showed the densest usage, averaging 269 unsanctioned AI tools per 1,000 employees—roughly 27% of employees actively using them.
Smaller companies, often with zero dedicated security staff, are drowning in unregulated AI whilst lacking any means to see it, let alone control it.
The Half-Million Dollar Mistake
The financial consequences of this laissez-faire approach are brutal. Organisations that experienced cyberattacks due to shadow AI faced costs averaging $670,000 more than breaches at firms with little or no shadow AI.
That’s more than half a million dollars in additional damage because someone in accounting thought their personal ChatGPT account was fine for crunching quarterly numbers.
60% of AI-related security incidents led to data being compromised, and 31% resulted in operational disruption.
Meanwhile, 63% of breached organisations had no governance policy or were still developing one. Of those that claimed to have policies, fewer than half had an approval process for AI deployments, and 62% failed to implement strong access controls.
It’s a governance vacuum, and nature—or in this case, hackers—abhors a vacuum.
What Gets Compromised? Everything That Matters
The type of data flowing into these unsecured AI systems should terrify any CISO worth their salt. 34.8% of all corporate data that employees input into AI tools is classified as sensitive—up from 27.4% a year ago and more than triple the 10.7% observed two years ago.
The most common types of sensitive data employees put into AI are source code (18.7% of sensitive data) and R&D materials (17.1%).
When shadow AI is in play, things get even worse: breaches involving shadow AI compromised personally identifiable information in 65% of cases and intellectual property in 40%, against global averages of 53% and 33% respectively.
45% of AI breaches came from malware in models pulled from public repositories, whilst 33% originated from chatbots, and 21% from third-party applications. The attack vectors are multiplying faster than security teams can catalogue them.
The Long Con: Shadow AI Isn’t Going Anywhere
Here’s where things get particularly insidious. This isn’t a phase. Two particular shadow AI tools had median usage durations of about 403 and 401 days respectively—well over a year of continuous use without formal approval or oversight.
After 100+ days of continuous use, an AI tool isn’t a trial anymore—it’s embedded in core business processes and daily workflows.
At that point, trying to remove it isn’t just an IT task; it’s a business disruption waiting to happen. Employees will resist, productivity will tank, and you’ll be the villain for taking away their favourite productivity hack.
And organisations are hiding the fallout. Nearly half (45%) opted not to report an AI-related security breach due to concerns over reputational damage. The data breach you don’t know about can’t hurt you, right? Wrong.
The Blame Game: Who’s Responsible for This Mess?
76% of organisations report ongoing internal debate about which teams should control AI security, illustrating a leadership vacuum at the worst possible time.
Whilst executives argue about org charts, 28% of employees aren’t provided with a work-approved AI tool. So they bring their own.
Can you blame them? 57% of employees express positive emotions about AI in the workplace. They’ve discovered that AI can write their emails, respond to internal communications, and generally make their jobs less soul-crushing.
From their perspective, they’re being productive. From a security perspective, they’re opening attack vectors you didn’t know existed.
The cultural disconnect is jarring: Only 32% of employees expressed concern about company and customer data being fed into AI tools, and just 29% had any worry about the use of shadow AI affecting IT security.
The (Slightly) Good News
It’s not all doom and cybersecurity horror stories. Organisations using AI and automation extensively throughout their security operations saved an average of $1.9 million in breach costs and reduced the breach lifecycle by an average of 80 days.
When implemented correctly—with actual governance, access controls, and monitoring—AI can be a force multiplier for security teams.
96% of companies are increasing their AI security budgets in 2025, which suggests that at least some organisations are waking up to the problem. The question is whether they’re moving fast enough.
Best Practices to Secure Data in AI Systems
As AI systems grow more complex, protecting the data that fuels them is critical. Experts recommend the following measures to safeguard AI data, whether stored on-site or in the cloud.
1. Use Trusted Data and Track Provenance
Source data from verified, authoritative providers. Maintain a secure provenance log—ideally a cryptographically signed, immutable ledger—to trace data origins and detect tampering.
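A minimal sketch of what a tamper-evident provenance log can look like, assuming an in-memory, hash-chained design. The record fields and the `record_provenance` helper are illustrative rather than any standard; a production system would persist entries to an append-only store and sign them.

```python
import hashlib
import json
import time

def record_provenance(log, source, dataset_sha256, note=""):
    """Append a tamper-evident entry to a hash-chained provenance log.

    Each entry embeds the hash of the previous entry, so altering any
    historical record invalidates every hash that follows it.
    """
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "source": source,            # identifier of the verified provider
        "dataset_sha256": dataset_sha256,
        "note": note,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

log = []
data = b"training batch 001"
record_provenance(log, "vendor-feed-A", hashlib.sha256(data).hexdigest(), "initial ingest")
```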
2. Protect Data Integrity
Apply checksums and cryptographic hashes to confirm data hasn’t been altered during storage or transfer. Integrity checks ensure datasets remain accurate and trustworthy.
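A short sketch of the idea using only Python’s standard library: stream each dataset file through SHA-256 and compare against the digest recorded when the dataset was published. The `verify_dataset` helper and the file path in the comment are hypothetical.

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large datasets needn't fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(path, expected_hex):
    """Raise if the file on disk no longer matches its published digest."""
    actual = sha256_of_file(path)
    if actual != expected_hex:
        raise ValueError(f"integrity check failed for {path}: got {actual}")

# verify_dataset("train/batch-001.parquet", expected_hex=published_digest)
```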
3. Authenticate Data Revisions
Use digital signatures—preferably quantum-resistant—to verify authenticity and prevent unauthorized changes. Each dataset version should be cryptographically signed and validated by a trusted authority.
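Standardised post-quantum signatures are still making their way into mainstream libraries, so this sketch uses Ed25519 from the `cryptography` package as a stand-in to show the sign-and-verify flow; swap in an approved quantum-resistant scheme (e.g. ML-DSA) once your tooling supports one.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the signing key lives with a trusted authority (e.g. inside an HSM).
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()

dataset_revision = b"serialized contents of dataset v2"
signature = signing_key.sign(dataset_revision)

# Consumers validate every revision before use; verify() raises
# cryptography.exceptions.InvalidSignature if the data was altered.
public_key.verify(signature, dataset_revision)
```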
4. Operate on Trusted Infrastructure
Adopt Zero Trust architecture and secure enclaves for data processing. Isolate sensitive operations to prevent tampering and protect data during computation.
5. Classify and Control Access
Label data by sensitivity and apply appropriate access controls and encryption. Align AI system outputs with the same classification level as their inputs.
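One way to make “outputs inherit their inputs’ classification” mechanical is an ordered label scale, sketched below. The four-level scale and the `Sensitivity` names are assumptions, not a standard taxonomy.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

def output_label(*input_labels: Sensitivity) -> Sensitivity:
    """An AI system's output is classified at least as high as its most sensitive input."""
    return max(input_labels)

def check_access(user_clearance: Sensitivity, data_label: Sensitivity) -> bool:
    """Grant access only when the user's clearance meets or exceeds the label."""
    return user_clearance >= data_label

label = output_label(Sensitivity.INTERNAL, Sensitivity.CONFIDENTIAL)
assert label is Sensitivity.CONFIDENTIAL
assert not check_access(Sensitivity.INTERNAL, label)
```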
6. Encrypt Everything
Use strong encryption—AES-256 for data at rest and TLS with AES-256 or post-quantum protocols for data in transit. Follow NIST SP 800-52r2 for secure TLS implementation.
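A minimal at-rest example with AES-256-GCM via the `cryptography` package. Key handling is the hard part and is elided here: the inline key generation is for illustration only, and a real deployment would fetch keys from a KMS or HSM.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in production, fetch from a KMS/HSM
aead = AESGCM(key)

nonce = os.urandom(12)                      # GCM nonce must be unique per (key, message)
plaintext = b"quarterly revenue forecast"
ciphertext = aead.encrypt(nonce, plaintext, b"dataset-v2")  # last arg: associated data

recovered = aead.decrypt(nonce, ciphertext, b"dataset-v2")
assert recovered == plaintext
```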
7. Secure Storage Devices
Store data only in FIPS 140-3–certified devices. Choose Security Level 3 or higher for protection against advanced intrusion attempts.
8. Use Privacy-Preserving Methods
Implement data masking, differential privacy, or federated learning to protect personal and sensitive data during training and sharing. Weigh computational tradeoffs carefully.
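As one concrete example, the Laplace mechanism for differential privacy adds calibrated noise to an aggregate before release. The sketch below assumes a simple count query and an illustrative epsilon; real deployments track a privacy budget across all queries.

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Release a count with Laplace noise calibrated to sensitivity 1.

    One person joining or leaving the data changes a count by at most 1,
    so noise drawn from Laplace(1/epsilon) gives epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

salaries = [52_000, 61_000, 75_000, 98_000, 120_000]
noisy = dp_count(salaries, lambda s: s > 70_000, epsilon=0.5)
```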
9. Delete Data Safely
Before repurposing or disposing of drives, use secure deletion methods—cryptographic erase, block erase, or overwrite—as outlined in NIST SP 800-88.
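Cryptographic erase is often the most practical of these for cloud storage: if the data was encrypted at rest (practice 6), destroying the key renders the ciphertext unrecoverable. The sketch below is purely conceptual; Python cannot guarantee memory is wiped, and in production the key never leaves the KMS/HSM, where “erase” is a single destroy-key call.

```python
import os

# Assume data at rest was encrypted under this key (see practice 6).
key = bytearray(os.urandom(32))  # 256-bit key, illustrative only

def crypto_erase(key_material: bytearray) -> None:
    """Best-effort in-place key destruction.

    Zeroing a bytearray is only illustrative: Python may hold other
    copies in memory. In production the key lives in a KMS/HSM, so
    'erase' is a destroy-key API call there, not local zeroing.
    """
    for i in range(len(key_material)):
        key_material[i] = 0

crypto_erase(key)
# With the key gone, the AES-256 ciphertext on the drive is computationally
# unrecoverable, which NIST SP 800-88 recognises as cryptographic erase.
```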
10. Continuously Assess Risk
Perform regular data security assessments using NIST frameworks (SP 800-37r2 and AI 100-1). Monitor for evolving threats, update controls, and strengthen your organization’s security posture.
What Needs to Happen (But Probably Won’t, Fast Enough)
The solutions aren’t rocket science. They’re just not being implemented. Organisations need continuous monitoring for unsanctioned AI tools.
They need to establish clear AI governance policies that don’t just exist in a PowerPoint deck but are actually enforced. They need to provide approved AI tools that are secure enough to satisfy IT and useful enough that employees won’t route around them.
Only 34% of organisations with AI governance policies regularly check their networks for unsanctioned tools. That’s the real problem—not the absence of policies, but the absence of enforcement and visibility.
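As a first approximation of that visibility, even a naive scan of proxy logs for known AI SaaS domains beats nothing. The domain list, log format, and sanctioned-tool entry below are all assumptions; real deployments lean on CASB/SSE tooling with far broader coverage.

```python
# Flag outbound requests to known AI SaaS domains in proxy logs.
KNOWN_AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED = {"copilot.internal.example.com"}  # hypothetical approved tool

def flag_shadow_ai(proxy_log_lines):
    """Yield (user, domain) pairs for traffic to AI services not on the sanctioned list."""
    for line in proxy_log_lines:
        user, domain = line.split()[:2]        # assumed 'user domain ...' log format
        if domain in KNOWN_AI_DOMAINS and domain not in SANCTIONED:
            yield user, domain

sample_log = ["alice chat.openai.com GET /", "bob intranet.local GET /"]
for user, domain in flag_shadow_ai(sample_log):
    print(f"unsanctioned AI use: {user} -> {domain}")
```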
Security teams need to move from reacting to AI risks to anticipating them. That means asset management for AI systems, risk assessments specific to AI vulnerabilities, data security built around AI workflows, and incident response plans that account for AI-specific threats.
All the boring stuff that gets cut from budgets when executives want to talk about the “AI revolution.”
