It’s 9:20 on a Monday morning, and Sean, a mid-level marketing manager, has already committed an unwitting act of leaking a tranche of personally identifiable information (PII) to an unapproved large language model (LLM).
It’s not entirely his fault—there’s an organizational mandate to use generative AI to work more efficiently, and the first urgent item on his list seemed like a perfect candidate.
He thoughtfully crafts his prompt, uploads a CSV to the LLM du jour and, as if by magic, saves half a day of fighting with a spreadsheet. What he hasn’t considered is that this time-saving act qualifies as a shadow AI data breach, and Sean’s company could have a multimillion-dollar problem on its hands.
What We Do in the Shadows
“Sean” isn’t real, but his situation is, and we likely all have an AI novice in our lives who needs a cybersecurity refresher. Like its progenitor, shadow IT, shadow AI encompasses any use of AI tools in a work environment on which a security team hasn’t signed off. That includes our hypothetical Sean’s innocent act of uploading proprietary data to a commercial generative AI service. Gen AI is emerging as a critical tool across nearly all industries: engineering teams are using AI coding tools to ship products faster, designers are using AI image and video generators to concept, build and prototype, and physicists are using it to speed up the discovery of black holes.
As useful as these tools are becoming in the workplace, they qualify as shadow AI when used without IT and security team oversight or consent. “Shadow AI is just not knowing what you don't know with respect to how your organization is using AI,” said Vishal Kamat, vice president of data security at IBM. “Let’s assume people are going to do things for mostly good reasons, sometimes inadvertently for bad reasons, and sometimes maliciously; but you can’t make that determination without knowing what data is flowing through your network.”
The use of shadow AI, said Kamat, can include workers accessing third-party open-source LLMs in an organization’s cloud environment. Or it can be a malicious act of using sanctioned AI for unsanctioned actions like prompt injections (disguising malicious inputs as legitimate prompts to elicit nefarious actions). In either case, workers are experimenting with the full gamut of AI tools; recent studies have shown that only 41% of enterprises have deployed sanctioned AI internally, but more than 75% of global workers (nearly double that figure) are using generative AI ahead of an official CISO sign-off.
Kamat notes that pressure to adopt AI is applied from the top down, leaving workers feeling the need to integrate generative AI into their workflows or be left behind. For the most part, executives who see AI adoption as the path to dominance are applauding these efforts. However, without a plan for implementation, this feedback loop creates an AI education gap. While the average user isn’t vibe hacking a competitor’s email system, most workers have not been trained in proper AI governance, assuming that their organizations even have governance policies. As it turns out, many do not—and some are paying a price for it.
Governance as Guardrails
In the world of enterprise AI adoption, governance refers to the policies, processes, and oversight mechanisms that ensure AI systems are developed and used responsibly. With so many potential avenues for both breaches and well-intentioned (though misguided) misuse of AI, the best way to stay ahead of threats without blocking progress is to get security and governance teams talking to each other, and ultimately to their entire workforce.
“Businesses are going to adopt AI,” Kamat said. “If security and governance teams are not working hand in glove, it's going to slow that adoption pace.”
Sridhar Muppidi, chief technology officer for IBM’s software security portfolio, said the solution is having a clear governance structure that allows these two teams to work together before something goes wrong. “How do I look at security and governance as enablers of safety and scalable use of AI—letting the good guys in—versus just keeping the bad guys out?” he said.
Supply chain attacks are the most common threat to businesses, according to the 2025 Cost of a Data Breach Report. But another risk vector sits between the keyboard and the chair: the user. It wouldn’t necessarily be the hypothetical Sean’s fault if model drift skewed the analysis in his executive brief, but that brief would still affect business outcomes as incorrect information traveled through the decision chain. When using AI to upskill or move quickly, model hallucinations or output errors may go unnoticed by users who lack the domain expertise to catch them. Sean’s fault would be not checking the gen AI’s work.
In a scenario like this, governance goes beyond setting ground rules for usage or blocking access—by monitoring AI performance, governance enables teams to scale quickly and confidently. For example, when a UK-based genomics firm brought an AI-powered colorectal cancer screening tool from research phase to regulatory review and commercialization, it implemented IBM’s watsonx.governance dashboard to keep tabs on model health, accuracy, drift, and bias. The resulting insight allowed regulators and internal stakeholders to maintain visibility into the inner workings of a potentially lifesaving AI application.
Given the sensitive nature of the data flowing through these systems, a bit of friction between IT, security, governance, and legal teams is predictable: each team has priorities that don’t align with the others’. IT wants broad access for efficiency, security demands strict controls, and legal focuses on compliance. Governance tries to balance all three. When none of these approaches mesh, AI governance and innovation stall.
This lack of oversight is borne out by the data: according to the Cost of a Data Breach Report, 63% of the breached organizations studied did not have AI governance policies in place. Of the organizations that do have a policy, “less than half have an approval process for AI deployments, and 61% lack AI governance technologies.” A mere third of the studied organizations performed audits for unsanctioned AI.
This is where software tools and automated dashboards can provide guidance, Muppidi said, bringing together governance and security teams that don’t necessarily work at the same pace and enabling faster AI adoption. “The question is, how do you do that in a manner that reduces risk?” he said. “This is where having a structured governance program is going to help accelerate innovation.” That starts with finding out what kind of AI is flowing through your network.
Calculating Risk
The benefits of integrating AI into your workflow may make company-wide implementation feel non-negotiable, but the risks of doing so without oversight are substantial. “People think that shadow gen AI is just one thing,” Muppidi said. “But it's not—it's the proliferation of data” as it leaves the organization’s control and enters a tangle of APIs and services. Leaked data can be incorporated into the LLM’s training data and then included in responses to prompts by anyone, anywhere. Integrating unapproved data into an LLM can create model drift, tainting data and disrupting outcomes in wholly novel ways.
The labyrinth of interconnected systems that make generative AI (and the modern internet) possible also creates an environment where a business’s data security is only as good as its most insecure vendor. The Cost of a Data Breach Report details that the most common AI security incident is supply chain compromise—attacks that occur through the third-party applications and APIs that AI services rely on to function. This broad attack surface allows malicious actors to probe for operational credentials or chip away at LLMs without having to strike at the foundational model. “So it's not just the fact that you're using a gen AI component, but that that gen AI component is connecting to a greater ecosystem,” Muppidi said.
In some cases, the interconnectivity of systems can facilitate a jump to the physical world. For example, a dam’s control system infiltrated through shadow AI—say, through AI-facilitated credential harvesting—could literally open the floodgates and put human lives at risk. Bringing it closer to home, one could imagine accepting a malicious calendar invitation that triggers your smart home’s boiler to malfunction and your doors to lock. While these scenarios have been largely confined to security conferences and research labs, attackers continue to develop AI as a tool to scale their activities. “Adversaries are definitely using AI to get smarter, faster, more automated in how they're breaching,” Kamat said.
Intentional or accidental, all breaches carry a cost. In addition to the technical risks, large enterprises face financial and reputational threats. First, companies lose money by paying ransoms to attackers, though of late fewer are opting to do so. Post-breach, they can face an average of $4.4 million in regulatory fines and escalation costs. Per IBM, in addition to the increased exposure of PII like health or financial data, incidents that involved high levels of shadow AI added another $670,000 to that number.
Getting Under the Hood
There are methods for sniffing out shadow AI before it’s too late. IT teams can monitor networks to see what’s coming across their firewalls, catching internal data flowing externally. Security teams can supervise privileged-access holders who have high-value data at their fingertips; the base technologies have been available in other IT functions for some time. Now, tools that continuously and proactively monitor networks and source code have been adapted to spot shadow AI and pick up the pace of adoption.
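As a purely illustrative sketch of what that kind of network-level discovery could look like, the short Python example below scans a web-proxy log for outbound requests to a small watchlist of public gen AI endpoints and tallies how much data each internal host sent to them. The log format, column names, and domain list are hypothetical assumptions for this example, not a description of IBM’s or any other vendor’s tooling.

# Hypothetical sketch: tally outbound traffic to known gen AI endpoints from a proxy log.
# Assumes a CSV log with columns: timestamp, source_host, destination_domain, bytes_sent.
import csv
from collections import defaultdict

GEN_AI_DOMAINS = {  # illustrative watchlist; extend for your own environment
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(log_path, min_bytes=10_000):
    """Return total bytes each internal host sent to watched gen AI domains."""
    totals = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["destination_domain"] in GEN_AI_DOMAINS:
                totals[row["source_host"]] += int(row["bytes_sent"])
    # Surface only hosts that pushed enough data to warrant a closer look.
    return {host: sent for host, sent in totals.items() if sent >= min_bytes}

if __name__ == "__main__":
    for host, sent in sorted(flag_shadow_ai("proxy.log").items(), key=lambda kv: -kv[1]):
        print(f"{host}: {sent} bytes sent to gen AI endpoints")

Commercial data-loss-prevention and network-monitoring products perform this kind of inspection continuously and with far richer context; the point is simply that visibility starts with knowing which endpoints internal traffic is reaching.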
That expanded domain can be enlightening. In one case, an IBM client sought to get ahead of suspected shadow AI on their network and found that they had engaged in the nick of time: discovery dug up almost 500 AI projects on their systems—double the number expected.
Once leaders have identified how AI is being used in their organizations, a cross-functional team that encompasses legal, IT, and security can start laying the groundwork for a governance rollout: legal and compliance can begin setting ethical guidelines and guardrails for employees, while IT and security create processes to drive compliance and to detect and eradicate breaches as quickly as possible.
This allows teams and individuals to operate at the speed dictated by their business landscape. “Organizations want to innovate very quickly and security is often seen as an inhibitor,” Muppidi said. “But if you convert that into more of a guardrail versus a stoplight, then there is more opportunity for innovation teams to adopt AI safely and securely.” In other words, by providing knowledge workers with the tools to operate safely and within policy, IT and governance teams can sidestep the perceived need to implement unsanctioned solutions, letting innovators innovate at full throttle. “This is the freeway with a 60-mile-per-hour speed limit,” Muppidi said. “Here are the guardrails. As long as you're in the guardrails, you can run a little bit faster, right? And that comes with good policies. And the policies are driven by governance.”
As commercial generative AI and custom-tailored agents become more common and fully integrated into workflows, it’s clear that AI can’t and shouldn’t be stopped; the robot is out of the lab and ping-ponging between cubicles and workstations around the globe. That can be a boon to how we work; it just needs the right oversight, and users need guidance. Unfortunately, it’s too late for Sean.
Learn more in IBM’s 2025 Cost of a Data Breach Report.


