93% of employees now use unsanctioned AI tools, and they’re unintentionally exposing their enterprises. Here’s what CIOs can do about it.

Shadow AI, the unauthorized use of large language models (LLMs), browser extensions, and unvetted application programming interfaces (APIs), is silently becoming a threat to enterprise IT strategy.
Today, a whopping 93% of employees use unsanctioned AI tools to boost productivity, according to CIO. Meanwhile, 91% of them believe their actions pose little to no risk. But there are real dangers in using shadow AI, including data leakage, ethical bias, security implications, hallucinations, and a lack of explainability.
How can IT leaders tackle this growing problem in 2026? The answer is not to say “No” to AI use; it’s to “Know” which unsanctioned tools employees use, how they use them, and what’s being exposed. From there, you can provide or recommend an approved alternative. Let’s break down how to get there.
How Can IT Leaders Detect Unauthorized AI Activity Across Their Networks?
You can’t manage what you can’t see. That’s always been true for IT security, and it holds for shadow AI as well.
So, how can IT leaders and decision makers effectively detect shadow AI? The key is to focus on data flow visibility rather than application access.
- Start With Network Visibility. Use traffic analysis to flag known AI domains and high-frequency API calls.
- Augment That With Endpoint Monitoring. Track the installation of unvetted browser extensions and local small language models (SLMs) running on employees’ computers.
- Finally, Conduct a Thorough Financial Audit. Review expense reports for departmental AI subscriptions that weren’t procured centrally.
This will not only surface what AI tools are being used without corporate approval, but also where your sensitive data is going.
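To make the first step concrete, here’s a minimal sketch of what proxy-log analysis for AI traffic could look like. The domain list, the expected log fields (`domain` per request), and the function name are illustrative assumptions for this sketch, not a vetted blocklist or a real gateway schema:

```python
# Sketch: flag requests to known AI domains in exported proxy/gateway logs.
# AI_DOMAINS is an illustrative sample, not a complete or vetted blocklist.
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "api.openai.com", "claude.ai",
    "gemini.google.com", "api.anthropic.com",
}

def flag_ai_traffic(rows):
    """Count hits to known AI endpoints, given log rows as dicts with a 'domain' key."""
    hits = Counter()
    for row in rows:
        domain = row["domain"].lower()
        # Match the domain exactly, or as a subdomain of a known AI endpoint.
        if domain in AI_DOMAINS or any(domain.endswith("." + d) for d in AI_DOMAINS):
            hits[domain] += 1
    return hits
```

In practice you would feed this from your secure web gateway’s log export and alert on high-frequency or high-volume hits per user, not just the presence of a single request.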
What Are The Enterprise Costs and Risks of Unmanaged AI?
Now, let’s dive deeper into what could go wrong if Shadow AI becomes rampant in your enterprise.
#1: Intellectual Property Loss
You may have heard of the 2023 Samsung incident, in which employees reportedly pasted confidential source code into ChatGPT while debugging, risking its absorption into future model training data. Such a scenario is a real possibility if you don’t effectively manage AI use. And remember, once things like proprietary code or trade secrets get absorbed into public training sets, you’ve lost more than just confidential information; you’ve potentially sacrificed patent protections and competitive advantages.
#2: Compliance Liabilities
Data regulations such as the GDPR (General Data Protection Regulation) in Europe and HIPAA (Health Insurance Portability and Accountability Act) here in the U.S. require organizations to protect sensitive data and demonstrate visibility and governance over high-risk applications. That means your enterprise faces legal exposure every time sensitive data flows through unvetted channels, such as when employees enter patient records, financial data, or personally identifiable information into public AI tools.
#3: Technical Fragmentation
Finally, you risk the growth of siloed “agentic” workflows that don’t integrate with the core enterprise stack. Over time, these disconnected workflows could become brittle, more difficult to maintain, and nearly impossible to secure consistently across the organization.
In a nutshell, unmanaged AI creates “governance debt” that often outweighs short-term efficiency gains. That’s why IT leaders must take control of AI use in their enterprise.
Why Is It Important To Implement a Tiered AI Governance Framework?
A governance framework helps you transition AI from a bottleneck to an enabler of safe innovation.
Here’s how to create one:
- Create a Tiered Access Model: Categorize your AI tools into three buckets: 1) Approved (Enterprise-grade tools with proper contracts and security), 2) Limited-Use (Monitored tools for specific use cases), and 3) Prohibited (tools with unacceptable risk profiles).
- Establish an AI Center of Excellence (CoE): Create a cross-functional task force with representatives from IT, Legal, HR, and other relevant departments to vet AI tools comprehensively against security, privacy, ethical, and compliance implications. This due diligence can surface blind spots you would miss in a siloed evaluation.
- Set Technical Guardrails: Enforce a Data Loss Prevention (DLP) policy that automatically redacts sensitive information before reaching external APIs.
- Offer Sanctioned Alternatives: Start by asking what problems your employees use unsanctioned AI tools to solve, then provide an enterprise-grade version with proper data residency so that everything remains within your corporate perimeter. When the approved tool solves the actual problem, employees won’t resort to shadow AI.
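To illustrate the technical-guardrails step above, here’s a minimal sketch of a DLP-style redaction pass applied to a prompt before it leaves the corporate perimeter. The patterns, placeholders, and function name are illustrative assumptions; a production deployment would rely on a vetted DLP engine rather than hand-rolled regexes:

```python
# Sketch: redact sensitive patterns from a prompt before it reaches an
# external API. The rules below are illustrative examples only.
import re

REDACTION_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),          # U.S. SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),        # card-like digit runs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),  # email addresses
]

def redact(prompt: str) -> str:
    """Replace sensitive patterns with placeholders, in rule order."""
    for pattern, placeholder in REDACTION_RULES:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

The point of the sketch is the placement, not the patterns: redaction has to happen at a chokepoint (a gateway or proxy) that all outbound AI traffic passes through, or employees can simply route around it.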
The next step is to create a culture of responsible AI use.
How Can Enterprises Foster a Culture of Responsible AI Innovation?
As an IT leader, you must always remember that sustainable AI oversight depends on building literacy and trust, not just enforcing restrictions.
Start fostering responsible AI use through comprehensive training. Explain everything from the “why” behind data security to how to prompt-engineer chatbots safely. When people understand that pasting proprietary code into ChatGPT could violate trade secret protections, they’re more likely to think twice before doing it.
At the same time, create a “No-Fault” disclosure process, then use questionnaires to surface the tools being used and their purpose. Once employees know they can report tools they’re using without facing retaliation, they’ll be more forthcoming. And that early voluntary disclosure is far better than discovering your shadow AI exposure during a breach or compliance audit.
Finally, don’t forget to align IT security goals with departmental ROI so that the sanctioned AI tools solve actual problems.
Overcome Shadow AI With Expert Help From NRI
Shadow AI isn’t going away. The genie’s out of the bottle, and trying to stuff it back in will only drive usage underground. Instead, smart IT leaders are getting ahead of the curve by proactively managing how their teams use AI. That starts with:
- Detection: Gaining visibility into what’s actually happening.
- Governance: Building frameworks that guide rather than stifle AI use.
- Responsible Culture: Creating an environment where employees use AI confidently without putting the enterprise at risk.
Ready to tackle shadow AI in your organization? Contact NRI today to assess your current exposure, implement visibility tools, and build an AI governance roadmap that empowers your teams while protecting your enterprise. We’re here to help you turn shadow AI from a hidden threat into a managed opportunity.