Fortinet Research Report · April 2026

The Shadow AI Visibility Crisis

Senior security leaders at large enterprises report incidents involving sensitive data sent to unauthorized AI services, policies that can't be enforced, and AI threats moving faster than the defenses built to stop them.
104 Senior Security Leaders · Government, Financial Services, Healthcare, Manufacturing, Retail, Technology
Shadow AI has moved from emerging risk to active loss. More than half of senior security leaders at organizations with 1,000+ employees report a confirmed or suspected incident involving sensitive data submitted to unauthorized GenAI services in the past 12 months. Source code led the list of data types named. Eight in ten organizations have written a GenAI acceptable-use policy. Only half have the technical controls to enforce one.
53%
had a confirmed or suspected GenAI data leakage incident in the past 12 months
53%
have the technical controls to enforce their GenAI acceptable-use policy, despite 80% having one on paper
78%
have increased their security budget in response to GenAI risk, with DLP investment frequently named

What Security Leaders Don't Know About Shadow AI

When asked to estimate Shadow AI usage in their environments, senior leaders gave answers that ranged from "a few tools" to "every employee, every day." But the more revealing data is what happens when those leaders go looking. One in four discovered substantially more than they thought, and the gap between confidence and control is wider than the field generally admits.

When asked: "Was the discovered number much higher, much lower, or roughly the same as your initial estimate?"

1 in 4 Leaders Discovered Far More Shadow AI Than They Believed

25 of 104 senior leaders (24%) said the discovered count was much higher than they had initially believed.

When asked: "What percentage of your employees would you estimate are using unauthorized GenAI apps?"

Nearly Two-Thirds Say at Least 1 in 4 Employees Are Using Unauthorized GenAI

Median estimate: 30% of employees using unauthorized GenAI apps. Based on 101 of 104 respondents who provided a specific percentage.

When asked: "How confident are you that you can identify which AI services employees are sending data to, and what data is being sent?"

Most Leaders Say They Are Confident. Their Incident Rates Suggest Otherwise.

Key Insight

Confidence and control are not lining up. The leaders most certain they can see what employees are doing with AI still report incident rates near 46%, only 11 points lower than their moderately confident peers (57%). Visibility, as defined today, is not the same as the granular, prompt-level observability needed to actually catch sensitive data on the way out.

"The single biggest gap is our lack of real-time visibility into 'contextual data egress': knowing not just that data is leaving, but the intent behind it. Closing this requires moving away from legacy, static blocking toward an integrated AI Security Posture Management framework that can sanitize prompts and enforce granular usage policies at the browser level."
— CISO, Technology, $500M–$999M revenue
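The browser-level prompt sanitization this CISO describes can be illustrated with a minimal sketch: scrub obviously sensitive tokens from a prompt before it leaves the endpoint. The redaction rules below (IPv4 addresses, email addresses, credential-style key/value pairs) are illustrative assumptions, not any product's actual rule set; real AI security posture management tools use far richer classifiers.

```python
import re

# Hypothetical redaction rules for illustration only. Each pair is
# (pattern to match, placeholder to substitute).
REDACTION_RULES = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[REDACTED_IP]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED_EMAIL]"),
    (re.compile(r"(?i)\b(?:api[_-]?key|secret|token)\s*[:=]\s*\S+"),
     "[REDACTED_CREDENTIAL]"),
]

def sanitize_prompt(prompt: str) -> tuple[str, int]:
    """Return the sanitized prompt and the number of substitutions made."""
    total = 0
    for pattern, placeholder in REDACTION_RULES:
        prompt, count = pattern.subn(placeholder, prompt)
        total += count
    return prompt, total

clean, hits = sanitize_prompt(
    "Summarize: server 10.0.4.7, contact ops@example.com, api_key=abc123"
)
# hits == 3; the IP, email, and credential are replaced with placeholders
```

A nonzero substitution count is also the natural hook for the "granular usage policies" in the quote: log the event, warn the user, or block the submission outright depending on policy.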

More Than Half Reported a GenAI Data Incident in the Past 12 Months

The Shadow AI conversation is often framed around what could happen. The data here suggests the question is academic. A clear majority of senior security leaders confirmed that sensitive data has already been submitted to unauthorized AI services in the past 12 months. Source code dominates the list of leaked content, with strategic documents, financial data, and customer information close behind.

When asked: "Has your organization experienced a confirmed or suspected incident of sensitive data being submitted to an unauthorized AI service in the past 12 months?"

53% Reported a Confirmed or Suspected GenAI Data Incident in the Past 12 Months

Among the 55 organizations reporting incidents, when asked: "What kind of data was involved? Source code, customer PII, financial data, strategic documents, something else?"

Source Code Tops the List of Data Named in GenAI Incidents

Multi-select question. Each bar shows the percentage of incident-reporting organizations that named that data type. Total exceeds 100% because incidents commonly involved multiple types.

Cross-tab: Industry × Q5 (incident in past 12 months)

Technology Reports Nearly Twice the Incident Rate of Manufacturing

Tech n=53, Manufacturing n=34, Other (Financial Services, Healthcare, Retail, Government combined) n=17. Other industries shown in aggregate due to small individual sample sizes.

Key Insight

The pattern across incident reports is consistent: developers reaching for AI assistance with code, executives summarizing strategic documents, and back-office teams pasting financial data into free GenAI tools to speed up their work. Productivity intent, security consequence. The leaders most aware of this are the ones whose tools are already catching it.

"About six months ago, we had a new hire in our legal ops team who took a heavily redacted contract, stripped out the formatting, and pasted the whole thing into a free, unsanctioned AI summarizer to help them draft an executive summary. They didn't realize the redactions were basically just black highlights and the underlying text went straight to the model."
— CIO, Technology, $5B+ revenue
"Yes, we had one confirmed incident 8 months ago where a junior engineer in manufacturing ops pasted a log file containing internal IP addresses and a partial config snippet into the free tier of ChatGPT to debug a script, and we caught it only because our DLP flagged the outbound text pattern."
— CTO, Technology, $5B+ revenue
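The detection path this CTO describes, a DLP rule flagging outbound text containing internal IP addresses, can be sketched with a single detector: find IPv4-shaped strings, then keep only those in private (RFC 1918) ranges. This one-rule check is an illustrative assumption; production DLP engines layer many detectors and contextual signals.

```python
import ipaddress
import re

# Strings shaped like IPv4 addresses; validity is checked separately below.
IP_CANDIDATE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def find_internal_ips(text: str) -> list[str]:
    """Return private IPv4 addresses found in outbound text."""
    hits = []
    for candidate in IP_CANDIDATE.findall(text):
        try:
            addr = ipaddress.ip_address(candidate)
        except ValueError:
            continue  # e.g. 999.1.1.1 looks like an IP but is not one
        if addr.is_private:
            hits.append(candidate)
    return hits

log_line = "ERROR conn refused host=10.12.0.45 gw=8.8.8.8 retrying"
flagged = find_internal_ips(log_line)
# flagged == ["10.12.0.45"]; 8.8.8.8 is public and not reported
```

Splitting the cheap regex match from the semantic check (`is_private`) keeps false positives down: public addresses and malformed candidates pass through without an alert.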

Policies Are Common. Enforcement Is Not.

Most organizations now have a written GenAI acceptable-use policy. Far fewer have backed that policy with technical controls. The gap between the two creates a brittle compliance posture, where employees are trusted to follow the rules but no system is watching to confirm they do. The resulting blind spot is what makes Shadow AI so persistent.

When asked: "Does your organization have a formal acceptable-use policy for GenAI tools, and if so, what technical controls are actually enforcing it?"

Half of Senior Security Leaders Cannot Technically Enforce Their Own GenAI Policy

"Technical enforcement" includes DLP, CASB/SSE, secure web gateways, endpoint controls, identity-based restrictions, or AI-specific governance tools. "Policy only" means the organization has written rules but lacks technical mechanisms to enforce them.

Cross-tab: Q7 enforcement state × Q5 incident rate

Organizations With Technical Enforcement Are Catching More Incidents

Among the 55 organizations with technical enforcement, 36 (65%) had a confirmed or suspected incident. Among the 31 with policy but no technical enforcement, 13 (42%) reported incidents. Technical enforcement surfaces incidents that policy-only environments cannot see.

Key Insight

The 23-point incident-rate gap between organizations with technical enforcement and those running on policy alone is a visibility gap, not a security gap. Teams with DLP, CASB, and AI-specific controls are seeing the leaks that always existed. Teams running on honor-system policies are flying blind, and the absence of reported incidents in those environments is not reassurance. It's the problem.

"The core gap is the invisible AI usage layer. Closing this gap requires a new security architecture layer specifically for AI: an AI-native governance layer (not bolt-on DLP) and full prompt + response observability so every AI interaction is logged, classified and linked to identity, device, and workflow."
— CTO, Technology, 2.5K employees, $900M revenue

AI Threats Are Outpacing Defenders

Beyond Shadow AI itself, security leaders see a broader pattern. AI-powered phishing, deepfake fraud, automated exploit development, and adversarial AI are all moving faster than the cycles of traditional defense. Two-thirds of leaders willing to take a clear position say their teams are falling behind. Detection is where they feel furthest behind, and the budget is moving to match.

When asked: "Do you feel like AI-driven threats — not just data leakage, but AI-powered attacks, automated exploits, deepfakes — are outpacing your team's ability to defend against them?"

66% of Leaders With a Clear Position Say AI Threats Are Outpacing Their Defenses

Among the 86 respondents who took a clear yes/no position, 57 (66%) reported feeling outpaced. Including the 18 who gave nuanced or non-committal answers, 55% of all 104 respondents leaned outpaced.

Follow-up question: "Where's the gap widest, detection, response, or intelligence?"

Detection Is Where Defenders Feel Furthest Behind

Multi-select among the 64 respondents who specified at least one area. Many named more than one, which is why the bars exceed 100% in total.

Cross-tab: Industry × Q10 budget increase due to GenAI

Technology Leads DLP Budget Growth. Manufacturing Lags.

Tech n=53, Manufacturing n=34, Other (Financial Services, Healthcare, Retail, Government combined) n=17. The 22-point gap between Technology and Manufacturing reflects different rates of GenAI adoption and different risk postures across the two sectors.

Key Insight

Detection sits at the top of the gap list because it is the precondition for everything else. Without seeing the threat, response and intelligence teams have nothing to act on. The buying signal follows: 78% of organizations have increased security budgets in response to GenAI risk, with DLP investment named most frequently. Leaders are funding the visibility they currently lack, with AI-aware monitoring at the front of the line.

"The biggest gap isn't technology, it's organizational velocity. AI has accelerated the threat landscape, and the company needs to accelerate with it. Once we close that gap, the rest (Shadow AI, data leakage, enforcement, threat detection) becomes manageable instead of existential."
— CTO, large enterprise (10,000+ employees), $500M–$999M revenue

Closing the Shadow AI Gap

The picture from these 104 senior security leaders is consistent. Sensitive data is moving to unauthorized AI services. Most organizations have written the right policies but lack the technical controls to back them up. The pace of AI-driven threats is outrunning the cycles of traditional defense, and detection is where the gap is widest.

What the research suggests is needed: integrated AI-aware data loss prevention, real-time visibility into GenAI usage across endpoints, cloud, and network, and threat intelligence that adapts as the AI threat landscape moves. Fortinet's portfolio is positioned to address each of these dimensions.

Learn More About Fortinet's AI Security Solutions →

Methodology

Fortinet commissioned this primary research with 104 senior security and IT leaders across six target industries. Conversational interview format, fielded April 24–27, 2026.

104
Senior security leaders
100%
Director level or above
1,000+
Employee minimum
$250M+
Annual revenue minimum

Respondents by Industry

Respondents by Title

Percentages are calculated as a share of unique respondents, not total mentions. Multi-select questions are flagged where they exceed 100% by design (data types leaked in Q5 follow-up; gap location in Q9 follow-up; tools used in Q8). Charts presenting cross-tabs include sample sizes for each segment. Industries with fewer than 10 respondents are presented in aggregate where individual percentages would be statistically unstable. Open-ended responses were classified into the categories shown using consistent rules; "Unclear" reflects responses that did not align with any defined category.
