When asked to estimate Shadow AI usage in their environments, senior leaders gave answers that ranged from "a few tools" to "every employee, every day." But the more revealing data is what happens when those leaders go looking. One in four discovered substantially more than they thought, and the gap between confidence and control is wider than the field generally admits.
When asked: "Was the discovered number much higher, much lower, or roughly the same as your initial estimate?"
25 of 104 senior leaders (24%) said the discovered count was much higher than they had initially believed.
When asked: "What percentage of your employees would you estimate are using unauthorized GenAI apps?"
Median estimate: 30% of employees using unauthorized GenAI apps. Based on 101 of 104 respondents who provided a specific percentage.
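For readers checking the arithmetic, a minimal Python sketch follows. The 25-of-104 count and the 101-respondent base come from the report; the individual estimates list is a placeholder distribution invented only so a median can be computed, not the actual survey records.

```python
# Minimal sketch of the two headline figures above.
# The estimates list is a PLACEHOLDER distribution, not the survey data;
# only the counts 25, 101, and 104 come from the report.
import statistics

n_respondents = 104
much_higher = 25
print(f"Much higher than estimated: {much_higher / n_respondents:.0%}")  # 24%

# 101 of 104 respondents gave a specific percentage; three did not.
estimates = [10] * 25 + [30] * 51 + [60] * 25   # placeholder values
assert len(estimates) == 101
print(f"Median estimate: {statistics.median(estimates):.0f}%")           # 30%
```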
When asked: "How confident are you that you can identify which AI services employees are sending data to, and what data is being sent?"
Confidence and control are not lining up. The leaders most certain they can see what employees are doing with AI still report a 46% incident rate, only 11 points lower than their moderately confident peers at 57%. Visibility, as defined today, is not the same as the granular, prompt-level observability needed to actually catch sensitive data on the way out.
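To make that comparison concrete, the sketch below recomputes incident rate by confidence bucket. The bucket sizes are assumptions chosen only to reproduce the 46% and 57% figures quoted above.

```python
# Incident rate by confidence bucket. Bucket sizes are ASSUMED; only the
# resulting rates (46% vs 57%) match figures quoted in the report.
buckets = {
    "highly confident":     {"n": 35, "incidents": 16},   # 16/35 -> 46%
    "moderately confident": {"n": 49, "incidents": 28},   # 28/49 -> 57%
}
for label, b in buckets.items():
    print(f"{label}: {b['incidents'] / b['n']:.0%} incident rate (n={b['n']})")
```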
The Shadow AI conversation is often framed around what could happen. The data here suggests the question is academic. A clear majority of senior security leaders confirmed that sensitive data has already been submitted to unauthorized AI services in the past 12 months. Source code dominates the list of leaked content, with strategic documents, financial data, and customer information close behind.
When asked: "Has your organization experienced a confirmed or suspected incident of sensitive data being submitted to an unauthorized AI service in the past 12 months?"
Among the 55 organizations reporting incidents, when asked: "What kind of data was involved? Source code, customer PII, financial data, strategic documents, something else?"
Multi-select question. Each bar shows the percentage of incident-reporting organizations that named that data type. Total exceeds 100% because incidents commonly involved multiple types.
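A sketch of that multi-select tabulation, with placeholder counts: each percentage uses the 55 incident-reporting organizations as its denominator, which is why the bars can sum past 100%.

```python
# Multi-select tabulation over the 55 incident-reporting organizations.
# Mention counts are PLACEHOLDERS; the report only ranks source code first.
n_orgs = 55
mentions = {
    "source code":         34,
    "strategic documents": 28,
    "financial data":      26,
    "customer PII":        24,
}
for data_type, count in mentions.items():
    print(f"{data_type}: {count / n_orgs:.0%}")
print(f"sum of bars: {sum(mentions.values()) / n_orgs:.0%}")  # >100% by design
```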
Cross-tab: Industry × Q5 (incident in past 12 months)
Tech n=53, Manufacturing n=34, Other (Financial Services, Healthcare, Retail, Government combined) n=17. Other industries shown in aggregate due to small individual sample sizes.
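The aggregation rule behind the "Other" bar can be expressed directly. In the sketch below, the split of the 17 "Other" respondents across the four small industries is an assumption, since the report publishes only the combined n.

```python
# Collapse industries below the n=10 stability threshold into "Other".
# The four small-industry counts are ASSUMED (only their sum, 17, is reported).
raw = {"Technology": 53, "Manufacturing": 34,
       "Financial Services": 6, "Healthcare": 5, "Retail": 3, "Government": 3}
MIN_N = 10
collapsed = {}
for industry, n in raw.items():
    key = industry if n >= MIN_N else "Other"
    collapsed[key] = collapsed.get(key, 0) + n
print(collapsed)   # {'Technology': 53, 'Manufacturing': 34, 'Other': 17}
```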
The pattern across incident reports is consistent: developers reaching for AI assistance with code, executives summarizing strategic documents, and back-office teams pasting financial data into free GenAI tools to speed up their work. Productivity intent, security consequence. The leaders most aware of this are the ones whose tools are already catching it.
Most organizations now have a written GenAI acceptable-use policy. Far fewer have backed that policy with technical controls. The gap between the two creates a brittle compliance posture, where employees are trusted to follow the rules but no system is watching to confirm they do. The resulting blind spot is what makes Shadow AI so persistent.
When asked: "Does your organization have a formal acceptable-use policy for GenAI tools, and if so, what technical controls are actually enforcing it?"
"Technical enforcement" includes DLP, CASB/SSE, secure web gateways, endpoint controls, identity-based restrictions, or AI-specific governance tools. "Policy only" means the organization has written rules but lacks technical mechanisms to enforce them.
Cross-tab: Q7 enforcement state × Q5 incident rate
Among the 55 organizations with technical enforcement, 36 (65%) had a confirmed or suspected incident. Among the 31 with policy but no technical enforcement, 13 (42%) reported incidents. Technical enforcement reveals incidents that policy-only environments cannot see.
The 23-point incident-rate gap between organizations with technical enforcement and those running on policy alone is a visibility gap, not a security gap. Teams with DLP, CASB, and AI-specific controls are seeing the leaks that always existed. Teams running on honor-system policies are flying blind, and the absence of reported incidents in those environments is not reassurance. It's the problem.
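The cross-tab arithmetic recomputes directly from the counts quoted above; nothing here is assumed.

```python
# Recompute the enforcement cross-tab from the counts quoted above.
segments = {"technical enforcement": (36, 55),   # (incident orgs, segment n)
            "policy only":           (13, 31)}
pct = {label: round(100 * inc / n) for label, (inc, n) in segments.items()}
print(pct)  # {'technical enforcement': 65, 'policy only': 42}
print(f"gap: {pct['technical enforcement'] - pct['policy only']} points")  # 23
```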
Beyond Shadow AI itself, security leaders see a broader pattern. AI-powered phishing, deepfake fraud, automated exploit development, and adversarial AI are all moving faster than the cycles of traditional defense. Two-thirds of leaders willing to take a clear position say their teams are falling behind. Detection is where they feel furthest behind, and the budget is moving to match.
When asked: "Do you feel like AI-driven threats — not just data leakage, but AI-powered attacks, automated exploits, deepfakes — are outpacing your team's ability to defend against them?"
Among the 86 respondents who took a clear yes/no position, 57 (66%) reported feeling outpaced. Including the 18 who gave nuanced or non-committal answers, 55% of all 104 respondents leaned outpaced.
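Both denominators come straight from the text, so this one also recomputes exactly:

```python
# Recompute the "outpaced" shares from the counts in the text.
outpaced, clear_answers, all_respondents = 57, 86, 104
print(f"share of clear yes/no answers: {outpaced / clear_answers:.0%}")   # 66%
print(f"share of all respondents:      {outpaced / all_respondents:.0%}") # 55%
```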
Follow-up question: "Where's the gap widest: detection, response, or intelligence?"
Multi-select among the 64 respondents who specified at least one area. Many named more than one, which is why the bars exceed 100% in total.
Cross-tab: Industry × Q10 budget increase due to GenAI
Tech n=53, Manufacturing n=34, Other (Financial Services, Healthcare, Retail, Government combined) n=17. The 22-point gap between Technology and Manufacturing reflects different rates of GenAI adoption and different risk postures across the two sectors.
Detection sits at the top of the gap list because it is the precondition for everything else: without seeing the threat, response and intelligence teams have nothing to act on. The buying signal follows in the budget data, where 78% report shifting security budget in response to GenAI and DLP investment is named most frequently. Leaders are funding the visibility they currently lack, with AI-aware monitoring at the front of the line.
The picture from these 104 senior security leaders is consistent. Sensitive data is already moving to unauthorized AI services. Most organizations have written the right policies but lack the technical controls to back them up. The pace of AI-driven threats is outrunning the cycles of traditional defense, and detection is where the gap is widest.
What the research suggests is needed: integrated AI-aware data loss prevention, real-time visibility into GenAI usage across endpoints, cloud, and network, and threat intelligence that adapts as the AI threat landscape moves. Fortinet's portfolio is positioned to address each of these dimensions.
Learn More About Fortinet's AI Security Solutions →

Fortinet commissioned this primary research with 104 senior security and IT leaders across six target industries. Interviews used a conversational format and were fielded April 24–27, 2026.
Percentages are calculated as a share of unique respondents, not total mentions. Multi-select questions are flagged where they exceed 100% by design (data types leaked in Q5 follow-up; gap location in Q9 follow-up; tools used in Q8). Charts presenting cross-tabs include sample sizes for each segment. Industries with fewer than 10 respondents are presented in aggregate where individual percentages would be statistically unstable. Open-ended responses were classified into the categories shown using consistent rules; "Unclear" reflects responses that did not align with any defined category.
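As a sketch of that denominator rule, with invented toy responses: each respondent counts once per category, so multi-select bars are shares of unique respondents, never shares of total mentions.

```python
# Denominator rule: percentages are shares of unique respondents.
# The three toy responses below are INVENTED for illustration.
responses = {
    "r1": ["source code", "financial data"],
    "r2": ["source code"],
    "r3": ["customer PII", "source code", "source code"],  # duplicate mention
}
counts = {}
for named in responses.values():
    for category in set(named):        # de-duplicate within a respondent
        counts[category] = counts.get(category, 0) + 1
n = len(responses)
for category, c in sorted(counts.items()):
    print(f"{category}: {c}/{n} = {c / n:.0%}")   # source code: 3/3 = 100%
```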