AI Security Research and Insights

Shadow AI Is the New Shadow IT: What Your Browser and Endpoints Are Hiding
AI tools are appearing across every surface of the enterprise simultaneously — browsers, IDEs, copilots, extensions, agents, and APIs. Most security teams have little to no visibility into how they are being used.
For the past two decades, security leaders have battled a familiar adversary: Shadow IT.
Employees adopted SaaS tools faster than IT could govern them. Marketing spun up new analytics platforms. Developers deployed cloud services outside approved workflows. Security teams responded with CASB tools, SaaS governance platforms, and cloud security posture management.
But today, a new — and far more complex — version of Shadow IT has emerged.
Shadow AI
Unlike the SaaS tools of the past, AI tools are appearing across every surface of the enterprise simultaneously: browsers, IDEs, copilots, extensions, agents, APIs, and internal models. Many of these tools connect directly to enterprise data and systems.
And most security teams have little to no visibility into how they are being used.
The AI Explosion Inside the Enterprise
AI adoption inside organizations is happening faster than any previous technology wave. Developers are integrating large language models into applications. Employees are using copilots to generate content and analyze data. Teams are experimenting with AI-powered automation and agents.
This innovation is incredibly powerful. But it also creates a reality security leaders are starting to confront: AI is already everywhere inside the enterprise — whether security teams can see it or not.
Consider how AI typically enters an organization today: a developer integrates an LLM API into an internal service; a sales team installs an AI Chrome extension to summarize emails; a product manager uses an AI research assistant to analyze documents; an engineer deploys an AI agent to automate support workflows; a team experiments with internal models connected to sensitive data.
Individually, each action seems harmless. But collectively, they create a rapidly expanding AI ecosystem that is difficult to track, govern, or secure.
Why Shadow AI Is Harder Than Shadow IT
Shadow IT was primarily a SaaS governance problem. Security teams needed visibility into which cloud applications were being used and what data they accessed.
Shadow AI is fundamentally different. AI tools often operate across multiple layers simultaneously.
1. Browser Extensions and Desktop Apps
AI assistants now live directly in the browser — summarization tools, email copilots, AI research assistants, and productivity copilots. These tools can access emails, documents, CRM data, and customer records. In many cases, these integrations happen without security review.
2. Developer Tools and IDE Copilots
AI development assistants are rapidly becoming standard in engineering teams. These tools can access source code, internal APIs, proprietary models, and infrastructure configurations.
3. AI Agents and Automation
The next wave of AI adoption involves autonomous AI agents. These systems can access internal tools, interact with APIs, retrieve enterprise data, and trigger workflows. An AI agent connected to internal systems can quickly become a privileged digital identity inside the organization.
4. Internal Models and AI Services
Organizations are increasingly deploying internal AI models in cloud platforms, internal infrastructure, data science pipelines, and AI experimentation environments. Security teams often discover these models only after they are already in production.
The Real Risk: AI + Data + Access
The biggest risk from Shadow AI isn't simply the presence of AI tools. It's the interaction between AI systems and sensitive enterprise data.
A typical risky scenario: an employee installs an AI extension, it accesses internal documents, sensitive customer information is included in prompts, and data is transmitted to an external AI provider. In many organizations, this interaction happens thousands of times per day.
This pattern creates what security teams call toxic combinations: AI systems interacting with sensitive data, identities, and infrastructure in ways that were never intentionally designed.
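One practical guardrail for this scenario is screening prompts before they leave the organization. Below is a minimal sketch in Python; the regex patterns and the redact_prompt helper are illustrative stand-ins for a real DLP or classification engine, not a complete control:

```python
import re

# Hypothetical patterns for illustration only; production systems would use
# a dedicated DLP/classification engine rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive values before a prompt leaves the organization."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt, findings

safe_prompt, findings = redact_prompt(
    "Summarize this note for jane.doe@example.com, SSN 123-45-6789."
)
print(findings)     # ['email', 'ssn']
print(safe_prompt)  # Summarize this note for [REDACTED:email], SSN [REDACTED:ssn].
```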
Why Most Security Tools Can't See AI Risk
Traditional security platforms were not designed for AI ecosystems. EDR covers endpoints. CASB covers SaaS. CSPM covers cloud infrastructure. AppSec covers application vulnerabilities.
AI systems operate across all of these environments simultaneously. A single AI workflow might involve a browser extension, an external model provider, internal APIs, enterprise data, and cloud infrastructure. Security teams may see fragments of this activity in different tools — but they rarely see the full picture.
The Three Questions CISOs Are Now Asking
1. What AI exists inside our organization? This includes models, agents, extensions, APIs, and internal AI services. Many organizations discover hundreds or thousands of AI assets once they begin investigating.
2. What data and systems can these AI tools access? A model connected to public data may present minimal risk. A model connected to customer data or financial systems may represent a major exposure.
3. Which AI risks actually matter? Security teams already face alert overload. What CISOs need is prioritization based on real business risk, not just technical vulnerabilities.
A New Approach to AI Security
To manage Shadow AI effectively, organizations need to discover AI everywhere — models, agents, extensions, and AI-enabled services across code, cloud infrastructure, enterprise tools, and developer environments.
They must map the AI ecosystem, understanding relationships between models, data sources, APIs, identities, and infrastructure. Without this visibility, it's impossible to understand attack paths or exposure chains.
And they must prioritize real risk. The most dangerous scenarios typically involve toxic combinations of sensitive data, privileged identities, exposed interfaces, and vulnerable dependencies.
The Future of Security Is AI Ecosystem Security
Organizations are no longer securing only infrastructure, applications, and endpoints. They must now secure entire AI ecosystems — models, agents, data pipelines, APIs, identities, and automation systems.
AI innovation is moving fast. Security cannot afford to slow it down — but it also cannot afford to operate blindly. The organizations that succeed will be those that gain visibility into how AI actually operates across their enterprise.
Because the first step to controlling AI risk is simple: you must be able to see it.

From Telemetry to Action: What Real Enterprise AI Usage Data Reveals About Risk
Security leaders don't need more speculation about what could happen. They need visibility into what is happening — how employees, developers, and systems interact with AI in daily workflows.
AI adoption inside the enterprise is accelerating at a pace few security teams expected.
In the past year alone, organizations have introduced AI tools across nearly every function: software development, marketing, customer support, finance, and operations. From large language model APIs to AI-powered copilots and autonomous agents, the enterprise technology stack is quickly becoming an AI ecosystem.
But for security leaders, one question remains difficult to answer: How is AI actually being used inside our organization?
AI Adoption Is Happening Faster Than Governance
Most organizations did not plan for the speed of AI adoption. Unlike traditional enterprise software, AI tools are often introduced bottom-up.
Developers experiment with new model APIs. Teams install AI browser extensions. Business units adopt AI copilots for productivity. Many of these tools can be deployed in minutes. Security reviews, governance policies, and architecture reviews rarely move that fast.
The result is an environment where AI adoption spreads organically across the organization, often without centralized oversight.
What Enterprise AI Telemetry Shows
When organizations begin mapping their AI usage, several patterns quickly emerge.
1. The Number of AI Tools Is Much Higher Than Expected
Most security teams initially assume their organization uses a small number of AI platforms. In reality, once discovery begins, organizations commonly uncover dozens of AI browser extensions, multiple LLM APIs used by developers, internal AI models running in experimentation environments, and AI-powered SaaS tools embedded in existing platforms.
In some enterprises, security teams discover hundreds of AI-enabled services interacting with enterprise systems.
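Discovery often begins with something as simple as scanning code repositories for LLM SDK usage. The sketch below illustrates the idea; the marker list is deliberately incomplete and the scan_repo helper is hypothetical:

```python
import re
from pathlib import Path

# Marker patterns for common LLM SDKs and endpoints; deliberately incomplete.
AI_MARKERS = [
    re.compile(r"^\s*(?:import|from)\s+(?:openai|anthropic|google\.generativeai)\b",
               re.MULTILINE),
    re.compile(r"api\.openai\.com|api\.anthropic\.com"),  # direct HTTP calls
]

def scan_repo(root: str) -> dict[str, list[str]]:
    """Walk a checkout and flag Python files that appear to call LLM APIs."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        matched = [m.pattern for m in AI_MARKERS if m.search(text)]
        if matched:
            hits[str(path)] = matched
    return hits

if __name__ == "__main__":
    for file, markers in scan_repo(".").items():
        print(file, markers)
```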
2. AI Usage Is Distributed Across the Entire Organization
AI is not confined to engineering teams. Marketing teams use AI to generate campaigns. Customer support teams deploy AI assistants. Sales teams use AI tools to research accounts. Operations teams use AI to automate workflows.
Each use case introduces new AI systems interacting with enterprise data. From a security perspective, this creates a challenge: AI adoption is decentralized.
3. AI Identities Are Growing Rapidly
One of the most overlooked aspects of enterprise AI adoption is the rise of non-human identities. AI agents accessing internal APIs, models querying enterprise databases, and automation systems triggering workflows — each represents a digital identity operating inside the organization.
In many environments, these AI identities accumulate permissions over time, often without the same governance applied to human accounts.
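Auditing these identities can start small. The sketch below assumes a hypothetical JSON export of non-human identities; the schema, the scope syntax, and the 90-day rotation threshold are all illustrative assumptions:

```python
import json
from datetime import datetime, timedelta, timezone

# Hypothetical export format: one record per non-human identity. Real
# sources would be your IdP, cloud IAM, or secrets manager inventory.
identities = json.loads("""[
  {"name": "support-agent", "kind": "ai_agent",
   "scopes": ["crm:read", "crm:write", "tickets:*"],
   "last_rotated": "2024-01-15T00:00:00+00:00"}
]""")

MAX_KEY_AGE = timedelta(days=90)

def audit(identity: dict) -> list[str]:
    """Flag common hygiene gaps for AI service identities."""
    issues = []
    if any(scope.endswith("*") for scope in identity["scopes"]):
        issues.append("wildcard scope")
    rotated = datetime.fromisoformat(identity["last_rotated"])
    if datetime.now(timezone.utc) - rotated > MAX_KEY_AGE:
        issues.append("credential not rotated in 90 days")
    return issues

for ident in identities:
    print(ident["name"], audit(ident))  # support-agent ['wildcard scope', ...]
```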
4. Sensitive Data Is Frequently Involved
Another common discovery is the frequency with which AI tools interact with sensitive enterprise data — internal documents, customer records, financial data, intellectual property, and product roadmaps.
Many employees use AI tools to summarize documents, generate reports, or analyze datasets. In some cases, this data is transmitted to external AI services without clear visibility.
The Gap Between Visibility and Action
Discovering AI usage is only the first step. The real challenge is determining which of these risks actually matter.
Large enterprises may identify thousands of AI-related findings — exposed model endpoints, unapproved AI tools, vulnerable dependencies, data access risks, identity misconfigurations. If every issue receives equal priority, security teams quickly become overwhelmed.
This is the same problem organizations faced during the early days of cloud security. Thousands of alerts were generated — but few were tied to real business impact.
Why AI Security Requires Context
Not every AI system represents the same level of risk. A chatbot analyzing public marketing data presents a very different risk profile than an AI system connected to production customer records.
Understanding AI risk requires evaluating what data the system can access, what permissions it has, where it runs, whether it is exposed externally, and how it connects to other systems. The most dangerous scenarios involve combinations of conditions — what security teams call toxic combinations.
Turning AI Telemetry Into Risk Intelligence
Organizations need to discover AI assets — models, agents, and AI-enabled tools across code repositories, developer environments, cloud infrastructure, SaaS applications, and employee endpoints.
They must map relationships between models, data sources, APIs, identities, and infrastructure. A model connected to sensitive data may be safe if it operates within a secure environment. But if that same model also has internet exposure, weak authentication, and privileged access, the risk profile changes dramatically.
And they must score risk in business context — evaluating data sensitivity, identity permissions, exposure level, regulatory implications, and operational dependencies.
Moving From Awareness to Control
AI will continue transforming how organizations operate. Security teams cannot — and should not — attempt to stop this innovation.
But they must ensure that AI adoption occurs with visibility, governance, and control. The organizations that succeed will be those that move beyond basic discovery and develop the ability to understand how AI systems interact with enterprise data, prioritize risks based on real business context, and enable AI innovation while maintaining security discipline.
In other words, they will move from telemetry to action.

MCP Servers, AI Agents, and Browser Extensions: The Hidden AI Attack Surface Your Security Stack Can't See
AI agents, MCP servers, and browser extensions are quietly creating a new enterprise attack surface — one that most security stacks were never designed to monitor.
The Rise of AI Agents
Enterprise AI adoption is entering a new phase. The first wave focused on chat interfaces and copilots. But a second wave is now emerging: AI systems that act on behalf of humans.
These systems take the form of AI agents that automate workflows, browser extensions that embed AI into daily work, MCP servers that connect models to enterprise systems, and AI-powered automation frameworks.
Together, these technologies are quietly creating a new enterprise attack surface — one that most security stacks were never designed to monitor.
Unlike traditional scripts or bots, AI agents can interpret instructions, reason through tasks, and interact with multiple systems. They retrieve information from internal databases, query APIs, update records in SaaS systems, analyze documents, and trigger operational workflows. An AI agent with access to internal systems effectively becomes a new digital identity inside the organization.
Introducing MCP: The New Connectivity Layer for AI
A growing number of AI systems are now using Model Context Protocol (MCP) to connect models with tools and data sources. MCP allows AI models to interact with external systems in a standardized way.
Instead of building custom integrations for every tool, developers can expose enterprise services through MCP servers. These servers can provide access to internal APIs, databases, SaaS platforms, file storage systems, and automation workflows.
From the model's perspective, these systems become available tools. From a security perspective, MCP creates a powerful — but potentially risky — connectivity layer.
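To make this concrete, here is a minimal MCP server sketch using the FastMCP helper from the official Python SDK; the customer-lookup tool is hypothetical, and the takeaway is how little code it takes to expose an internal system as a model-callable tool:

```python
# Minimal MCP server sketch built on the official Python SDK's FastMCP
# helper. The lookup_customer tool is a hypothetical example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-tools")

@mcp.tool()
def lookup_customer(customer_id: str) -> str:
    """Return the CRM record for a customer."""
    # A real server would query an internal database or API here --
    # which is exactly why tool scope and authentication need review.
    return f"record for {customer_id}"

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```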
MCP Servers: The New AI Infrastructure
In many organizations, MCP servers are now emerging as a critical part of the AI infrastructure stack. They function as intermediaries between AI systems and enterprise resources.
A typical architecture: an AI agent receives a request, the model determines which tools are needed, it interacts with MCP servers to access those tools, and the MCP server retrieves data or triggers actions in enterprise systems.
This architecture is powerful because it enables models to interact dynamically with the enterprise environment. But if MCP servers are misconfigured, an AI system may gain access to resources far beyond what developers originally intended.
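The agent side of that loop can be sketched just as compactly. In the toy example below the model is a scripted stub and the tool is a local function, but the control flow is the point: the model, not the user, decides which tool runs:

```python
# Schematic agent loop (client side), heavily simplified: fake_model is a
# scripted stub standing in for a real LLM call, and the tool is local
# rather than reached through an MCP client.
def fake_model(context: str, tools: list[str]) -> dict:
    if "->" not in context:  # no tool result yet: ask for the lookup tool
        return {"type": "tool", "tool": "lookup_customer",
                "arguments": {"customer_id": "c-42"}}
    return {"type": "final", "text": "done"}

def lookup_customer(customer_id: str) -> str:
    return f"record for {customer_id}"  # would hit a real system via MCP

def run_agent(request: str) -> str:
    tools = {"lookup_customer": lookup_customer}
    context = request
    for _ in range(5):  # bound the loop; unbounded agents are their own risk
        action = fake_model(context, list(tools))
        if action["type"] == "final":
            return action["text"]
        # The tool executes with the server's permissions, not the user's.
        result = tools[action["tool"]](**action["arguments"])
        context += f"\n{action['tool']} -> {result}"
    return "stopped: step limit reached"

print(run_agent("Summarize account c-42"))
```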
The Overlooked Role of Browser Extensions
At the same time that AI agents and MCP servers are expanding automation capabilities, another AI surface is quietly growing: AI browser extensions.
Employees are increasingly installing AI-powered extensions that summarize emails, analyze documents, draft responses, research topics, and extract insights from web content. These tools often request permissions such as reading page content, accessing browser data, and interacting with enterprise SaaS platforms.
In many organizations, these extensions are deployed without security review.
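A first review step can be as simple as flagging broad grants in an extension's manifest. The sketch below checks a Manifest V3 manifest.json; the specific permission lists are illustrative judgment calls, not an authoritative policy:

```python
import json
from pathlib import Path

# Manifest V3 grants that warrant review when combined with AI features;
# an illustrative shortlist, not a complete policy.
RISKY_HOST_PATTERNS = {"<all_urls>", "*://*/*"}
RISKY_PERMISSIONS = {"tabs", "history", "cookies", "scripting", "webRequest"}

def review_manifest(path: Path) -> list[str]:
    """Flag broad grants in a Chrome/Edge extension manifest.json."""
    manifest = json.loads(path.read_text())
    flags = []
    if RISKY_HOST_PATTERNS & set(manifest.get("host_permissions", [])):
        flags.append("can read every page the user visits")
    risky = RISKY_PERMISSIONS & set(manifest.get("permissions", []))
    if risky:
        flags.append(f"requests {sorted(risky)}")
    return flags

# Hypothetical path to an unpacked extension under review.
print(review_manifest(Path("extension/manifest.json")))
```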
When These Systems Combine
Individually, AI agents, MCP servers, and browser extensions may appear manageable. The real complexity arises when these systems interact.
Consider a typical workflow: a user installs an AI browser extension, the extension connects to an AI agent platform, the agent uses MCP servers to access enterprise systems, and data from internal tools is retrieved and processed by external models.
At each step, new permissions and connections are introduced. Without proper visibility, security teams cannot answer which AI agents exist, what tools they can access, which MCP servers connect to enterprise systems, or what data is flowing through these workflows.
Why Traditional Security Tools Miss This
Most enterprise security tools were built for earlier technology models — monitoring endpoints, applications, cloud infrastructure, and SaaS platforms. AI ecosystems do not fit neatly into these categories.
An AI agent may run in a cloud container, connect to an MCP server in a developer environment, interact with SaaS APIs and enterprise databases, while employees interact with it through browser extensions. Each component may appear in a different security tool, but no single platform sees the entire AI workflow.
The Real Risk: Privileged AI Systems
The most significant risks involve privileged AI systems — agents that can retrieve sensitive information, modify enterprise data, trigger operational workflows, and interact with infrastructure services.
If such an agent also connects to external model providers, the organization may have limited visibility into how information is processed. Similarly, MCP servers may expose internal capabilities never intended to be accessible through AI systems.
Securing the AI Attack Surface
Organizations must discover AI resources — models, agents, MCP servers, extensions, and AI-enabled applications across both code and cloud environments.
They must map the AI ecosystem, understanding how agents interact with MCP servers, how models access enterprise data, how extensions connect to AI services, and how identities control access to AI workflows.
And they must prioritize high-risk combinations — AI agents with privileged access, MCP servers exposed to the internet, models connected to sensitive datasets, and vulnerable dependencies in AI services.
The enterprise attack surface is expanding beyond traditional applications and infrastructure. Security leaders must begin viewing these systems as first-class components of their security architecture. Because in the age of AI-driven automation, the question is no longer simply 'What software is running in our environment?' It is: 'What autonomous systems are acting inside our enterprise?'

A Practical Playbook for CISOs to Govern AI Without Slowing the Business
Blocking AI adoption is not realistic. The challenge for CISOs is not stopping AI — it is governing it intelligently with a structured five-step framework.
Artificial intelligence is moving into the enterprise faster than almost any technology before it. Developers are integrating models into applications. Business teams are adopting AI assistants. Autonomous agents are beginning to automate workflows.
Across industries, leaders are asking the same question: How do we secure AI without slowing down innovation?
Blocking AI adoption is not realistic. Employees will continue experimenting with new tools, and developers will continue building AI-powered systems. The challenge for CISOs is not stopping AI. It is governing it intelligently.
Why Traditional Governance Models Fail
Most enterprise governance models were designed for technologies that evolve slowly. New systems were introduced through formal procurement processes, architecture reviews, and deployment approvals.
AI adoption doesn't follow that pattern. Today, AI tools can appear through browser extensions, SaaS platforms, developer frameworks, APIs, and AI agents. Many can be deployed in minutes while security review cycles take weeks.
By the time governance processes begin, AI systems may already be embedded in operational workflows.
The CISO's New Role in the Age of AI
Historically, security leaders were seen as gatekeepers. In the AI era, this model no longer works. Innovation is happening too quickly and too broadly.
Instead of acting as gatekeepers, CISOs must evolve into strategic enablers of safe AI adoption — helping organizations answer: Where is AI being used? What risks does it introduce? How do we manage those risks without slowing the business?
A Five-Step Framework for AI Governance
Organizations that successfully manage AI risk typically follow a governance model built around five core capabilities.
Step 1: Discover AI Across the Enterprise
The first step in governing AI is simple: you must know where AI exists. This includes identifying AI usage across developer environments, cloud infrastructure, SaaS platforms, employee endpoints, internal AI services, and external AI APIs.
In many organizations, this discovery process reveals far more AI activity than expected — dozens of AI-enabled SaaS tools, internal model experimentation environments, AI-powered browser extensions, and agents connected to internal APIs.
Without this visibility, governance is impossible. You cannot secure what you cannot see.
Step 2: Understand AI Access to Data and Systems
Once AI assets are identified, the next step is understanding what they can access — internal documents, enterprise databases, SaaS applications, APIs, cloud infrastructure, and automation systems.
Understanding these relationships helps answer: Which AI systems can access sensitive data? Which AI identities have privileged permissions? Which systems interact with external model providers?
Step 3: Map the AI Ecosystem
AI systems rarely operate in isolation. A single AI workflow may involve a model, a data source, an API, an automation service, and an identity controlling access.
A model connected to a database may appear safe on its own. But if that same model is exposed through an API and accessed by an external agent, the risk profile changes significantly. Mapping these relationships creates a clearer picture of the AI ecosystem.
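In practice, this mapping is a graph problem. The toy sketch below uses networkx; the node names, edge labels, and the "internet-reachable to regulated data" query are illustrative assumptions:

```python
import networkx as nx  # any graph or property-graph store works equally well

# A toy slice of an AI ecosystem map; node and edge names are illustrative.
g = nx.DiGraph()
g.add_edge("internet", "support-agent", relation="reaches")
g.add_edge("support-agent", "mcp-crm-server", relation="calls")
g.add_edge("mcp-crm-server", "customers-db", relation="reads")
g.nodes["customers-db"]["sensitivity"] = "regulated"

# The mapping question: can anything internet-reachable reach regulated data?
for node, attrs in g.nodes(data=True):
    if attrs.get("sensitivity") == "regulated" and nx.has_path(g, "internet", node):
        print("exposure chain:", " -> ".join(nx.shortest_path(g, "internet", node)))
```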
Step 4: Prioritize Real Business Risk
Not every AI issue requires immediate attention. Security teams must prioritize AI risks based on business context — data sensitivity, identity permissions, internet exposure, regulatory requirements, and operational impact.
The most dangerous scenarios often involve toxic combinations: AI systems with privileged access to sensitive data, exposed model endpoints connected to internal resources, vulnerable dependencies in AI workloads, and automation agents interacting with production systems.
Step 5: Apply Guardrails Without Blocking Innovation
Once high-priority risks are identified, organizations must implement appropriate controls that enable safe AI usage rather than restrict innovation.
Policy controls define approved AI tools, data usage guidelines, and access permissions. Technical guardrails include monitoring AI usage, enforcing identity permissions, restricting access to sensitive datasets, and auditing AI interactions.
And continuous monitoring ensures governance remains effective as new models, tools, and integrations appear.
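A policy guardrail can start as a simple allow/deny decision at the point of use. In the sketch below, the tool names, sensitivity tiers, and policy table are illustrative assumptions:

```python
# Minimal policy-guardrail sketch: approve or block an AI tool call based
# on the tool's approval status and the sensitivity of the data involved.
APPROVED_TOOLS = {"internal-copilot": "confidential", "public-chatbot": "public"}
TIERS = ["public", "internal", "confidential", "regulated"]

def allow(tool: str, data_tier: str) -> bool:
    """Allow the call only if the tool is approved for this data tier."""
    max_tier = APPROVED_TOOLS.get(tool)
    if max_tier is None:
        return False  # unapproved tool: block and route to security review
    return TIERS.index(data_tier) <= TIERS.index(max_tier)

assert allow("internal-copilot", "internal")
assert not allow("public-chatbot", "confidential")
assert not allow("unknown-extension", "public")
```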
The Goal: Enable Safe AI Innovation
The purpose of AI governance is not to slow progress. It is to enable organizations to adopt AI confidently.
Companies that successfully implement these practices reduce the risk of data exposure, provide leadership with greater assurance, empower teams to innovate while maintaining security discipline, and build the trust required to scale AI across the organization.
The CISOs who succeed will be those who move early to establish visibility, context, and risk prioritization across their AI environments. Because in the AI era, governance is no longer about stopping innovation. It is about making innovation safe.

Why AI Risk Requires Classification and Scoring — Not Just Alerts
Without a structured way to classify and score AI risk, security teams risk repeating the mistakes of the early cloud era: overwhelming noise with very little actionable insight.
Security teams are used to alerts. Over the past decade, organizations have deployed dozens of security tools designed to detect threats, vulnerabilities, and misconfigurations. These tools generate thousands — or sometimes millions — of signals every day.
The problem has never been a lack of alerts. The problem has always been understanding which ones actually matter.
Now, as artificial intelligence spreads across enterprise environments, the same challenge is emerging again — only this time, the stakes are even higher.
The AI Risk Visibility Problem
As organizations begin discovering AI usage, they encounter an unexpected reality. AI adoption is rarely limited to a handful of projects.
Enterprises typically uncover a rapidly expanding ecosystem: internal machine learning models, external AI APIs, AI agents and automation tools, browser extensions and AI copilots, developer tools integrated with large language models, and data pipelines connected to AI systems.
But not every AI system represents the same level of risk. A chatbot analyzing public marketing content does not present the same exposure as an AI model connected to customer financial data.
Why AI Risk Is Different
AI risk is not simply another category of application security.
AI systems interact with data dynamically — through prompts, retrieval systems, and automated actions. This makes it harder to anticipate how data may be accessed or used.
AI systems accumulate permissions over time. AI agents, models, and automation systems often operate through service accounts, tokens, or API credentials that may end up with privileged access to sensitive resources.
AI systems depend on complex supply chains — open-source model packages, third-party APIs, external model providers, container images, and automation frameworks. A vulnerability in one component may impact multiple systems.
The Problem With 'Flat' Security Alerts
When security tools generate alerts without context, they treat each issue independently.
A model endpoint exposed to the internet triggers an alert. A dataset containing sensitive information triggers another. An AI service running with elevated permissions triggers a third.
Viewed individually, each finding may appear manageable. But the true risk may lie in the combination: an exposed model endpoint connected to a sensitive dataset and operating with privileged access represents a very different level of risk.
Introducing AI Risk Classification
To manage AI risk effectively, organizations must begin by classifying AI assets across several dimensions.
AI Asset Type: models, agents, APIs, AI-powered SaaS tools, developer frameworks, and automation services. Each introduces different risk considerations.
Data Sensitivity: from public data to internal operational data, confidential business information, and regulated or personal data. AI systems interacting with sensitive datasets require stronger controls.
Access and Identity Permissions: Does the AI system use a service account? What APIs can it access? Does it interact with production systems?
Exposure Level: Some AI systems operate entirely within internal environments. Others expose APIs to external users or interact with third-party platforms.
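As a sketch, these dimensions can be captured in one record per asset; the field names and four-level sensitivity scale below are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class AIAsset:
    """One AI asset classified along the dimensions described above."""
    name: str
    asset_type: str          # model | agent | api | saas_tool | extension
    data_sensitivity: int    # 0 public .. 3 regulated or personal data
    privileged_access: bool  # service account with write or admin permissions
    internet_exposed: bool
    vulnerable_deps: bool = False
```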
From Classification to Risk Scoring
Classification provides the foundation. But to prioritize effectively, organizations need a risk scoring model that evaluates the combination of factors for each AI asset.
An effective AI risk score considers data sensitivity, identity and access permissions, exposure level, supply chain dependencies, and regulatory implications.
The most dangerous scenarios — toxic combinations — emerge when multiple high-risk factors converge: a model with privileged access to sensitive data, exposed externally, with vulnerable dependencies.
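Building on the classification sketch above, a minimal scoring function might weight each factor and compound the score when a toxic combination converges; the weights and thresholds are illustrative:

```python
def risk_score(asset: AIAsset) -> int:
    """Score one asset; converging high-risk factors compound, not just add."""
    score = (asset.data_sensitivity                  # 0-3
             + (2 if asset.privileged_access else 0)
             + (2 if asset.internet_exposed else 0)
             + (1 if asset.vulnerable_deps else 0))
    # Toxic combination: sensitive data + privilege + exposure together.
    if asset.data_sensitivity >= 2 and asset.privileged_access and asset.internet_exposed:
        score *= 2
    return score

assets = [
    AIAsset("marketing-chatbot", "agent", 0, False, True),
    AIAsset("finance-model", "model", 3, True, True, vulnerable_deps=True),
]
for a in sorted(assets, key=risk_score, reverse=True):
    print(a.name, risk_score(a))  # finance-model 16, marketing-chatbot 2
```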
By scoring these combinations, security teams can focus on the risks most likely to result in real business impact rather than chasing thousands of low-priority alerts.
Building an Operational AI Risk Program
The shift from flat alerts to classification and scoring represents a fundamental evolution in how organizations approach AI security.
Security teams must discover all AI assets across the enterprise, classify them by type, data sensitivity, access level, and exposure, score risk based on the combination of these factors, and continuously monitor as the AI ecosystem evolves.
This approach mirrors the maturity curve organizations followed in cloud security — moving from basic visibility to contextual risk prioritization.
The organizations that adopt this model early will be best positioned to manage AI risk at scale, enabling innovation while maintaining the security discipline that enterprise environments demand.
