Generative AI drives surge in workplace data breaches
Use of generative AI in the workplace has driven a sharp increase in data policy violations, according to new research from Netskope Threat Labs, which reports that GenAI-related data policy violations more than doubled over the past year.
The Cloud and Threat Report: 2026 finds that organisations now record an average of 223 GenAI-linked data policy violations each month. Among the top quartile of organisations, that figure rises to 2,100 incidents per month.
The study draws on anonymised telemetry from millions of users worldwide gathered between October 2024 and October 2025. It examines how the spread of GenAI tools intersects with longer-standing cloud, identity and phishing threats.
Shadow AI persists
Netskope reports that adoption of GenAI tools inside organisations has expanded quickly: the overall number of GenAI application users grew 200% over the past year, while the volume of prompts sent to GenAI services increased by 500%.
The average organisation now sends around 18,000 prompts per month to GenAI tools, up from an average of 3,000 prompts per month the previous year.
Alongside this growth, the research highlights continued reliance on so-called shadow AI: tools that employees use outside formal company approval processes. Netskope states that 47% of GenAI users still access tools through personal, unmanaged accounts, some exclusively and others alongside company-approved services.
Organisations have responded with tighter technical controls on AI tools. The report states that 9 in 10 organisations now actively block at least one GenAI application, up from 80% the previous year, and that the average organisation blocks ten different GenAI tools.
Data exposure risks
The report identifies user uploads of regulated data as the largest category of GenAI-related policy violations. This includes personal, financial and healthcare information.
Uploads of this type account for 54% of all flagged policy violations involving GenAI tools. Other sensitive categories include source code, intellectual property and credentials.
Netskope warns that personal cloud applications play a growing role in insider threat incidents. These services are now involved in 60% of such cases, according to the report. The data involved ranges from regulated information to proprietary business assets.
Ray Canzanese, Director of Netskope Threat Labs, said that GenAI has changed the security landscape for many organisations.
"Enterprise security teams exist in a constant state of change and new risks as organisations evolve and adversaries innovate," said Canzanese. "However, genAI adoption has shifted the goal posts. It represents a risk profile that has taken many teams by surprise in its scope and complexity, so much so that it feels like they are struggling to keep pace and losing sight of some security basics. Security teams need to expand their security posture to be 'AI-Aware', evolving policy and expanding the scope of existing tools like DLP, to foster a balance between innovation and security at all levels."
Phishing still active
Alongside GenAI trends, the report examines phishing activity. Netskope notes some improvement in user awareness. Measured user susceptibility to phishing declined by 27% year on year.
Phishing remains widespread despite this trend. The data shows that 87 out of every 10,000 users still click on a phishing link each month.
Netskope links this continued exposure with the broader cloud and identity landscape. Attackers target cloud services and identity systems as entry points, while users work across a mix of corporate and personal applications.
Policy and control
The research suggests that organisations are combining increased monitoring, stricter access controls and expanded policy enforcement around GenAI and cloud services. The blocking of multiple GenAI tools indicates a preference for selective access to approved applications.
The scale of prompt use and the volume of data transferred into GenAI tools place additional emphasis on governance. Organisations are reviewing which data types employees can submit to AI systems, and under which conditions.
Netskope states that security teams are now adapting their tools and rulesets for an environment in which GenAI applications and personal cloud services are part of daily workflows. The company's data shows that this adjustment is still underway as insider risk and phishing activity continue to feature in incident statistics.
The Cloud and Threat Report: 2026 concludes that GenAI adoption, shadow AI usage and persistent phishing exposure will remain focal points for security planning in the coming year.