SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers

AI agents blur human access lines in enterprise systems

Wed, 25th Mar 2026

Research published by the Cloud Security Alliance found that many organisations cannot clearly distinguish between the actions of AI agents and those of humans, pointing to broad weaknesses in how companies manage identity and access for autonomous software.

The study, commissioned by Aembit, found that 68% of organisations could not clearly tell human activity apart from AI agent activity, even though 73% expected AI agents to become vital within the next year. It also found that 85% already use AI agents in production environments rather than limiting them to test settings.

The findings suggest adoption has moved ahead of the controls meant to govern access. Respondents reported using AI agents across a range of business and technical tasks, including task automation, research, developer assistance, and security or monitoring work.

Task automation agents were the most widely used, cited by 67% of respondents. Research agents were used by 52%, while developer-assist agents and security or monitoring agents were each cited by 50%.

Identity Gaps

A central finding of the report is that AI agents often operate without a clear identity model. More than half of organisations, 52%, said they use workload identities for agents, while 43% rely on shared service accounts and 31% allow agents to operate under human user identities. The figures sum to more than 100%, as many organisations use more than one approach.

That mix raises the risk of software agents receiving more access than they need. Nearly three-quarters of respondents, 74%, said agents often get more access than necessary, and 79% said agents create new access paths that are difficult to monitor. Another 52% said agents at least sometimes inherit access intended for people or other systems.

Hillary Baron, AVP of Research at Cloud Security Alliance, said the findings show that established identity and access management approaches are coming under pressure as AI systems take on more independent roles.

"AI agents are already embedded within enterprise environments, and as these systems take on more autonomous roles, organizations must address new challenges around identity and access," Baron said. "The survey data indicates that existing IAM approaches were not designed for autonomous agents and are showing strain as deployments scale."

Split Ownership

The research also found that responsibility for AI agent identity and access is spread across multiple teams rather than assigned to one clear owner. Security teams were named as the lead by 28% of respondents, followed by development or engineering at 21% and IT at 19%.

Only 9% said identity and access management teams were the main owner. That fragmented model can make it harder to apply consistent rules, respond quickly to incidents, and maintain clear accountability.

On paper, many organisations appeared confident in their control over AI agents. Some 57% reported moderate or high confidence in identity scoping, but the survey also exposed gaps in underlying operating practices.

One-third of organisations said they did not know how often AI agent credentials were rotated. A further 32% were unsure how much time was needed to implement and maintain authentication or credential handling for a typical AI agent, while only 22% said access frameworks were applied very consistently to AI agents.

Containment Focus

The data suggests many organisations rely on broad containment measures when something goes wrong rather than tightly defined identity controls. The most common response, cited by 49%, was disabling identities or revoking tokens, while 42% said they terminate the compute environment where an agent is running.

Only 33% said they remove or modify access policies in real time. This points to a heavier reliance on blunt containment measures than on direct, identity-based enforcement at the point of access.

Aembit, which works in identity and access management, funded the research and helped develop the questionnaire with Cloud Security Alliance analysts. The online survey gathered 228 responses from IT and security professionals across organisations of different sizes and locations. Cloud Security Alliance analysts carried out the data analysis and interpretation.

David Goldschlag, co-founder and CEO of Aembit, said the results reflect a mismatch between current tools and how AI agents are now being used inside companies.

"AI agents are inheriting human permissions, operating under shared accounts, and expanding the attack surface in ways that existing IAM tools weren't designed to handle," Goldschlag said. "The survey makes the stakes clear: Agentic autonomy without identity-level access controls is a risk organizations can't afford to ignore."