SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers
Survey finds AI agents pose rising security risks for firms

A new research report from SailPoint highlights a rapid increase in the adoption of AI agents across organisations and identifies associated security concerns.

The report, titled 'AI agents: The new attack surface. A global survey of security, IT professionals and executives', reveals that 82% of organisations already utilise AI agents. However, only 44% have policies in place to secure these systems. Despite growing apprehension, with 96% of technology professionals viewing AI agents as an increasing risk, 98% of organisations intend to expand their use of them over the next year.

AI agents, also described as agentic AI, refer to autonomous systems capable of perceiving, making decisions, and taking actions to achieve particular objectives. These agents frequently require several machine identities to access data, applications, and services. Their ability to modify themselves and potentially create sub-agents introduces additional complexity.

According to the survey, 72% of organisations perceive AI agents as presenting a greater security risk than other machine identities. Respondents noted several key factors contributing to these risks, including the agents' ability to access privileged data (60%), potential to take unintended actions (58%), sharing sensitive data (57%), making decisions based on inaccurate or unverified information (55%), and accessing or sharing inappropriate content (54%).

Chandra Gnanasambandam, Executive Vice President of Product and Chief Technology Officer at SailPoint, commented, "Agentic AI is both a powerful force for innovation and a potential risk. These autonomous agents are transforming how work gets done, but they also introduce a new attack surface. They often operate with broad access to sensitive systems and data, yet have limited oversight. That combination of high privilege and low visibility creates a prime target for attackers. As organisations expand their use of AI agents, they must take an identity-first approach to ensure these agents are governed as strictly as human users, with real-time permissions, least privilege and full visibility into their actions."

The report indicates that AI agents currently have access to highly sensitive information, including customer data, financial records, intellectual property, legal documents, and supply chain transactions. There is widespread concern about the ability to control what information these AI agents can access or share. An overwhelming 92% of respondents believe that governing AI agents is critical for enterprise security.

Security incidents involving AI agents have already occurred: 23% of respondents said their AI agents had been deceived into revealing access credentials, and 80% of organisations reported that their agents have carried out unintended actions. Specific instances include accessing unauthorised systems or resources (39%), accessing or sharing sensitive data (31%), accessing or sharing inappropriate data (33%), and downloading sensitive content (32%).

The report emphasises that AI agents must be treated as a distinct type of identity within organisational systems. With nearly all organisations planning to grow their use of agentic AI in the coming year, the need for comprehensive identity security solutions is apparent. Such solutions should offer unified visibility, enforce zero standing privilege, enable the discovery of all AI agents in an environment, and allow for thorough auditing to support regulatory compliance and bolster enterprise security.
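The governance model the report describes, in which each agent is registered as a distinct identity, holds no standing privileges by default, and has every access decision audited, can be illustrated with a minimal sketch. All names and structures below are hypothetical for illustration and do not represent SailPoint's product or API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: each AI agent is a distinct identity with an
# explicit, minimal set of granted scopes (least privilege), and every
# access decision is recorded for later audit.

@dataclass
class AgentIdentity:
    agent_id: str
    granted_scopes: set = field(default_factory=set)  # empty = zero standing privilege
    audit_log: list = field(default_factory=list)

    def request_access(self, scope: str) -> bool:
        """Check a scoped access request and record the decision."""
        allowed = scope in self.granted_scopes
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "scope": scope,
            "allowed": allowed,
        })
        return allowed

# An agent with no standing privileges is denied by default.
agent = AgentIdentity(agent_id="invoice-summariser-01")
print(agent.request_access("crm:customer-records"))   # denied

# Access is granted explicitly and narrowly, and every check is auditable.
agent.granted_scopes.add("finance:read-invoices")
print(agent.request_access("finance:read-invoices"))  # allowed
print(len(agent.audit_log))                           # both decisions logged
```

The design choice the report implies is the default-deny posture: an agent's permissions start empty, grants are scoped rather than broad, and the audit trail supports the compliance and visibility requirements mentioned above.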

The survey was conducted by Dimensional Research and included 353 participants globally. All respondents held responsibilities for enterprise security, with representation from IT professionals involved in AI, security, identity management, compliance, and operations across various seniority levels and five continents.
