Agentic AI raises new cybersecurity and privacy risks
Artificial intelligence systems that act autonomously are introducing new cybersecurity and data privacy risks as organisations adopt agentic architectures across their operations.
Experts in law and technology say the shift from traditional software models to AI-driven systems changes how control is exercised within applications. This alters how vulnerabilities emerge and how they must be managed.
Agentic AI systems differ from earlier software because they operate with a degree of autonomy. They can execute tasks, interact with tools, and make decisions based on natural language inputs.
"What's changed now in the agentic world and with the AI capabilities that software has is that locus of control, at least in part, can now be shifted to inside the automated system, inside the software itself," said Steven B. Roosa, Head of Digital Analytics and Technology Assessment Platform, Norton Rose Fulbright.
This change moves decision-making closer to the system rather than relying solely on predefined human instructions. Developers now combine code with natural language prompts, which introduces variability in system behaviour.
"Now we have code and natural language controlling how software systems operate," said Roosa.
Attack surface
The structure of agentic AI systems creates new points of vulnerability. These systems rely on multiple layers, including prompting, tooling, data sources, and backend infrastructure.
A key risk lies in how systems process external inputs within their context windows. These inputs may include web content, database queries, or user instructions.
"When you start combining instructions with data such that data can be confused for instructions, intentionally or inadvertently, you can get unintended system behavior," said Roosa.
This mechanism resembles known attack methods such as injection attacks, where malicious inputs alter system behaviour. In agentic systems, compromised prompts can trigger a chain of unintended actions across connected tools and systems.
Examples include unauthorised communications or changes to enterprise records. The risk increases as systems integrate more tools and datasets across environments.
Exploit chains
The ability of agentic systems to execute tasks across multiple layers creates conditions for exploit chains. A single compromised input can propagate through different parts of the system.
In such cases, an attacker does not need direct access to backend systems. Instead, they can manipulate inputs that the AI interprets as instructions.
This risk is compounded by the speed at which vulnerabilities can be identified and exploited. Early instances have shown that newly released AI systems can have guardrails bypassed within a short period.
Trust boundaries
To address these risks, organisations are being advised to define strict operational limits for AI systems.
"What am I going to trust this system to do and what is it forbidden to do? What data is it able to see and what data is it able to write?" said Roosa.
These boundaries determine how systems interact with sensitive data and external tools. They also define which actions require explicit validation.
A key recommendation is to avoid relying solely on natural language inputs for executing actions. Systems should include deterministic controls that govern when and how tools can be used.
"We don't want natural language to be the sole basis upon which a tool takes an action, simply because the prompting layer at some point will become potentially unreliable or misfired," said Roosa.
Limiting the scope of tasks assigned to AI systems also reduces ambiguity and potential misuse. Smaller, well-defined tasks help constrain system behaviour.
Vendor scrutiny
Many organisations are expected to adopt agentic AI through third-party providers. This increases the importance of vendor assessment and due diligence.
"The security review is going to be intense, requiring knowledge of how the system plumbing works and understanding and imagination about how it could be weak or exploited," said Marcus Evans, Head of Information Governance, Privacy and Cybersecurity, EMEA, Norton Rose Fulbright.
Evaluating vendor controls requires technical understanding of how AI systems process inputs and execute actions. Organisations must assess whether safeguards are in place to prevent unauthorised access to data and tools.
"Given the ambiguity of natural language, does it allow a natural language prompt alone to trigger access to sensitive tools or data? Much better if there's a confirmation box or a button before such access is given," said Evans.
Regulatory divide
The adoption of agentic AI is taking place within a fragmented regulatory landscape. Different regions are applying varying levels of oversight and enforcement.
In Europe, regulatory frameworks such as the EU AI Act and GDPR impose strict requirements on data protection and system accountability. In contrast, the United States relies on a mix of state-level regulations.
Asia presents a different approach, with a focus on building capability rather than enforcing detailed obligations.
"In Asia, it's really about building capacity as opposed to substantive obligations. But it's not as if the use of AI is not subject to legal restrictions and existing privacy legislation. Legislation relating to online harms and other safety requirements continue to apply. Agentic AI raises further legal risk from application of general law," said Jeremy Lua, Counsel, Singapore, Norton Rose Fulbright Asia.
This variation creates challenges for organisations operating across jurisdictions. They must align their AI systems with multiple regulatory expectations while managing operational risks.
The combination of evolving technology and uneven regulation places responsibility on organisations to implement internal controls that address both security and compliance requirements.