
Agentic AI reshapes security risks, business & protest

Tue, 30th Dec 2025

Security specialists and ethics researchers are warning that agentic artificial intelligence systems are reshaping attack surfaces, business models and regulatory priorities as autonomous tools move from experiments into production use.

Their comments follow fresh research into a new exploit technique against AI agents, growing interest in fully automated companies, and forecasts of digital protest against tightly controlled online environments.

The developments highlight how systems that can act on instructions, call tools and trigger real-world changes now sit at the centre of security and governance debates.

Safety controls

Security researchers this month detailed an attack method against agentic AI that they call Lies-in-the-Loop. The technique targets so-called Human-in-the-Loop confirmation prompts in AI agents. These prompts ask a user to review and approve actions such as running code, sending emails or modifying files.

Attackers can shape the text that the model presents to the human reviewer. The visible description can appear harmless while the underlying action triggers arbitrary code execution once the user clicks approve.

The pattern underlines how many AI systems treat a human check as the last barrier between an agent and external systems such as code repositories, databases or cloud infrastructure.
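
To illustrate the pattern, the following simplified Python sketch uses hypothetical function names rather than code from the published research. The weakness is that the text shown to the reviewer comes from the model's own summary instead of being derived from the action that will actually run:

    # Hypothetical Human-in-the-Loop approval step in an agent loop.
    # Names (run_tool, ask_user_to_approve) are illustrative only.
    import subprocess

    def ask_user_to_approve(description: str) -> bool:
        """Show the model-written description and wait for a yes/no answer."""
        answer = input(f"Agent wants to: {description}\nApprove? [y/N] ")
        return answer.strip().lower() == "y"

    def run_tool(tool_call: dict) -> None:
        # VULNERABLE: 'summary' is untrusted model output. It can read
        # "list files in the project directory" while 'command' does
        # something else entirely, and the reviewer only ever sees 'summary'.
        if ask_user_to_approve(tool_call["summary"]):
            subprocess.run(tool_call["command"], shell=True)

A safer prompt would be rendered from the tool call's actual arguments, so the reviewer approves the real action rather than a paraphrase of it.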

James Wickett, Chief Executive Officer at DryRun Security, said organisations misjudge the reliability of these human review steps.

"Human-in-the-loop is often described as a safety net, but in reality, it becomes the primary attack surface. LLM applications repeatedly fail at system boundaries, where untrusted model output can influence real decisions and trigger real actions. Lies-in-the-loop demonstrates how attackers exploit this dynamic by shaping what the model presents to a human approver, turning judgment into a point of leverage rather than protection. When security depends on a person noticing a single misleading sentence, the system is already fragile. Durable defenses have to live outside the chat, with strict least-privilege tool access, enforced policies, and audit trails that remain reliable even when humans are confidently wrong," said Wickett.

DryRun Security focuses on AI-based security review of code changes in software pipelines. Wickett's comments reflect concerns among application security teams that AI agents are creating a new category of machine-driven business logic that can be targeted by attackers.
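
The defences Wickett points to can also be sketched in code. The illustrative Python below uses hypothetical names rather than any specific product's API; it keeps the policy check and the audit trail outside the conversation, so neither depends on what the model told the user:

    # Illustrative controls enforced outside the chat: a tool allowlist,
    # simple argument constraints, and an append-only audit log.
    import json
    import time
    from pathlib import Path

    ALLOWED_TOOLS = {
        "read_file": {},                                     # read-only, narrowly scoped
        "send_email": {"allowed_domains": ["example.com"]},
    }
    AUDIT_LOG = Path("agent_audit.jsonl")

    def authorise(tool: str, args: dict) -> bool:
        """Least-privilege check that ignores the model's description of itself."""
        if tool not in ALLOWED_TOOLS:
            return False
        if tool == "send_email":
            domain = args.get("to", "").rsplit("@", 1)[-1]
            if domain not in ALLOWED_TOOLS["send_email"]["allowed_domains"]:
                return False
        return True

    def record(tool: str, args: dict, allowed: bool) -> None:
        """Append an audit entry whether or not the call was permitted."""
        entry = {"ts": time.time(), "tool": tool, "args": args, "allowed": allowed}
        with AUDIT_LOG.open("a") as fh:
            fh.write(json.dumps(entry) + "\n")

    def call_tool(tool: str, args: dict) -> None:
        allowed = authorise(tool, args)
        record(tool, args, allowed)
        if not allowed:
            raise PermissionError(f"{tool} blocked by policy")
        # ... dispatch to the real tool implementation here ...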

Automated firms

Researchers at the World Ethical Data Foundation (WEDF) expect that experiments with fully automated businesses will intensify as agentic tools become easier to assemble.

The group anticipates that at least one small business will run every operational function, including management, marketing and accounting, through AI systems.

"A company, in the most legitimate small business sense of the word, is revealed to be completely run by AI, from the "owner/director", all the way down. Whatever product or services it provides - printed t-shirts, VPN services, or something else - the entire operation will be automated, including marketing and accounting. The likely outcome is that the company folds within the year, and that subsequent headlines will barely register on the global news circuit."

Cade Diehm, Head of Research at WEDF, said such ventures will likely struggle to sustain themselves in competitive markets. The scenario underscores questions about legal responsibility, employment and transparency where key decisions originate from autonomous systems rather than human managers.

Digital protest

WEDF also forecasts a rise in what it calls digital civil disobedience. The organisation links this trend to frustration with the design and governance of online services.

The group expects users to increasingly circumvent technical controls and policy restrictions that they view as excessive.

"2026 will be the year we begin to witness both intended and unintended digital civil disobedience. For some, this will feel cyberpunk; for others, it will simply be unavoidable. Digital spaces are becoming so unwieldy, autocratic, controlling, and suffocating that circumventing impositions will feel like the only viable option," said John Marshall, Executive Director of the World Ethical Data Foundation.

Marshall said that this behaviour will echo older conceptions of hacking that were grounded in user autonomy.

"Hacking will return to what hacking was always meant to be: making computers do what people need them to do, rather than submitting to the constraints baked in by operating systems, big tech, and increasingly heavy-handed governments. Even if the numbers remain small at first, this shift will become visible in Western technology cultures, much as it has long been present in more oppressive regimes," said Marshall.

The projection suggests that policy makers, platform operators and security teams may see more boundary-testing behaviour that relies on AI tools and automation.

Policy pressure

Agentic AI is also becoming a live topic for governance specialists. Developers now link large language models to tools that can fetch data, take actions and coordinate tasks, which expands the potential for knock-on impacts across systems.

Anna Babkina, Head of Partnerships at WEDF, said the shift will create additional pressure on regulators.

"Agentic AI is about to make governance and policymaking even more complicated. With policy already lagging far behind AI development, autonomous systems will force a shift from model-centric oversight to an ecosystem governance.

"We need to understand the contexts these agents inhabit: what information they can access, what tools they can call, what boundaries shape their behaviour, and what values or constraints are or aren't embedded in those interactions. Right now, policymakers don't have that lens. We lack even the first frameworks for describing, let alone governing, agentic behaviour at scale.

"At WEDF we are currently working on including an agentic AI domain into our Open Standard for Responsible AI - which is essentially an open source governance model that captures the feedback and suggestions in real time across 80 countries. Agility is a key here, as these systems are getting built with insane pace," said Babkina.
