
Guardian agents set to secure up to 15% of agentic AI market by 2030

Guardian agent technologies are forecast to represent between 10% and 15% of the agentic AI market by 2030, according to Gartner.

Guardian agents are artificial intelligence-based systems developed to ensure interactions with AI remain trustworthy and secure. These technologies may function as AI assistants, helping users with tasks such as content review, monitoring, and analysis.

They also have the potential to act as semi-autonomous or fully autonomous agents, capable of forming and executing action plans, and redirecting or blocking actions to stay in line with specific agent objectives.

Growth expected for AI oversight tools

In May 2025, Gartner polled 147 Chief Information Officers and IT function leaders. It found that 24% had already deployed AI agents, though fewer than a dozen of them, while 4% had deployed more than a dozen. A further 50% reported they were researching and experimenting with the technology, and 17% said they had not started deployment but planned to do so by the end of 2026.

As usage increases, Gartner suggests there is an accelerating need for tools like guardian agents that can deliver automated trust, risk, and security controls, keeping agents aligned and safe.

"Agentic AI will lead to unwanted outcomes if it is not controlled with the right guardrails," said Avivah Litan, VP Distinguished Analyst at Gartner. "Guardian agents leverage a broad spectrum of agentic AI capabilities and AI-based, deterministic evaluations to oversee and manage the full range of agent capabilities, balancing runtime decision making with risk management."

Threat landscape

The Gartner poll also found that 52% of respondents currently deploy or plan to deploy AI agents for internal administration tasks, including IT, human resources, and accounting. Meanwhile, 23% focus on external, customer-facing functions.

AI agents face several categories of security threat. These include input manipulation and data poisoning, in which agents work with compromised or erroneous data; credential hijacking, leading to unauthorised access and data theft; exposure to falsified or malicious online resources, which can poison an agent's outputs; and agent deviation, where unintended behaviour, whether caused by internal faults or external interference, results in reputational or operational harm.
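
One of these risks, exposure to falsified or malicious online resources, lends itself to a simple automated guardrail. The sketch below is a minimal, hypothetical Python illustration of a check a guardian agent could apply before a worker agent fetches external content; the function name and the domain allow-list are assumptions for illustration, not part of Gartner's guidance.

```python
from urllib.parse import urlparse

# Assumed allow-list of trusted hosts (illustrative values, not from the article).
TRUSTED_DOMAINS = {"docs.python.org", "intranet.example.com"}

def safe_to_fetch(url: str) -> bool:
    """Return True only if the URL's host is on, or a subdomain of, the allow-list."""
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_DOMAINS or any(
        host.endswith("." + domain) for domain in TRUSTED_DOMAINS
    )

print(safe_to_fetch("https://docs.python.org/3/"))     # True: trusted host
print(safe_to_fetch("http://typosquatted-site.biz/"))  # False: blocked by default
```

An allow-list is deliberately conservative: unknown hosts are blocked by default, which matches the enforcement posture of the "protector" role Gartner describes below.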

"The rapid acceleration and increasing agency of AI agents necessitates a shift beyond traditional human oversight," said Litan. "As enterprises move towards complex multi-agent systems that communicate at breakneck speed, humans can't keep up with the potential for errors and malicious activities. This escalating threat landscape underscores the urgent need for guardian agents, which provide automated oversight, control and security for AI applications and agents."

Role of guardian agents

Gartner recommends organisations consider three main usage types for guardian agents when safeguarding AI interactions (a sketch of how the roles might combine follows the list). These are:

  • Reviewers: Responsible for identifying and evaluating AI-generated output and content for both accuracy and appropriate use.
  • Monitors: Charged with observing and tracking the activities of AI agents for follow-up by humans or AI-based systems.
  • Protectors: Capable of modifying or blocking AI actions and permissions through automated means during operations.
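
To make the three roles concrete, here is a deliberately simplified, hypothetical Python sketch of how a guardian agent might compose them into a single check. The data structures, policies, and tool names are illustrative assumptions, not a Gartner reference design.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    """A proposed action from a worker AI agent (illustrative structure)."""
    agent_id: str
    tool: str     # e.g. "query_db", "send_email"
    output: str   # the AI-generated content attached to the action

BANNED_TERMS = {"password", "credit card"}      # assumed content policy
ALLOWED_TOOLS = {"query_db", "summarise_doc"}   # assumed tool allow-list
audit_log: list[AgentAction] = []               # the monitor's activity record

def review(action: AgentAction) -> bool:
    """Reviewer: evaluate AI-generated output for appropriate use (toy rule)."""
    return not any(term in action.output.lower() for term in BANNED_TERMS)

def monitor(action: AgentAction) -> None:
    """Monitor: track agent activity for follow-up by humans or other systems."""
    audit_log.append(action)  # in practice, forward to a SIEM or observability stack

def protect(action: AgentAction) -> bool:
    """Protector: block any action whose tool is not on the allow-list."""
    return action.tool in ALLOWED_TOOLS

def guardian(action: AgentAction) -> bool:
    """Run all three roles; True means the action may proceed."""
    monitor(action)  # always observe, even actions that end up blocked
    return review(action) and protect(action)

# A worker agent proposes an action; the guardian allows or blocks it.
proposed = AgentAction("agent-7", "send_email", output="Here is the password list")
print("allowed" if guardian(proposed) else "blocked")  # prints "blocked"
```

In this toy pipeline the monitor records everything unconditionally, while the reviewer and protector gate the action; real deployments would replace the keyword and allow-list rules with the AI-based and deterministic evaluations Litan describes.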

According to Gartner, guardian agents will handle interactions and anomalies regardless of application type, making this oversight a central element of their integration. Gartner predicts that by 2028, 70% of AI applications will use multi-agent systems.

Gartner provides further detail in its research 'Guardians of the Future: How CIOs Can Leverage Guardian Agents for Trustworthy and Secure AI', and in a related webinar for CIOs on the same topic.
