SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers

Exabeam widens AI agent monitoring for Google tools

Thu, 23rd Apr 2026

Exabeam has expanded its Agent Behaviour Analytics product to cover agents built with Google Cloud's Agent Development Kit, and has added an integration framework for Google Agent Gateway.

The update extends Exabeam's monitoring of AI agents across Google's ecosystem, including Gemini Enterprise, custom-built agents and multi-agent workflows.

Security vendors are racing to address a new set of risks as companies deploy autonomous software agents to handle internal tasks, automate decisions and interact with business systems. Those risks are more complex in environments where multiple agents work together, because security teams often have limited visibility into how agents behave, what data they access and how they interact with one another.

Exabeam's Agent Behaviour Analytics product is designed to give security teams a unified view of agent activity, establish normal patterns of behaviour, flag anomalies and correlate activity across multiple agents. The latest expansion brings that analysis to agents built with Google Cloud's development framework, as well as broader workflows managed through Google's agent infrastructure.
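The article does not describe how Exabeam implements this, but the general idea of baselining agent behaviour and flagging deviations can be sketched in a few lines. The agent names, event counts and z-score threshold below are invented for illustration only:

```python
# Illustrative sketch of behavioural baselining for AI agents.
# NOT Exabeam's implementation: agent names, counts and the
# z-score threshold are invented for demonstration.
from statistics import mean, stdev

def build_baseline(event_counts):
    """Learn a per-agent baseline (mean, std dev) from historical hourly event counts."""
    return {
        agent: (mean(counts), stdev(counts))
        for agent, counts in event_counts.items()
    }

def flag_anomalies(baseline, current, z_threshold=3.0):
    """Flag agents whose current activity deviates strongly from their baseline."""
    flagged = []
    for agent, count in current.items():
        mu, sigma = baseline[agent]
        if sigma > 0 and abs(count - mu) / sigma > z_threshold:
            flagged.append(agent)
    return flagged

history = {
    "report-bot": [10, 12, 11, 9, 10, 11],
    "billing-agent": [5, 4, 6, 5, 5, 6],
}
baseline = build_baseline(history)
# billing-agent suddenly issuing 50 events in an hour is far outside its norm
print(flag_anomalies(baseline, {"report-bot": 11, "billing-agent": 50}))
# → ['billing-agent']
```

Real products correlate far richer signals (data accessed, systems touched, timing), but the principle is the same: learn what normal looks like per agent, then surface statistical outliers.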

Google expansion

Support for Google Cloud's Agent Development Kit means organisations building their own AI agents can apply Exabeam's monitoring from development through production. The product is intended to show how agents operate, how they interact with systems and how that behaviour changes over time.

The planned integration with Google Agent Gateway extends that approach to multi-agent setups. In these environments, Exabeam will ingest and analyse agent-to-agent interactions, map workflow relationships and identify unusual patterns across distributed systems.
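One simple way to reason about "unusual patterns across distributed systems" is to treat agent-to-agent calls as edges in a graph and flag call paths never seen during a baseline period. The sketch below is a toy illustration under that assumption, not how Exabeam or Google Agent Gateway works internally; all agent names are hypothetical:

```python
# Illustrative sketch: flagging unseen agent-to-agent call paths in a
# multi-agent workflow. Agent names and the edge-set approach are
# invented for demonstration only.
def known_edges(call_log):
    """Derive the set of observed (caller, callee) pairs from a historical log."""
    return {(rec["src"], rec["dst"]) for rec in call_log}

def unusual_calls(baseline_edges, new_calls):
    """Return agent-to-agent calls that never appeared in the baseline."""
    return [rec for rec in new_calls
            if (rec["src"], rec["dst"]) not in baseline_edges]

history = [
    {"src": "planner", "dst": "retriever"},
    {"src": "retriever", "dst": "summariser"},
]
edges = known_edges(history)
# A summariser agent suddenly calling a payments agent is a new, unexpected path.
print(unusual_calls(edges, [
    {"src": "planner", "dst": "retriever"},
    {"src": "summariser", "dst": "payments"},
]))
# → [{'src': 'summariser', 'dst': 'payments'}]
```

Mapping workflow relationships this way also gives security teams a picture of which agents normally talk to which, so new or rewired paths stand out for review.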

The move reflects a wider shift in cybersecurity, as vendors adapt tools once aimed mainly at human users, devices and applications to a world in which autonomous agents can hold similar access rights and carry out actions without direct human input. In practice, security teams need ways to distinguish normal automated behaviour from suspicious or unintended activity.

"Agentic AI represents a fundamental shift in how applications are built and operated," said Steve Wilson, Chief AI and Product Officer, Exabeam.

"By extending our Agent Behaviour Analytics to include agents developed with Google Cloud's Agent Development Kit, we are enabling security teams to maintain trust and visibility across the full lifecycle of intelligent agents," Wilson said.

Customer view

One customer pointed to the need for more detailed monitoring as AI agents become more common in business operations.

"As we bring more AI agents into the business, we need a clear understanding of how those agents behave, how they interact with systems and data, and where potential risk can emerge," said Eric Santa Cruz, Architect, IT Security, Resorts World Las Vegas.

"What Exabeam is doing in Agent Behaviour Analytics is exactly the kind of innovation we have been looking for. It addresses a rising challenge organisations are facing as AI adoption grows and gives us greater confidence that we can move forward securely while getting more value from AI across our operations," Santa Cruz said.

The product is embedded in Exabeam's New-Scale Security Operations Platform and is being offered at no additional charge. That pricing decision may help position it as a built-in feature rather than a separate add-on, at a time when customers are weighing the costs of governing AI deployments against the expected efficiency gains.

Exabeam has been pushing further into behavioural analysis for both human and non-human actors inside organisations. The company is best known for its work in user and entity behaviour analytics, an area of cybersecurity focused on spotting threats by identifying deviations from normal activity. Its latest pitch applies that model to AI agents, which can now act as digital workers inside enterprise systems.

For customers using Google's AI tools, the expansion suggests that security oversight of agents is becoming part of the infrastructure discussion rather than a separate layer added later. As more businesses experiment with Gemini Enterprise, custom agents and linked agent workflows, vendors are likely to face growing demand for tools that can trace interactions across those systems and surface signs of misuse or malfunction.