CIS launches AI security guides for models, agents & MCP
The Center for Internet Security (CIS), Astrix and Cequence have released three security guides covering large language models, autonomous AI agents and Model Context Protocol environments, extending the CIS Critical Security Controls to AI systems.
Written jointly by specialists from the three organisations, the documents are aimed at companies managing the security issues emerging as AI tools move into day-to-day operations. They cover distinct parts of the AI stack, from models and prompts to autonomous software agents and the protocol layer used to connect tools and services.
The first guide focuses on large language models, covering prompt-related risks, context handling and exposure of sensitive information. The second addresses autonomous and semi-autonomous agents, with guidance on tool execution, governed autonomy and access to company systems. The third focuses on the Model Context Protocol, or MCP, with recommendations on tool access, non-human identities and auditability across the protocol layer.
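To make the first guide's concerns concrete, the sketch below shows one common control of this kind: filtering model output for sensitive patterns before it reaches a user or downstream system. It is a minimal Python illustration, not an excerpt from the guides; the pattern list and function names are assumptions for the example.

```python
import re

# Hypothetical detection rules for illustration; a real deployment would use
# the organisation's own sensitive-data patterns, not this short list.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(sk|pk)-[A-Za-z0-9]{20,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(model_output: str) -> str:
    """Replace matches of known sensitive patterns before output leaves the model layer."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        model_output = pattern.sub(f"[REDACTED:{label}]", model_output)
    return model_output

if __name__ == "__main__":
    print(redact("Contact ops@example.com with key sk-abc123def456ghi789jkl0"))
```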
The release comes as companies deploy AI in everything from software assistants to systems that can perform tasks with limited human intervention. Security teams are increasingly examining risks that differ from those in conventional software environments, including data leakage, misuse of credentials and software agents taking actions beyond their intended limits.
Layered Risks
The material is intended to give security and IT teams a way to apply the existing CIS controls framework to AI deployments without creating a separate security model. It maps established controls to technologies that differ from traditional applications because they can generate outputs dynamically, call tools and interact with external data sources.
Astrix contributed expertise on AI agents, MCP servers and non-human identities, including API keys, service accounts and OAuth tokens that AI systems use to connect to enterprise resources. Cequence focused on application, data and API security, particularly visibility into what AI systems can access and the controls placed around those interactions.
That combination reflects a growing industry concern that AI security is not confined to a single model or application. Risks can span several layers, from how a model handles inputs to the permissions granted to an agent and the interfaces through which it accesses tools or data.
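A minimal sketch of the identity-layer concern, assuming hypothetical names rather than anything prescribed in the guides: instead of handing an agent a static API key, an issuing service mints short-lived, narrowly scoped credentials that expire on their own.

```python
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Illustrative only: class and field names are assumptions, not taken from the guides.
@dataclass
class AgentCredential:
    agent_id: str            # which non-human identity holds this credential
    scopes: frozenset[str]   # least-privilege set of permitted operations
    expires_at: datetime
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def allows(self, scope: str) -> bool:
        """Valid only if unexpired and the requested scope was granted."""
        return datetime.now(timezone.utc) < self.expires_at and scope in self.scopes

def issue_credential(agent_id: str, scopes: set[str], ttl_minutes: int = 15) -> AgentCredential:
    """Mint a short-lived, narrowly scoped token rather than a static API key."""
    return AgentCredential(
        agent_id=agent_id,
        scopes=frozenset(scopes),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

cred = issue_credential("invoice-agent", {"crm:read"})
assert cred.allows("crm:read") and not cred.allows("crm:write")
```

The short time-to-live limits the damage window if a token leaks, which is one reason non-human credentials are treated differently from long-lived human logins.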
"These guides reflect a shared effort to bring clarity to an area where organisations are seeking direction," said Curtis Dukes, Executive Vice President and General Manager of Security Best Practises at CIS. "By combining our collective expertise, we translated the CIS Controls into concrete steps that help teams secure AI systems across the model, agent, and protocol layers."
Operational Focus
The documents are meant to provide prioritised recommendations across development, deployment and operations. Rather than introducing a new framework, they adapt the existing CIS Controls to AI-driven architectures.
That may appeal to companies already using the CIS framework as a baseline for broader cyber security programmes. For those organisations, the guides offer a way to extend established governance and control processes into newer AI environments without rebuilding security policy from scratch.
Astrix framed the issue around the rise of software identities and agents that can operate across internal systems with a degree of independence. These systems often rely on non-human credentials, which can be difficult to track through identity and access management processes built mainly around human employees.
"AI agents introduce a new operational surface that organisations must understand before they scale," said Jonathan Sander, Field CTO of Astrix Security. "Collaborating with CIS and Cequence allowed us to build guidance that addresses identity, authorisation, and execution risks in a way that's both actionable and aligned with how enterprises work today."
Cequence, whose business focuses on bot defence and API security, pointed to a related issue: AI systems increasingly interact directly with applications and APIs that expose sensitive business functions and data. As a result, security teams need to understand not only whether an AI tool is in use, but also what it can do once connected.
The guides therefore cover controls across inputs, context, reasoning, tool execution and protocol-level access. In practice, that includes advice on limiting access, tracking actions, handling credentials and placing checks on how AI systems call external tools or enterprise services.
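As a rough illustration of what a tool-execution control might look like in practice, the sketch below gates every agent tool call behind an allowlist, validates arguments against a per-tool rule, and writes an audit record. The policy table, tool names and function signatures are hypothetical, not drawn from the guides.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# Hypothetical policy: which tools an agent may call, each paired with an argument validator.
ALLOWED_TOOLS = {
    "search_orders": lambda args: isinstance(args.get("customer_id"), str),
    "send_email": lambda args: args.get("to", "").endswith("@example.com"),
}

def execute_tool(agent_id: str, tool: str, args: dict):
    """Gate every tool call: allowlist check, argument validation, audit record."""
    allowed = tool in ALLOWED_TOOLS and ALLOWED_TOOLS[tool](args)
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id, "tool": tool, "args": args, "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"{agent_id} may not call {tool} with {args}")
    # ... dispatch to the real tool implementation here ...
    return {"status": "ok"}

execute_tool("support-agent", "search_orders", {"customer_id": "C-1042"})
```

Note that the audit record is written before the allow-or-deny decision takes effect, so denied attempts remain visible to security teams rather than silently disappearing.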
"As AI systems interact more directly with applications and APIs, the security implications become increasingly critical," said Shreyans Mehta, CTO and Co-Founder of Cequence Security. "This partnership enabled us to create guidelines that codify what we've learned about deploying agentic AI at the world's largest enterprises without sacrificing security, governance, or scale, giving organisations a framework for enabling agentic AI safely."