F5 and Forcepoint partner to secure enterprise AI systems
F5 and Forcepoint have partnered to secure enterprise AI systems across the AI lifecycle, linking data security with runtime protection for AI applications.
The partnership is designed to address gaps that appear when organisations deploy AI tools faster than their security architecture can adapt. It combines Forcepoint's data discovery and classification tools with F5's runtime controls for applications, APIs, models and agents.
Businesses are increasingly rolling out copilots, assistants and automated workflows, but many still struggle to identify where sensitive data sits, how it moves through AI systems and what risks emerge once services are in production. Security controls for data governance, applications and live operations often sit in separate teams or products, leaving policy and actual system behaviour out of step.
Under the arrangement, Forcepoint's data security posture management technology is used to discover, classify and prioritise sensitive and business-critical data across cloud, software-as-a-service, endpoint and enterprise environments. That information is then paired with F5 tools designed to test, monitor and enforce controls around AI systems as they run.
The combined approach is intended to help security teams identify data vulnerabilities in real time, rank AI use cases by risk, apply controls to AI interactions and monitor systems for misuse or unusual behaviour. It also includes continuous telemetry and policy validation to check whether systems remain aligned with internal governance requirements.
Closing Gaps
The agreement reflects a broader shift in corporate security spending towards AI-specific oversight, as companies look for ways to extend existing controls rather than build entirely separate frameworks. For many large organisations, the challenge is no longer whether to deploy AI, but how to do so without exposing confidential data or creating weak points in production systems.
That shift has pushed suppliers in adjacent parts of the security market to work more closely together. Data security providers have focused on identifying and classifying sensitive information, while application and network security companies have concentrated on the systems that process requests and deliver responses. AI adoption has increased pressure to connect those layers more directly.
Forcepoint's role in the partnership centres on data visibility. According to the company, organisations need to decide which information should be available to AI systems and which use cases require stricter oversight before models and agents are deployed more widely.
F5's contribution focuses on live operational controls. Its tools are intended to enforce policy across APIs, gateways, applications and AI agents, while also testing models for weaknesses and monitoring for issues such as prompt abuse or attempts to extract sensitive data.
Executive View
John Maddison, Chief Marketing Officer at F5, said the pace of AI adoption is challenging existing security programmes.
"Enterprises are moving AI initiatives from experimentation to production faster than most security programs can adapt," Maddison said. "By combining Forcepoint's deep data intelligence and contextual awareness with F5's advanced application security and runtime protections, organisations eliminate operational security gaps with unmatched confidence and control in their AI operations. As AI's threat surface continues to expand, the combined power of DSPM technologies with F5's AI Red Team and AI Guardrails equips enterprises with proactive tools to securely scale and govern AI at every stage of its lifecycle."
Forcepoint framed the issue in terms of how AI changes the nature of data risk, particularly when static policy models are applied to systems that adapt and interact in real time.
"AI has fundamentally redefined data security, exposing static policies for what they are: inadequate," said Naveen Palavalli, Chief Product and Marketing Officer, Forcepoint. "F5 and Forcepoint are establishing a new standard of continuous, adaptive protection that follows data from the moment of creation through every stage of its lifecycle, including the runtime layer where AI systems operate, evolve, and expand risk vectors. The threats AI brings require a new category for proactive data and AI risk mitigation, and our partnership is delivering on this today."
Market Pressure
The deal comes as security teams face growing pressure to show that AI tools can be governed consistently from the point data is gathered through to the moment models are queried in production. In practice, that means understanding not just what data a model was trained on or connected to, but also what users and automated agents can ask it to do once deployed.
For companies already running hybrid technology estates, that challenge is harder because data is spread across multiple cloud services, software platforms and employee devices. The partnership is positioned as a way to work across those environments without requiring organisations to rebuild their existing security architecture from scratch.
Both companies are targeting enterprises that want to adopt AI while retaining tighter oversight of sensitive information, application behaviour and operational risk. The partnership links data classification with testing and enforcement at runtime in response to a market where AI security is increasingly treated as an operational issue rather than a separate experimental function.