Harness unveils AI Security & coding tools for DevSecOps
Harness has launched two security products aimed at risks that arise when organisations build software with generative AI and embed AI models and agents in applications.
The first, AI Security, targets AI-native applications at runtime. The second, Secure AI Coding, checks code produced by AI coding assistants inside the developer workflow.
Harness said the release addresses gaps in visibility and testing as AI use spreads across application stacks and software engineering teams.
Runtime focus
AI Security adds discovery, testing and protection for AI components exposed through application programming interfaces. The suite starts with AI Discovery, now generally available.
AI Discovery inventories AI-related assets and traffic patterns in an environment. It identifies large language model calls, MCP servers, AI agents and use of third-party generative AI services, including OpenAI and Anthropic. It also flags runtime issues such as unauthenticated interfaces calling models, weak encryption and regulated data flowing to external services.
Harness positions AI Security as an extension of API security, arguing that agents and models interact with applications and external services through APIs. It also pointed to AI-specific attacks such as prompt injection, model manipulation and data poisoning, alongside more established API weaknesses.
Two additional AI Security components are in beta. AI Testing runs probes against AI-powered APIs, models and agents to check for prompt injection, jailbreaks and data leakage. Harness said the tests integrate into CI/CD pipelines so checks run as part of the release process.
AI Firewall adds runtime controls for generative AI threats, which Harness tied to the OWASP Top 10 for LLM Applications. It inspects inputs and outputs, applying filters to block prompt injection attempts and restrict data exfiltration. It also enforces behavioural guardrails for models and agents, according to the company.
Developer workflow
Secure AI Coding extends Harness SAST with scanning that runs as code is generated in the integrated development environment. It connects with AI coding tools including Cursor, Windsurf and Claude Code, according to Harness.
The feature flags issues inline as code appears, rather than waiting for a later review stage. It also sends flagged code back to the AI agent for remediation, without requiring developers to switch tools or run a manual scan.
"Security shouldn't be an afterthought when using AI dev tools. Our collaboration with Harness kicks off vulnerability detection directly in the developer workflow, so all generated code is screened from the start," said Jeff Wang, CEO, Windsurf.
Harness said the approach goes beyond basic linting by analysing how data moves through an application. It uses a code property graph to trace data flows across the broader codebase, rather than checking only the snippet produced by an assistant. That context, the company said, helps surface issues such as injection flaws and insecure data handling.
Market context
Survey results cited by Harness point to widespread adoption of AI in both application functionality and coding workflows. In its State of AI-Native Application Security report, Harness said 61% of new applications are now AI-powered. It also said security and engineering leaders struggle to identify which models and agents exist in their environments and how those components behave in production.
In its State of AI in Software Engineering report, Harness said 63% of organisations use AI coding assistants. It argued that increased code volume and release frequency put pressure on application security teams. It also said AI-generated code often arrives in larger commits with less review, increasing the likelihood that vulnerabilities enter codebases.
Katie Norton, Research Manager, DevSecOps, IDC, said the acceptance rate of AI-generated code creates governance concerns. "As AI-assisted development becomes standard practice, the security implications of AI-generated code are becoming a material blind spot for enterprises. IDC research indicates developers accept nearly 40% of AI-generated code without revision, which can allow insecure patterns to propagate as organizations increase code output faster than they expand validation and governance, widening the gap between development velocity and application risk."
Platform integration
Harness sells a DevSecOps platform that spans application security testing and runtime protection. It includes SAST and software composition analysis in development workflows, software supply chain security from build to deployment, and tools for security posture reporting, policy and governance. It also provides web application and API protection for production environments.
Harness said it aims to feed runtime findings back to developers, rather than leaving issues in a security backlog. It positioned AI Security and Secure AI Coding as additions that extend this model to AI-generated code and AI services used in production.
Internal use
Harness said it has deployed AI Security internally to map and monitor its own AI ecosystem, including calls to external providers such as OpenAI, Vertex AI and Anthropic.
It said it now tracks 111 AI assets and monitors more than 4.76 million monthly API calls. It also runs 2,500 AI testing scans a week and has remediated 92% of the issues found. Harness also reported identifying and blocking 1,140 unique threat actors attempting more than 14,900 attacks against its AI infrastructure.
AI Discovery is available now, while AI Testing and AI Firewall remain in beta. Secure AI Coding is available as part of Harness SAST, alongside integrations with the AI coding assistants cited by the company.