Chainguard & Cursor tackle AI code supply chain risks
Chainguard has partnered with Cursor to bring verified open source dependencies into AI-assisted coding workflows. The deal targets software teams using AI agents to select and import code components.
The integration is intended to address a growing security issue in agentic development, where software tools pull packages from public registries such as PyPI, npm and Maven Central with limited human review. Under the arrangement, Cursor users can source Chainguard images and libraries within existing workflows, while the integration handles configuration and credential management.
Security has become more urgent as AI coding tools move from code suggestions to automated dependency selection and project setup. Public package repositories remain central to modern software development, but they have also been a recurring route for supply chain attacks, including compromised open source projects and malicious packages designed to steal credentials or alter software builds.
The partnership gives Cursor customers access to more than 2,300 container images that are continuously rebuilt with upstream patches and released with zero known CVEs, according to Chainguard. Users can also access millions of Python, JavaScript and Java library versions built from publicly verifiable source code, along with signed attestations and reproducible build pipelines intended to show where software came from and how it was built.
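As an illustration only (this sketch is not taken from the announcement), a team adopting hardened base images typically points an existing Dockerfile at Chainguard's public registry. The image name below is Chainguard's published Python image; the application file name is a placeholder:

```dockerfile
# Hypothetical application Dockerfile using a Chainguard base image.
# cgr.dev/chainguard/python is Chainguard's public Python image,
# rebuilt continuously with upstream patches; app.py is a placeholder.
FROM cgr.dev/chainguard/python:latest

WORKDIR /app
COPY app.py .

# Chainguard runtime images are minimal (no shell, non-root by default),
# so the container runs the interpreter directly.
ENTRYPOINT ["python", "/app/app.py"]
```

The signed attestations and provenance the article mentions are published alongside such images, so build systems can verify where an image came from before admitting it into a pipeline.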
The move reflects a broader shift in software security as companies adapt controls built for human developers to systems that can act autonomously. In traditional development, engineers may manually inspect or approve dependencies before software moves into production. In an agentic model, that work can happen at machine speed and at much greater scale.
Chainguard pointed to developer uptake of AI agents as evidence of that shift, saying nearly 84% of developers already use them in software development. As those tools take on more tasks, organisations face a greater challenge in checking whether third-party code is genuine, maintained and free from known vulnerabilities at the point of use.
Trust Layer
The partnership centres on inserting a verification process into that flow rather than asking developers to change tools. Cursor users can enable the integration with natural language instructions, after which the system configures project files, manages secrets and pulls dependencies from Chainguard.
The approach is meant to reduce manual security work for engineering teams. It also reflects how AI coding providers are trying to make security checks part of the default environment rather than a separate review step after code generation.
"AI agents are making dependency decisions at a scale and speed no security team can manually review. As organisations adopt agentic development, the biggest blocker is no longer how fast code can be generated-it's whether that code can be trusted," said Dan Lorenc, Chief Executive Officer and Co-founder of Chainguard.
"Together, Chainguard and Cursor will help ensure that every dependency within AI-generated code comes from a verifiable, secure, and continuously maintained source, so teams can move quickly without introducing unnecessary risk into production. Engineering teams now have a path to move at AI speed without sacrificing security," Lorenc said.
Cursor presented the deal as part of a broader effort to make AI coding systems safer for large organisations already deploying them across development teams. Recent attacks on public registries and software tools have shown how attackers can exploit trusted channels used to distribute open source code, it said.
"Partnering with Chainguard is another step in the direction of Cursor enabling secure agentic coding at scale," said Brian McCarthy, President of Global Revenue and Field Operations at Cursor.
"Recent supply chain attacks showcased how bad actors are working to manipulate the public tools and registries we've historically relied on to consume open source. With agents writing the majority of code at top businesses around the world, new tools to help ensure the code is trusted, and the ability to review and monitor at speed and scale, creates a safer paradigm," McCarthy said.
Market Pressure
The agreement comes as software suppliers and security firms compete to define how AI-generated code should be governed inside large companies. Businesses want the productivity gains from coding assistants, but they also face pressure to show that software entering production can be traced to a known source and checked for tampering.
For Chainguard, the partnership extends its effort to place verified images and libraries closer to the point where developers and automated systems choose dependencies. For Cursor, it offers a way to address a concern that could slow wider adoption of AI-driven development in regulated or security-conscious industries.
Both companies are targeting customers that already operate at large scale. Chainguard counts major technology and enterprise groups among its customers, while Cursor says it serves more than 50,000 teams globally and a majority of the Fortune 500.
The pressure point for both is the same: if AI agents are increasingly responsible for assembling software, the trustworthiness of the components they select becomes a central business risk rather than a back-office technical issue.