
HackerOne warns of widening AI security & testing gap

Thu, 12th Mar 2026

HackerOne research has highlighted a widening gap between the rollout of artificial intelligence systems and the security testing that should accompany them, finding that most organisations still do not assess AI deployments systematically.

The study, based on a survey of more than 300 security leaders and analysis of activity on HackerOne's platform, found that 89% of organisations lack comprehensive testing for AI systems. It also reported a sharp rise in AI-linked weaknesses, with valid AI-related vulnerability reports more than tripling since 2024.

AI and machine learning use has expanded quickly. The research found that 94% of organisations operate more AI and machine learning systems than they did a year ago.

Testing coverage, however, has not kept pace. Only two-thirds of respondents said they formally test 61% or more of their AI and machine learning systems. HackerOne characterised the difference between the 94% of organisations expanding their AI deployments and the 66% sustaining that level of testing as a 28-point "AI Security Gap".

Incident frequency

The survey suggests a link between limited testing and exposure to attack. Among security leaders at organisations that fall within the AI Security Gap, 89% reported AI-related attacks or vulnerabilities in the past year. Across all respondents, 84% reported at least one such incident in the past 12 months.

Broader testing coverage was associated with fewer reported incidents. Organisations that test 91% or more of their AI systems were 16% less likely to report an AI-related incident than those with lower coverage, the research found.

HackerOne argued the issue is no longer whether AI systems receive any testing at all, but whether testing keeps pace with how quickly those systems change: frequent updates, integrations, and data connections can alter a system's security posture over time.

"AI systems are dynamic, evolving with every model update, integration, and data connection - and the same is true of modern digital systems overall," said Kara Sprague, CEO of HackerOne.

"As systems become more interconnected and adaptive, risk evolves in real time. Periodic testing assumed stability. Today's reality requires continuous testing so leaders can detect change, identify what's exploitable, and mitigate risk before it materializes," Sprague said.

Cost pressure

The report linked the AI testing gap to higher remediation spending, finding that organisations operating in the gap have 70% higher annual remediation costs than those that test nearly all their AI systems.

The scale of AI deployments also appeared to correlate with attack variety and cost. Organisations that expanded from a small AI footprint of two systems to a larger footprint of eight to 10 systems reported 82% more attack types and 2.4 times higher attack costs, the research found.

HackerOne attributed the increase to deeper integration of AI systems into enterprise environments, citing connections to APIs, tools, and internal data sources as factors that can increase exposure. Risks can also rise disproportionately when testing does not scale alongside deployment, the report said.

"Organisations keep adding AI systems without thinking about the blast radius," said Luke Stephens, a security researcher.

"These aren't sandboxed toys. They're hooked into real data, real APIs, real decision-making. When something goes wrong, it doesn't stay contained. The cost data in this report reflects what I've been seeing in the wild: the longer you wait to test, the more expensive it gets to fix," Stephens said.

Shadow AI

The research also highlighted gaps in visibility over unsanctioned AI use inside organisations. Only 55% of respondents said they fully track "shadow" AI usage, leaving what the report described as a material blind spot that limits oversight of how staff adopt AI tools in daily work.

Unmanaged use can expand the attack surface and create governance and compliance risks. Employees may integrate AI tools into workflows without security teams having a full view of the resulting data flows and system connections, the report said.

It also found security and governance pressure rising as AI systems move into production, with boards and executive teams seeking clearer evidence of oversight amid growing regulatory scrutiny.

The report said the risk profile of AI deployments compounds with each new integration. It framed continuous testing as an operational requirement rather than a periodic exercise, and argued that organisations need to embed security checks into how AI systems are built, deployed, and governed.

HackerOne surveyed security leaders across the United States, Canada, the United Kingdom, Australia, Singapore, and Germany. Respondents worked at organisations with more than USD $250 million in annual revenue and represented multiple industries, according to the report's methodology.

HackerOne expects AI security programmes to receive greater scrutiny from senior management as adoption expands across core business systems and data environments.