SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers

AI-driven cyberattacks remain rare as models show key flaws

Fri, 11th Jul 2025

New research from Forescout Vedere Labs examines the current capabilities and limitations of generative artificial intelligence in cyberattacks, finding that AI-driven hacking remains a future concern rather than an immediate danger.

Research overview

Forescout Vedere Labs conducted over 50 simulations to evaluate how advanced large language models (LLMs) perform on two key cybersecurity tasks: vulnerability research (VR) and exploit development (ED). The study assessed a range of models, including commercially available models, open-source models, and models found in underground communities.

The findings highlight significant technical shortcomings across the board. In the simulations, 48% of the assessed models failed the first vulnerability research task, while 55% could not complete the second. Failure rates were even higher for exploit development: 66% failed the first ED task, and 93% failed the second.

Another critical observation was the instability and inconsistency of the models. During the testing process, LLMs frequently produced varying results even when presented with the same prompt, often timing out or producing content that was not usable for security purposes.

Importantly, the research indicated that no artificial intelligence model tested was able to complete all required phases of a typical exploit pipeline. This means that at present, the technology lacks the reliability and coordination necessary to carry out a complete, end-to-end cyberattack autonomously.

Model performance and implications

According to the research, commercially available LLMs performed better than both open-source and underground models. However, even among commercial models, only three were able to produce a working exploit in the controlled scenarios. Open-source tools were the least successful, while underground models sometimes outperformed open-source models but had significant issues with output quality and usability.

"In this current wave of AI popularisation, threat actors, AI-assisted or not, are likely to continue relying on familiar tactics, techniques, and procedures (TTPs). An AI-generated exploit is still just an exploit; it can be detected, blocked, or mitigated by patching. This means that the fundamentals of cybersecurity remain unchanged. Core principles such as cyber hygiene, defence-in-depth, least privilege, network segmentation, and Zero Trust are still critical. Investments in risk and exposure management, network security, and threat detection and response remain not only valid, but more urgent than ever. If AI lowers the barrier to launching attacks, we may see them become more frequent, but not necessarily more sophisticated. Rather than reinventing defensive strategies, organisations should focus on enforcing them more dynamically and effectively across all environments."

The quoted comments were made by Michele Campobasso, Senior Security Researcher for EMEA at Forescout, reflecting on the study findings and the broader implications for organisations.

"Vibe hacking" threat

The study acknowledges increasing discussion around "vibe hacking", a term describing AI-driven autonomous hacking. However, the current analysis concludes that the risk posed by such attacks remains largely theoretical. The inability of current models to independently and reliably execute sophisticated attack chains means that the threat is not imminent.

Despite media attention and speculation, the evidence suggests that existing AI models are predominantly supplementary tools rather than primary enablers of major breaches. This indicates that threat actors continue to rely on established methods and techniques rather than shifting to AI-driven attacks.

Defensive priorities unchanged

The research highlights that, regardless of AI advancements, the essential strategies for cybersecurity defence remain valid. Focus is placed on maintaining core principles such as cyber hygiene, network segmentation, Zero Trust, and dynamic enforcement of all existing protocols across an organisation's digital environment.

With generative AI making headlines for its potential to change the threat landscape, the findings reinforce the message that while attack frequency could increase, attack sophistication is not necessarily expected to rise. As such, organisations are encouraged to prioritise consistent application of existing defensive measures and to invest in network security and risk management in line with evolving but not fundamentally new threats.
