SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers

AI agents drive surge in cyber threats & extortion

Fri, 13th Mar 2026

Businesses rolling out AI agents and automated workflows are creating new cyber security risks by increasing the number of non-human identities inside corporate systems, according to a report from S-RM and FGS Global.

The report argues that organisations adopting AI quickly can introduce privileged automated accounts and integrations that increase the potential impact of an intrusion. It also highlights a parallel shift among attackers, with criminals using AI to craft more tailored extortion messages and communications during ransomware incidents.

Non-human access

AI agents, automated workflows and API-based integrations often use machine-to-machine credentials rather than employee logins. These identities can hold broad permissions across applications and data stores, and because automated systems move across many services and process large volumes of transactions without direct human oversight, defenders have less visibility into their activity.

That can complicate incident response. Security teams may struggle to establish which systems an automated identity accessed, what data it handled and what actions it took. The report also warns that compromised automation may behave unpredictably, creating secondary impacts beyond the initial intrusion.

It says businesses need updated monitoring for non-human identities, plus containment and investigation frameworks that account for compromised AI systems. The report identifies least-privilege access, continuous monitoring and segmentation from sensitive systems as key controls when AI agents are embedded into workflows.
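The controls the report names, an explicit allow-list and continuous logging for machine identities, can be illustrated with a minimal sketch. This is a hypothetical example, not code from the report: the `AgentIdentity` class, the `invoice-bot` name and the action names are all invented for illustration.

```python
# Hypothetical sketch: a least-privilege, auditable service identity for an AI agent.
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    name: str
    allowed_actions: set  # explicit allow-list: anything not listed is denied
    audit_log: list = field(default_factory=list)

    def request(self, action: str, resource: str) -> bool:
        permitted = (action, resource) in self.allowed_actions
        # Every decision is recorded, so responders can later reconstruct
        # which systems the identity touched and what it attempted.
        self.audit_log.append((action, resource, permitted))
        return permitted


agent = AgentIdentity(
    name="invoice-bot",
    allowed_actions={("read", "invoices")},  # scoped to one task only
)
print(agent.request("read", "invoices"))  # allowed
print(agent.request("read", "payroll"))   # denied, and the attempt is logged
```

Real deployments would enforce this in an identity provider or gateway rather than in application code, but the principle is the same: default-deny permissions plus a complete activity trail for every non-human identity.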

Attacker innovation

The report describes how cybercriminals are using AI to identify and exploit information that would cause the most disruption or reputational damage if stolen or made unavailable. Criminals are increasingly targeting data that increases pressure during extortion, including customer information, intellectual property, evidence of regulatory issues and material that could add legal exposure.

AI tools can also speed up the review and categorisation of stolen data, sharpening extortion demands and increasing the likelihood of specific threats based on what criminals find inside a network.

Ransomware groups already rely heavily on double and triple extortion models, where encryption disrupts operations and data theft adds leverage. The report suggests AI is intensifying this pattern by making it easier to locate sensitive material quickly and present it in ways designed to heighten pressure on executives and boards.

Faster ransomware

One prediction for 2026 is that established ransomware groups will continue to dominate attack volumes and public reporting, while smaller newcomers appear regularly. The report lists Akira, Qilin, Scattered Spider and ShinyHunters among the groups expected to remain prominent.

It also expects faster attack cycles, as ransomware operators improve automation, organisation and execution, reducing the time needed to compromise and extort targets.

The report warns of an "increasing emergence of the speed paradox". As attacks accelerate, organisations have less time to verify facts and respond before customers, partners and regulators demand information. Businesses may have to choose between communicating quickly with incomplete details or waiting for clarity and risking a perception of paralysis.

Reputation pressure

Beyond the technical aspects, the report argues that ransomware response now has a stronger reputation and stakeholder component. It describes threat actors as more professional in their engagement with victim organisations and more aggressive in their communications.

According to the report, AI has made written exchanges more polished and verbal communications more realistic. It also says criminals are increasingly willing to brief the media as a pressure tactic and to challenge what they view as misleading public statements by victims.

The growing number of smaller groups and rebrands adds further unpredictability, affecting how companies coordinate communications during an incident and assess attacker credibility and likely actions.

Jamie Smith, Global Managing Director, Cyber Security at S-RM, said the pace of change was overtaking standard defensive models.

"We are moving into uncharted territory where the speed and sophistication of cyber attacks are outmanoeuvring traditional defences. What once took weeks now takes days, and what took days now takes hours. Attackers are no longer just encrypting systems; they are using AI to find the most sensitive information that could cause maximum damage to an organisation and using this as leverage. The result is more targeted extortion that goes beyond generic threats of data publication. Threats are becoming specific and more personalised, designed to maximise the victim's fear and willingness to pay.

"As more companies embed AI agents in their workflows, the risk rises exponentially. AI agents should be treated as untrusted identities, with least-privilege access to systems, continuous monitoring and explicit segmentation from sensitive systems, or AI adoption risks creating privileged, opaque intermediaries that threat actors can manipulate for maximum harm."

"Meanwhile, while we expect some disruption to the operations of certain well-established threat actor groups in 2026, it is important to not get complacent. The knowledge, relationships, and capabilities that made a group successful don't disappear when a brand is disrupted. This resilience means that while individual group names may come and go, the overall threat level will remain relatively constant, with experienced operators continuing their activities under new identities," Smith said.

Jenny Davey, Global Co-Head of FGS Global's Crisis & Issues Management Practice, said boards were weighing operational gains from AI deployments against an expanding threat landscape that also included cyber-adjacent risks.

"Ransomware incidents are highly feared by Boards and leadership teams, and for good reason. As recent high-profile attacks have shown, they can have crippling consequences on a business's operations, financial situation and reputation - and the knock-on effects can be significant and far-reaching."

"As Boards consider the implementation of AI agents and automated workflows across their business, they must be mindful that it can be a double-edged sword: while AI can drive efficiency and performance across the business, it can also open up new attack vectors for cybercriminals to exploit and therefore present new reputational risks. Boards must also remain mindful of how AI is enabling cybercriminals to be more sophisticated in communications and engagement with victim organisations, and how it is driving and sharpening threats that are cyber-adjacent, such as deepfakes, synthetic media and misinformation campaigns. These can be particularly reputationally damaging if not handled swiftly and with care."

"In today's complex and rapidly evolving environment, organisations that treat security posture, operational resilience and stakeholder engagement as one, with a holistic, agile and tested approach, will fare better and maintain trust when they're hit with the inevitable," Davey said.

The report also notes that the proportion of businesses paying ransoms has risen for the first time in two years. It points to industrials and manufacturing companies as paying more frequently, reflecting the operational disruption ransomware can cause in production environments.