SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers

AI-driven cyber threats & defences to reshape security by 2026

Thu, 20th Nov 2025

Cybersecurity experts anticipate major changes in how artificial intelligence will shape the landscape of attacks and defences by 2026, as multi-agent systems proliferate, threat actors shift tactics, and the role of Chief Information Security Officers evolves. These insights were offered by James Wickett, CEO of DryRun Security.

Agent exploitation

Attackers are predicted to move away from traditional prompt injection attacks toward what some describe as agency abuse. Security professionals suggest that as organisations increasingly integrate autonomous agents into their workflows (granting them authority over code repositories, ticketing systems and databases), these agents could inadvertently execute harmful tasks because they lack any understanding of intent.

Incidents are likely to arise from compromised agents performing actions that go beyond the scope of traditional data-leak-focused breaches, such as deleting environments or running up significant compute costs through runaway operations. Attackers could also conduct subtler manipulations that rely on the agent's broad permissions rather than on bypassing its text-input checks. Requests that appear routine to the agent, such as data transfers under the guise of auditing, could result in significant data exfiltration or operational disruption.
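One common countermeasure to this class of abuse is to place a guard layer between the agent and its tools, so that a plausible-sounding justification alone cannot authorise a destructive action. The sketch below is purely illustrative (the `AgentGuard` class and action names are assumptions, not anything described by DryRun Security): it allowlists a small set of actions, denies destructive ones outright, and keeps an audit trail of every request.

```python
# Illustrative sketch: a guard between an agent and its tools. The agent can
# phrase a request however it likes ("routine audit"), but only allowlisted,
# non-destructive actions are permitted. All names here are hypothetical.

DESTRUCTIVE_ACTIONS = {"delete_environment", "bulk_export", "spawn_compute"}

class AgentGuard:
    def __init__(self, allowed_actions):
        self.allowed_actions = set(allowed_actions)
        self.audit_log = []  # (action, stated reason, decision) for review

    def authorise(self, action, reason):
        """Permit only pre-approved, non-destructive actions; log everything."""
        permitted = (action in self.allowed_actions
                     and action not in DESTRUCTIVE_ACTIONS)
        self.audit_log.append((action, reason, permitted))
        return permitted

guard = AgentGuard(allowed_actions={"read_ticket", "comment_on_pr"})
guard.authorise("read_ticket", "triage incoming bug")   # permitted
guard.authorise("bulk_export", "routine audit")         # denied despite the
                                                        # routine-sounding reason
```

The point of the design is that the stated reason never influences the decision; it is recorded for human review but the permission check is purely mechanical.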

Managing hallucinations

The issue of AI hallucinations (erroneous outputs) remains a fundamental challenge. Security leaders expect that developers will shift away from attempts to eliminate hallucinations entirely, focusing instead on managing their frequency and impact. This approach will likely involve architecting AI systems with secondary agents that assess and filter outputs for accuracy and relevance before responses are surfaced to users.

Layered model architectures may become more widespread, introducing more granular quality control and allowing teams to limit the scope or damage of AI errors. The imperative is to create systems where mistakes are predictable and traceable.
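A minimal sketch of such a layered pipeline might look like the following. Everything here is an assumption for illustration: the "primary agent" is a stand-in for a model call, and the reviewer's checks (a cited source, on-topic content) stand in for what would in practice be a second model or a rule engine.

```python
# Illustrative two-layer pipeline: a reviewer step vets the primary model's
# answer before it reaches the user. The functions and checks are
# hypothetical stand-ins for real model calls.

def primary_agent(question):
    # Stand-in for an LLM call; returns an answer plus claimed sources.
    return {"answer": "Port 443 is the default for HTTPS.",
            "sources": ["RFC 2818"]}

def reviewer(response, allowed_topics):
    """Filter layer: reject unsourced or off-topic answers before surfacing."""
    if not response["sources"]:
        return None, "rejected: unsourced claim"
    if not any(topic in response["answer"].lower() for topic in allowed_topics):
        return None, "rejected: out of scope"
    return response["answer"], "accepted"

answer, status = reviewer(primary_agent("What port does HTTPS use?"),
                          allowed_topics=["https", "port", "tls"])
```

Errors still occur in a design like this, but they fail at a known checkpoint, which is what makes them predictable and traceable.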

Multi-agent security

By 2026, multi-agent and agentic AI systems are expected to be used widely across different industries. This greater complexity introduces substantial security challenges, as each agent may hold distinct permissions, toolchains and contextual awareness. Companies could face incidents where agents access sensitive systems or data unexpectedly, driven by unclear or excessively broad permissions.

Mitigation is expected to revolve around tighter restrictions on tool access, improved monitoring, and deeper visibility into agent operations and communication. Referencing best practice guidelines like the OWASP Top 10 for large language model applications has been recommended as organisations develop robust controls for agentic systems.
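In code, "tighter restrictions on tool access" often means a registry that grants each agent only the tools its role requires and logs every invocation. The sketch below is an assumption-laden illustration (the `ToolRegistry` class and the agent and tool names are invented), not a prescribed implementation, but it shows the least-privilege shape such controls tend to take.

```python
# Illustrative per-agent tool scoping: each agent holds only the tools its
# role needs, and every invocation is recorded for later review. All names
# are hypothetical.

class ToolRegistry:
    def __init__(self):
        self.tools = {}        # tool name -> callable
        self.grants = {}       # agent name -> set of granted tool names
        self.invocations = []  # (agent, tool) audit trail, logged pre-check

    def register(self, name, fn):
        self.tools[name] = fn

    def grant(self, agent, *tool_names):
        self.grants.setdefault(agent, set()).update(tool_names)

    def invoke(self, agent, tool_name, *args):
        self.invocations.append((agent, tool_name))
        if tool_name not in self.grants.get(agent, set()):
            raise PermissionError(f"{agent} may not call {tool_name}")
        return self.tools[tool_name](*args)

registry = ToolRegistry()
registry.register("read_ticket", lambda tid: f"ticket {tid}: open")
registry.register("drop_table", lambda name: f"dropped {name}")
registry.grant("triage_agent", "read_ticket")  # least privilege: read only
```

A triage agent can read tickets, but any attempt to call a destructive tool raises an error and still leaves an entry in the audit trail, giving defenders the visibility into agent operations the guidance calls for.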

CISO skillset shift

The security field also anticipates a shift in the Chief Information Security Officer's role. As code production grows rapidly, driven by both human and AI-generated contributions, it is seen as essential that CISOs have an in-depth technical understanding to assess, secure, and guide AI-enabled environments.

The volume and variety of code across enterprises will demand technical acumen from CISOs. Decision-making will increasingly rely on technical awareness of how different software tools, vendor solutions, and in-house machine learning systems interact within the business. Boards will prioritise security leaders with the expertise to judge definitively whether new software can be released safely.

"The board doesn't just need a translator anymore; they need someone who can say, 'Yes, we can ship this safely,' and mean it. The modern CISO has to know the tech, or they'll be replaced by someone who does," said James Wickett, CEO of DryRun Security.

Custom malware rise

AI is projected to enable lower-cost, made-to-order malware attacks tailored to individual targets. Whereas attackers previously required extensive resources to develop new exploits, AI can now generate attack code rapidly and at much lower cost.

Security experts foresee a move from broad, indiscriminate attacks to highly targeted campaigns designed for specific systems or even individual developers. As a result, traditional mass malware models may give way to an era where micro-targeted, cost-effective exploits are common.

Dark web evolution

The ecosystem for underground cybercrime is also expected to see shifts. As it becomes easier to generate novel custom payloads, attention on the dark web is anticipated to move from selling stolen identities to trafficking intellectual property and code. Novel marketplaces may emerge, focusing on toolchains, reconnaissance scripts, and AI-driven exploit kits tailored to the needs of specific industries.

The transformation could see these markets resemble mainstream developer platforms, offering full pipelines for cyber attacks instead of generic malware or ransomware-as-a-service.
