
AI scams 2.0 leave UK small businesses more exposed

Thu, 12th Mar 2026

UK cybersecurity guidance is putting greater emphasis on organisational resilience as geopolitical tensions increase the risk of disruptive attacks. Industry observers also warn that artificial intelligence is changing the day-to-day fraud faced by smaller firms.

Karim Salama, founder of digital marketing and technology agency E-Innovate, said AI tools are making scams more convincing and easier to run at scale. He described a new wave of "AI scams 2.0" that blends familiar social engineering with automated content creation, voice cloning and deepfake media.

Small and medium-sized enterprises are often most exposed. Many hold customer data and process supplier payments, but operate with lean IT teams and rely heavily on cloud software, outsourced services and digital platforms. That combination gives criminals multiple routes into everyday business workflows.

Fraud evolves

Traditional online fraud required manual effort: attackers wrote phishing emails by hand, cloned websites and contacted targets directly. AI changes the economics. Generative tools can produce realistic messages at volume and in consistent styles, while automated systems can run chat-based conversations with little human input.

Voice cloning and deepfake video add another layer of credibility. Criminals can impersonate senior executives, suppliers or customers, and a payment request delivered in a familiar voice can carry more weight than a written email, especially when staff are under time pressure.

"AI has fundamentally changed the scale and realism of online fraud," Salama said.

"Criminals can now generate convincing messages, fake identities and automated campaigns in seconds. For many SMEs, these scams look indistinguishable from legitimate communications at first sight."

Changing channels

Email remains a major route for phishing, but distribution methods are shifting. Salama pointed to industry research suggesting programmatic advertising has overtaken email as a malware delivery channel. The figures cited indicate advertising accounts for more than 60% of observed malware and phishing campaigns, with incidents up 45% year on year.

Programmatic advertising uses automated buying and placement across networks of sites and services. That complexity can make it harder for businesses to see where adverts appear and what code runs alongside them. Attackers can exploit weak points in the supply chain or insert malicious scripts through compromised ad placements, redirecting users or triggering unwanted downloads.
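For firms that control their own web properties, one concrete mitigation against injected scripts is a Content-Security-Policy header restricting which domains may run code on their pages. The sketch below is a minimal illustration using the Flask framework; the ad domain shown is a placeholder, not a reference to any real network.

```python
# Minimal sketch: adding a Content-Security-Policy header in Flask.
# The allowed script source below is a placeholder; a real policy would
# list only the ad and analytics domains the business actually uses.
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_csp(response):
    # Allow scripts only from our own origin and one vetted ad domain;
    # scripts injected through a compromised placement elsewhere are blocked.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; "
        "script-src 'self' https://ads.example-network.com"
    )
    return response

@app.route("/")
def index():
    return "Hello"

if __name__ == "__main__":
    app.run()
```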

Targeting is also becoming more personalised. AI tools can analyse public information about companies and staff, including social profiles, websites and supplier relationships. That intelligence can shape the language of a message, the timing of an approach and the choice of impersonated sender.

"Many business owners assume scams arrive via obvious phishing emails, but attackers are increasingly exploiting ad networks, social media platforms and compromised websites. The sophistication of these campaigns means SMEs must rethink where online risks originate," Salama said.

Common scenarios

Deepfake impersonation is a key risk described by E-Innovate. Criminals can use AI voice cloning or synthetic video to pose as a senior figure and request an urgent transfer, often adding pressure through demands for secrecy or claims that a deal requires immediate action.

AI-generated phishing emails remain a core tactic. Generative tools can replicate the tone and branding of known organisations, and tailor messages to a recipient's role, making them harder to spot through basic awareness training.

Fake invoice fraud continues to evolve as criminals research supply chains and purchasing patterns. Invoices can be made to look plausible, using real supplier names or references to ongoing projects. Another variation changes supplier bank details and requests that future payments go to a new account.

Account takeover attacks can also scale with AI-assisted automation. Criminals use tools to test stolen credentials or exploit weak passwords. If they gain access to business email accounts or marketing platforms, they can send messages from inside an organisation and reach customers and partners with fewer warning signs.
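One widely used defence against this kind of credential testing is to screen passwords against known breach data. The sketch below is a minimal illustration using the public Pwned Passwords range API, which works on a k-anonymity model: only the first five characters of the password's SHA-1 hash are transmitted, so the password itself never leaves the machine.

```python
# Sketch: check whether a password appears in known breach data via the
# Pwned Passwords k-anonymity API. Only the first five hex characters of
# the SHA-1 hash are sent, never the password itself.
import hashlib
import requests

def breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    # Each response line is "HASH_SUFFIX:COUNT" for hashes sharing the prefix.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(breach_count("Password123"))  # non-zero: widely breached
```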

Practical steps

E-Innovate's guidance highlights warning signs that can still appear, even in sophisticated scams. These include unexpected payment requests marked as urgent; supplier bank-detail changes that are not independently verified; and messages with unusual tone or grammar. Requests for sensitive information over messaging apps or social media can also be a red flag. Suspicious online adverts and pop-ups that send users to unfamiliar sites may signal a wider compromise.

Salama said smaller businesses can reduce exposure without large budgets by improving internal controls and staff awareness. He recommends multi-factor authentication for email, payment systems and marketing platforms, alongside regular staff training that reflects newer AI-driven techniques.
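For teams wiring MFA into in-house tools, time-based one-time passwords (TOTP) are a common starting point. The sketch below is a minimal illustration using the open-source pyotp library; the user name and issuer shown are placeholders, and in practice the per-user secret would be stored server-side and provisioned once via QR code.

```python
# Sketch: TOTP enrolment and verification with pyotp (pip install pyotp).
import pyotp

# Enrolment: generate a per-user secret and a provisioning URI that
# authenticator apps such as Google Authenticator can import.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(name="alice@example.co.uk",
                                          issuer_name="ExampleSME")
print("Provisioning URI:", uri)

# Verification: check the six-digit code the user types in.
totp = pyotp.TOTP(secret)
code = totp.now()  # stands in for the code from the user's app
print("Code accepted:", totp.verify(code))
```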

He also urged tighter verification for financial requests, including a secondary check for payments or changes to supplier details. For firms running programmatic advertising or managing their own web properties, he recommends regular reviews of security settings and third-party scripts. Patch management remains essential, as outdated software still provides straightforward entry points for attackers.
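A lightweight starting point for such a review is simply enumerating the external scripts a page references. The sketch below, assuming the requests and beautifulsoup4 packages, lists third-party script hosts found in a page's served HTML; it will not see scripts injected dynamically by ad code, which would need browser-based tooling. The URL shown is a placeholder.

```python
# Sketch: list third-party script hosts referenced in a page's HTML.
# Requires: pip install requests beautifulsoup4 (Python 3.9+).
from urllib.parse import urlparse
import requests
from bs4 import BeautifulSoup

def third_party_script_hosts(url: str) -> set[str]:
    own_host = urlparse(url).netloc
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    hosts = set()
    for tag in soup.find_all("script", src=True):
        host = urlparse(tag["src"]).netloc
        # Relative src values have an empty netloc and are first-party.
        if host and host != own_host:
            hosts.add(host)
    return hosts

if __name__ == "__main__":
    for host in sorted(third_party_script_hosts("https://example.com")):
        print(host)
```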

The broader message is that AI makes it harder to rely on intuition about what looks genuine. "Technology alone cannot solve the problem," Salama said.

"As AI continues to evolve, so too will the tactics used by cybercriminals. Businesses need to understand how these scams work and build a culture of verification from the ground up," he said.