UK firms invest in cyber, overlook growing AI risks
Many UK organisations are investing heavily in cyber security but failing to address the specific risks created by artificial intelligence, according to new research from ANS.
The Manchester-based digital services company surveyed more than 2,000 senior IT decision-makers and found widespread confidence in AI security that is not matched by practice on the ground.
ANS reported that 85% of organisations believe they have invested enough in security for AI adoption. Only 42% said they embed security proactively into AI projects. Just 37% said they prioritise security during AI implementation.
The findings point to a reactive approach in which companies deploy AI first and respond to emerging threats later, rather than building defences into systems from the outset.
Spending mismatch
The research indicates that organisations are continuing to spend on cyber security overall. More than half of respondents (53%) allocate between 11% and 30% of their total IT budgets to security.
ANS said this investment often focuses on traditional infrastructure and applications. It found that organisations rarely ringfence budgets for risks such as model manipulation, prompt injection, or data leakage from AI tools.
Only 39% of IT decision-makers plan to invest in security for AI model and algorithm training over the next three years. This includes controls that detect model poisoning, guard against tampering with training data, and monitor for abnormal outputs.
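One of the controls described above, monitoring for abnormal outputs, can in principle be as simple as a baseline-and-deviation check on model scores. The sketch below is a hypothetical illustration only, not a method from the ANS research; the class name, window size and threshold are illustrative assumptions.

```python
import statistics

class OutputMonitor:
    """Naive abnormal-output detector: flags model scores that drift
    far from the running baseline. Parameters are illustrative."""

    def __init__(self, window=100, z_threshold=3.0):
        self.window = window          # how many recent scores to keep
        self.z_threshold = z_threshold  # deviation (in std devs) that counts as abnormal
        self.scores = []

    def check(self, score: float) -> bool:
        """Return True if `score` looks anomalous versus recent history."""
        anomalous = False
        if len(self.scores) >= 10:  # need a minimal baseline first
            mean = statistics.fmean(self.scores)
            stdev = statistics.pstdev(self.scores)
            if stdev > 0 and abs(score - mean) / stdev > self.z_threshold:
                anomalous = True
        self.scores.append(score)
        self.scores = self.scores[-self.window:]
        return anomalous

monitor = OutputMonitor()
for s in [0.50, 0.52, 0.49, 0.51, 0.50, 0.48, 0.53, 0.50, 0.51, 0.49]:
    monitor.check(s)          # builds the baseline
print(monitor.check(0.99))    # a sudden outlier is flagged: True
```

Real deployments would use richer signals than a single score, but the point the survey raises stands either way: controls like this have to be budgeted and built, not assumed.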
The data suggests that many security strategies still treat AI as an add-on to existing systems rather than as a distinct attack surface.
Human factor
The study highlights a similar gap in investment in people. Just 34% of respondents said they intend to spend on employee training for secure and responsible AI use.
Organisations increasingly deploy generative AI tools for tasks such as content creation, code generation and customer support. Staff often experiment with these tools with limited formal guidance on what data they can share, which prompts are safe, and how to check outputs.
Security specialists have warned that employees remain a frequent entry point for attackers: staff can expose sensitive information through AI prompts, fall for AI-generated phishing, or rely on manipulated outputs in decision-making.
ANS said the survey results suggest that many organisations still treat user training as optional in AI programmes, despite long-standing emphasis on staff awareness in broader cyber security.
False sense of security
The company said the research reveals a disconnect between perception and reality on AI security readiness across UK organisations.
"AI is transforming how organisations operate, but it also introduces entirely new attack surfaces and vulnerabilities. Many businesses assume their existing cybersecurity measures automatically extend to AI, but that simply isn't the case.
"This overconfidence is creating a false sense of security. Without targeted investment in areas like model security, governance frameworks and employee training, organisations risk leaving their AI systems exposed to misuse, manipulation and emerging threats.
"Security can't be treated as a bolt-on or a compliance exercise. It has to be the foundation of responsible AI adoption. Organisations that take a proactive, risk-led approach will be far better positioned to unlock the value of AI safely and confidently," said Kyle Hill, Chief Technology Officer, ANS.
ANS conducted the research with Censuswide. The survey covered senior IT leaders who already use AI in their roles, at micro, small, medium and large organisations.
Respondents came from organisations that allocate a broad range of budget levels to security, which indicates that the gaps cut across size and sector rather than only affecting smaller firms with fewer resources.
ANS said it expects the proportion of IT budgets dedicated to AI-specific security measures and staff training to become a point of scrutiny as organisations scale AI use in core business operations.