UK firms race ahead on AI, but controls lag behind
Thu, 14th May 2026
Enterprise AI company Writer has published research showing that many large organisations are adopting AI faster than they are putting controls in place. The survey points to governance gaps, data security concerns and weak oversight of autonomous AI systems.
The findings, based on a survey of 2,400 executives and employees at large enterprises, suggest that many of the UK's biggest businesses could be exposed to risk as staff turn to unapproved AI tools and companies struggle to supervise newer autonomous systems.
Among the most striking findings, two-thirds of C-suite leaders said they believed their organisation had already suffered a data leak or security breach caused by an employee using an unapproved AI tool. Only a third of executives were certain no such breach had taken place.
Employee responses pointed to widespread use of public or prohibited tools. One in three said they had entered proprietary, confidential or sensitive company information into a public AI tool, while 16% had used AI products explicitly banned by their employer.
Those behaviours appear to be linked to frustration with approved systems and pressure to deliver work quickly. The most common explanation from employees who used banned tools was that they would use whatever was needed to get their work done; others said the approved tools were too poor to use or that the bans were not enforced.
Executives also acknowledged limited oversight. More than a third said they did not have full visibility or control over which AI tools employees were actually using inside their organisations.
Reporting fears
The research also highlighted tension between staff and management over harmful AI outputs. More than a quarter of employees said they had seen an AI tool at work produce a result that was dangerously wrong, unethical or biased.
Yet three in ten said they did not feel safe reporting dangerous or unethical AI behaviour to their employer because they feared retaliation. That contrasts sharply with senior leaders' views: 90% of executives believed employees were safe to speak up.
The gap suggests companies face not only technical and compliance risks, but also cultural problems as AI tools become more embedded in day-to-day operations. Staff may be reluctant to challenge systems or report problems if they think doing so could be seen as resistance to adoption.
Agent oversight
Writer's survey placed particular emphasis on autonomous AI agents, which are starting to move from pilot projects into regular business use. Here too, the results showed a lack of confidence in internal controls.
More than a third of executives said they were not confident they could shut down an autonomous AI agent if it began causing financial or reputational harm. A similar share said their organisation still lacked a formal, documented plan for supervising AI agents.
Leaders identified the main governance concerns around these systems as security and data protection, employee training, transparency over how agents operate, and explainability. Only a quarter ranked ethical alignment among their top concerns, despite the broader unease the survey recorded about problematic AI outputs.
The report also suggested some AI strategies are being shaped as much by image as by internal readiness. Three-quarters of executives said their company's AI strategy was driven more by public signalling than by practical internal direction.
That points to pressure on senior management to show progress on AI even when policies, oversight structures and employee safeguards remain incomplete. It also helps explain why some organisations may be seeing a rise in so-called shadow AI, where staff use tools outside approved channels to meet performance demands.
The commercial and personal stakes appear high. Six in ten leaders said an agent-driven error causing serious damage would cost a senior executive their job. Respondents most commonly identified the chief executive officer, chief information officer or chief technology officer as most likely to be affected.
The survey comes as companies face growing scrutiny over how they deploy generative AI and autonomous systems in customer service, internal operations and knowledge work. While many employers are pressing ahead with rollout, the data suggests governance has not kept pace with the technology's spread across large organisations.
The findings indicate that oversight of AI use now extends beyond procurement and policy into day-to-day workforce behaviour, internal reporting culture and companies' ability to intervene when automated systems go wrong.
Two numbers capture the tension most clearly: 67% of C-suite leaders believe their organisation has already suffered an AI-related data leak or breach through unapproved tools, and 35% of executives said they were not confident they could pull the plug on an autonomous agent causing financial or reputational damage.