AI risk outpaces oversight as BSI warns of governance gaps for firms
Research by the British Standards Institution (BSI) has highlighted significant gaps in how businesses are governing the implementation of artificial intelligence, warning that many UK and global organisations are inadequately prepared for the risks associated with AI deployment.
The BSI study, which involved the analysis of over 100 multinational annual reports and surveys of 850 senior leaders across several countries, found a mismatch between growing investment in AI and the adoption of robust oversight processes. According to the findings, the pursuit of productivity and cost savings is outstripping the development of controls and governance, exposing businesses to potential risks, liabilities and future costs.
AI governance gap
The report revealed that 62% of senior executives expect to increase investment in AI over the coming year. Of those, 61% cited productivity and efficiency gains, while 49% identified cost reduction as a key driver. Moreover, 59% now see AI as crucial to the long-term growth of their organisations.
Yet, less than a quarter (24%) of businesses globally stated they have an AI governance programme in place. This figure rises modestly to 34% among large enterprises, defined as having more than 250 employees. Nearly half (47%) said their use of AI is governed by formal processes, up from 15% earlier this year, while one-third (34%) use voluntary codes of practice.
Despite these figures, just 24% monitor employee use of AI tools and only 30% have formal processes to assess AI risks and mitigation measures. Only 22% restrict staff from using unauthorised AI, suggesting limited safeguards overall.
UK leads in mentioning governance
The BSI's analysis of corporate annual reports noted a marked difference between countries in their approach to AI governance. Governance and regulation were referenced 80% more frequently in UK reports than in those from India, and 73% more frequently than in reports from China, pointing to growing international divergence in the prioritisation of oversight.
Despite this, only 29% of UK businesses reported having an AI governance programme, which falls to just 14% among small businesses. BSI warned that smaller firms may face significant challenges if forced to address risks reactively.
Data and risk concerns
Awareness of the data used to train AI systems is also declining. Globally, only 28% of leaders said they knew what data sources were used in their AI tools, a decrease from 35% earlier in the year. Furthermore, just 40% of businesses confirmed having clear processes for using confidential data in AI training.
"The business community is steadily building up its understanding of the enormous potential of AI, but the governance gap is concerning and must be addressed. While it can be a force for good, AI will not be a panacea for sluggish growth, low productivity and high costs without strategic oversight and clear guardrails - and indeed without this being in place, new risks to businesses could emerge. Divergence in approaches between organizations and markets creates real risks of harmful applications. Overconfidence, coupled with fragmented and inconsistent governance approaches, risks leaving many organizations vulnerable to avoidable failures and reputational damage. It's imperative that businesses move beyond reactive compliance to proactive, comprehensive AI governance," said Susan Taylor Martin, CEO of BSI.
Risk and security management
Almost one-third (32%) of executives noted that AI has introduced risks or weaknesses in their businesses. Only a third (33%) of companies follow a standardised process when incorporating new AI tools. The proportion of organisations embedding AI-related risks within broader compliance commitments has fallen to 49%, down from 60% six months ago, and only 30% conduct formal risk assessments for AI vulnerabilities.
Sectors also varied in their focus. Annual reports from financial services organisations paid the greatest attention to AI-related risk and security, with particular emphasis on cybersecurity, likely reflecting the sector's consumer protection and reputational priorities. By contrast, technology and transport firms placed less emphasis on these risks, reflecting sectoral differences in management approaches.
Low incident response and process duplication
The survey identified limited preparedness for addressing errors or failures involving AI systems. Just 32% of businesses have processes for recording AI-related issues or inaccuracies, and only 29% have frameworks for managing AI-related incidents and ensuring timely responses.
About 18% stated that their business operations would not continue if generative AI tools were unavailable, indicating a notable dependency. Additionally, 43% of leaders acknowledged that AI investment has diverted resources from other initiatives, while only 29% had processes to avoid duplicating AI services across departments.
Emphasis on automation over training
The study found the term "automation" appeared nearly seven times more often than words related to upskilling or training in annual reports. Over half of leaders expressed confidence in their workforce's ability to use AI effectively, with 56% believing entry-level staff have the necessary skills and 57% confident in their organisation as a whole.
Just over half (55%) of respondents said they could provide training in using generative AI critically, strategically and analytically. One-third (34%) have specific learning and development programmes to support AI training, while 64% stated they had already received training to use or manage AI safely. The data suggests that training is often reactive, triggered by fear or uncertainty about AI's potential disruption, rather than part of a proactive strategy.
BSI suggested that businesses move beyond compliance and instead embed comprehensive governance structures to manage AI responsibly. Failure to do so, it warned, could expose organisations to a range of legal, operational and reputational challenges.