SecurityBrief UK

European firms lack AI shutdown plans, survey finds

Tue, 24th Mar 2026

ISACA has published European survey findings showing that many organisations do not know how quickly they could halt an AI system in a crisis, pointing to gaps in AI oversight.

The survey of 681 digital trust professionals in Europe found that 59% did not know how quickly their organisation could stop an AI system during a security incident. Only 21% said they could do so within half an hour.

The findings suggest many businesses have introduced AI into day-to-day operations without clear processes for shutting systems down when something goes wrong. As AI becomes more deeply embedded in core business functions, this raises questions about operational resilience and organisations' ability to respond under pressure.
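The survey does not prescribe a particular mechanism, but one common pattern for fast shutdown is a centrally controlled kill switch that every AI call path checks before executing, so incident responders can halt activity with a single action. The sketch below is only a minimal illustration of that idea in Python; AIKillSwitch, call_model and generate_summary are hypothetical names introduced for this example, not part of any framework cited in the research.

```python
import threading


class AIKillSwitch:
    """Central flag an incident responder can flip to halt AI activity quickly."""

    def __init__(self):
        self._enabled = True
        self._lock = threading.Lock()

    def disable(self, reason: str) -> None:
        # Called during incident response; every gated call path sees the change.
        with self._lock:
            self._enabled = False
            print(f"AI calls disabled: {reason}")

    def is_enabled(self) -> bool:
        with self._lock:
            return self._enabled


kill_switch = AIKillSwitch()


def call_model(prompt: str) -> str:
    # Stand-in for the real model client in this sketch.
    return f"summary of: {prompt}"


def generate_summary(prompt: str) -> str:
    # Gate every AI invocation behind the switch so shutdown is one decision away.
    if not kill_switch.is_enabled():
        raise RuntimeError("AI system halted by incident response")
    return call_model(prompt)
```

In practice the flag would live in shared configuration or a feature-flag service rather than process memory, but the principle is the same: the time to stop an AI system is bounded by how quickly one control can be flipped, not by how many integrations have to be found and switched off individually.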

Control gaps

The responses point to a mismatch between the spread of AI tools and the governance structures needed to manage them. The problem extends beyond the ability to stop systems quickly.

Fewer than half of respondents (42%) said they were confident their organisation could investigate and explain a serious AI incident to leadership or regulators. Only 11% were completely confident in their ability to do so.

This matters as AI accountability rules tighten in Europe. The EU AI Act requires organisations to demonstrate explainability and accountability, meaning businesses may need not only technical controls but also records, reporting lines and staff who can interpret how systems behaved.

Accountability questions

The poll also found uncertainty over who is responsible when AI causes harm. One in five respondents said they did not know who would be ultimately accountable in that situation, while only 38% identified the board or an executive.

Disclosure within organisations was another weak point. A third of respondents said their employer did not require staff to disclose when AI had been used in work products, limiting visibility into where AI is being deployed and how heavily it shapes decisions or outputs.

Some respondents did report human oversight of AI activity. Forty per cent said people approve most AI-generated actions before execution, while 26% said decisions are reviewed after the event.

Even so, human review alone may not be enough if organisations lack broader governance processes. Without clear audit trails, incident plans and accountability structures, businesses may struggle to spot failures early or explain them afterwards.
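To illustrate how pre-execution approval and an audit trail can be combined, in the spirit of the oversight practices respondents described, the following Python sketch routes an AI-proposed action through a human reviewer and records each decision. It is an assumption-laden example rather than ISACA guidance: the review_and_log function, the JSON-lines log format and the file path are all invented for illustration.

```python
import json
from datetime import datetime, timezone
from typing import Callable


def review_and_log(action: dict,
                   approve: Callable[[dict], bool],
                   audit_log_path: str = "ai_audit.log") -> bool:
    """Ask a human reviewer to approve an AI-proposed action and record the outcome."""
    approved = approve(action)  # pre-execution human decision
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "approved": approved,
    }
    # Append-only record that can later support incident investigation and reporting.
    with open(audit_log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
    return approved


# Example: execute the action only if the reviewer approves it.
proposed = {"type": "send_email", "recipient": "customer@example.com"}
if review_and_log(proposed, approve=lambda a: input(f"Approve {a}? [y/N] ").lower() == "y"):
    print("Action executed")
else:
    print("Action blocked; decision logged for review")
```

The value of a structure like this is less in the code than in what it leaves behind: a timestamped trail of what the AI proposed, who approved it and when, which is exactly the kind of record an organisation would need to explain an incident to leadership or regulators.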

Wider risk

The results come as AI use spreads across sectors including finance, retail, healthcare, logistics and professional services. In many companies, AI systems now influence customer interactions, internal workflows and decision-making, increasing the potential impact of errors or misuse.

The research suggests some organisations still treat AI risk mainly as a technical issue rather than a broader governance challenge. That distinction is likely to become more important as boards face closer scrutiny over how automated systems are used and controlled.

Chris Dimitriadis, Chief Global Strategy Officer at ISACA, said: "What this research reflects is that our thirst to innovate is not matched by our desire to govern change, exposing us to critical risks. The tools to govern AI responsibly already exist. Risk management, prevention controls, detection mechanisms, incident response and recovery strategies are the foundations of good cybersecurity practice, and they need to be applied to AI with the same rigour and urgency.

"The gap between deployment and governance is not closing; it is growing. Organisations need to act quickly. That starts by establishing who is accountable, building incident response capability, and creating visibility over AI use through audit to foster a culture of meaningful oversight.

"But truly closing the gap can't be done by process changes alone. Rather, it will require professionals who have the expertise to evaluate AI risk rigorously, embed oversight across the full lifecycle, and translate that into decisions that stand up to board and regulatory scrutiny. The organisations that get this right are those that focus on customer and overall stakeholder trust and those that will lead through sustainable innovation."