SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers
Editorial cybersecurity ops ai threat monitoring overreach risk gov
Mon, 23rd Mar 2026

OpenText has published a global study on AI use in cybersecurity, finding that many organisations are deploying generative AI before putting governance and security measures in place.

Conducted with the Ponemon Institute, the report surveyed 1,878 IT and IT security practitioners across North America, Asia-Pacific, Europe, the Middle East, Africa and Latin America. It found that 52% of enterprises have fully or partly deployed generative AI, but only one in five reported what the study described as AI maturity in cybersecurity.

The study defined that level of maturity as fully deploying AI in cybersecurity activities while also assessing security risks. By that measure, 79% of organisations have yet to reach full maturity.

Governance gaps

The findings highlight weaknesses in the policies and controls surrounding AI systems. Only 43% of respondents said their organisation had adopted a risk-based strategy to govern AI systems and address threats such as bias, security issues and ethical concerns.

Data privacy was another weak point. Just 41% said their organisation had AI-specific data privacy policies, while 59% said AI made it harder to comply with privacy and security regulations.

The study also pointed to concern over how AI systems respond to user inputs. Some 58% of respondents said prompt or input risks, including misleading, inaccurate or harmful responses, were very difficult or extremely difficult to reduce. Another 56% reported difficulty managing user risks, including the unintended spread of misinformation.

Bias was also a concern. Nearly two-thirds of respondents, or 62%, said it was very difficult or extremely difficult to minimise model and bias risks, including unfair or discriminatory outputs.

Mixed results

The survey suggests the operational gains often associated with AI in security are not yet widely established. Just 51% of respondents said AI was effective in reducing the time needed to detect anomalies or emerging threats.

Fewer than half, or 48%, said AI was effective in threat detection and in uncovering deeper insights while reducing manual workload. The results suggest many organisations are still struggling to turn AI deployment into consistent improvements in security operations.

Reliability problems appear to be one reason. The report found that 45% of respondents cited errors in AI decision rules as a leading barrier to effectiveness, while 40% pointed to errors in data inputs used by AI systems.

Trust and explainability also remain obstacles. Fewer than half of organisations, 47%, said their AI models could learn robust norms and make safe decisions autonomously.

That lack of confidence is keeping people in the loop. More than half of respondents, or 51%, said human oversight is needed in AI governance because attackers can adapt quickly.

"AI maturity isn't just about adopting AI tools-it's about doing it responsibly," said Muhi Majzoub, EVP, Product & Engineering, OpenText.

"Security and governance are foundational to getting real value from AI. When they're built into AI systems from the start, organizations can operate with greater transparency, monitor systems continuously, and trust the outcomes AI delivers."

Broad sample

The survey drew responses from executives, decision-makers and practitioners in IT security, engineering, infrastructure, risk and compliance, and related roles involved in AI and security strategy. Participating organisations came from sectors including financial services, healthcare, technology, energy and manufacturing.

The breadth of the sample suggests these issues are not confined to a single region or industry. Instead, the findings point to a broad pattern of companies moving ahead with AI in security while still working through the practical and regulatory implications.

For businesses, the figures highlight a gap between adoption and control. Companies are introducing AI into critical operations, but many do not yet appear to have the privacy rules, governance frameworks or confidence in outputs needed to rely on those systems with less human intervention.

Majzoub said organisations that want to advance further with AI need clearer controls from an early stage. "The leaders in this next phase of AI adoption will be those who build transparency and control into AI from the start," he said. "As AI becomes embedded in day-to-day operations, organizations need secure information management as the foundation; clear governance frameworks, policy-based controls, and continuous monitoring that ensure AI systems remain trustworthy and compliant. Just as important is aligning AI with the right data, security practices, and oversight from the outset so innovation can scale responsibly and deliver measurable business value."