Fewer than half of firms ready for secure AI, Delinea study finds
A new report from Delinea reveals that only 44% of organisations consider their security architecture fully equipped to support secure artificial intelligence (AI), despite high levels of confidence in current security measures.
The research, titled "AI in Identity Security Demands a New Playbook," is based on a global survey of over 1,700 IT decision-makers and sheds light on significant gaps between organisational confidence and actual capability in securing AI systems and machine identities. While 93% of respondents expressed confidence that their machine identity security is keeping pace with threats such as AI manipulation, only 61% reported having full visibility into all machine identities. Furthermore, just 48% had identity governance in place for AI entities.
AI adoption and exposure
According to the report, AI deployment is widespread: 94% of companies globally are currently using or piloting some form of AI in IT operations. More than half utilise generative or agentic AI, with 40% specifically employing agentic AI to enhance their security operations.
The research highlighted the particular vulnerability of agentic AI, which refers to autonomous AI agents capable of making independent decisions. With 66% of organisations now using agentic AI, limited oversight could expose them to risks such as unchecked decision-making and the potential for compromise by external threat actors.
Shadow AI and governance challenges
Unapproved AI tool usage – referred to as shadow AI – is a routine issue. The report found that 56% of organisations experience shadow AI at least once a month, and for a third of respondents, these incidents occur multiple times per month. This prevalence points to a persistent challenge in ensuring that only sanctioned AI tools are used across IT operations.
Despite widespread AI adoption, robust governance remains uncommon. Just 57% of organisations reported having acceptable use policies for AI tools, and only 55% stated that access controls for AI agents were in place. The lack of clear oversight and policy leaves many organisations vulnerable to unauthorised use and mismanagement of AI technologies.
Security concerns and policy effectiveness
Organisations surveyed identified several key AI identity security concerns, including AI-generated phishing and deepfakes, AI-driven credential theft, agentic AI systems operating with unchecked access, unsanctioned use of AI tools, and insufficient visibility into AI access workflows.
While 89% have implemented policies intended to restrict or monitor AI tool access to sensitive data, only 52% rated those controls as comprehensive. This figure fell to just 30% among companies with fewer than 50 employees. The findings suggest that while monitoring is widespread, policy implementation and enforcement are inconsistent, particularly among smaller enterprises.
"Agentic AI demands agentic security. Organisations must rethink how they approach identity, building adaptive, risk-aware systems that verify and secure every action, whether it's human- or machine-driven. AI agents, in particular, require more granular and dynamic identity access controls than the traditional role-based approaches. More broadly, every organization must build out a comprehensive AI governance model to ensure that it's being used securely and as intended," said Art Gilliland, CEO of Delinea.
The report's authors note that the gap between perceived security capability and actual practice is exposing organisations to the risk of AI-related security threats. The limited adoption of comprehensive AI governance frameworks and visibility into machine identities suggests that many companies could be overestimating their ability to address the risks posed by increased AI deployment.
Looking ahead, nearly half of the organisations surveyed expect their security roadmaps to strengthen the security of AI deployments within the next two years. The pace of change in the threat landscape, however, underlines the urgency with which organisations need to implement these improvements.