Security fears keep half of agentic AI stuck in pilots
Dynatrace has published a global survey that finds around half of agentic AI projects remain in proof-of-concept or pilot stages, with security and governance concerns limiting wider deployment.
The study, titled The Pulse of Agentic AI 2026, draws on responses from 919 senior leaders involved in agentic AI development and implementation at large enterprises with annual revenues of USD $100 million or more. It reports continued investment plans alongside a cautious approach to autonomy in production environments.
According to the findings, enterprises do not cite scepticism about AI as the main cause of delays. They cite constraints around governance, validation and safe scaling of autonomous systems.
Stalled projects
The survey found approximately 50% of agentic AI projects remain in proof-of-concept or pilot stages. It also reports that adoption is "still early but growing rapidly", with 26% of organisations saying they have 11 or more agentic AI projects.
Respondents expect further spending increases. Dynatrace reported that 74% expect AI budgets to rise again next year, and that 48% anticipate budget increases of at least USD $2 million.
Leaders reported a focus on operational use cases. AI agents appear most commonly in IT operations and DevOps, where 72% of respondents said they deploy them. The next most common areas were software engineering at 56% and customer support at 51%.
Barriers cited
The survey points to risk controls and technical scale as the biggest blockers. Security, privacy or compliance concerns topped the list at 52%. Technical challenges in managing and monitoring agents at scale followed at 51%.
Skills also featured as a constraint. A shortage of skilled staff or training ranked next at 44%.
Dynatrace also asked about validation practices. The top methods included data quality checks at 50% and human review of agent outputs at 47%. Monitoring for drift or anomalies came in at 41%.
The report adds that 44% still use manual methods to review communication flows among AI agents. It presents this as a sign of gaps in oversight mechanisms as deployments expand.
Human oversight
The research suggests most organisations keep people in the loop when agentic AI makes decisions. It reports that 69% of agentic AI-powered decisions are still verified by humans, and that 87% of organisations are actively building or deploying agents that require human supervision.
Only 13% of organisations say they use fully autonomous agents. A further 23% say they rely exclusively on human-supervised agents. Dynatrace reports that 64% deploy a mix of autonomous and human-supervised agents.
The survey also describes expected collaboration splits between humans and AI. Leaders expect a 50/50 human-AI split for IT and routine customer-support applications, and a 60/40 split for business applications.
"Organisations are not slowing adoption because they question the value of AI, but because scaling autonomous systems safely requires confidence that those systems will behave reliably and as intended in real-world conditions," said Alois Reitbauer, Chief Technology Strategist, Dynatrace.
Dynatrace positions observability as part of the approach for controlling agentic AI in production. Observability tools track the behaviour and performance of software systems; Dynatrace sells such a platform and markets it for use with AI-driven systems.
The report says observability already appears across the agentic AI lifecycle. It found the highest adoption during implementation at 69%. Operationalisation followed at 57%, with development at 54%.
Dynatrace also reports a split in how organisations use agentic AI: 50% use it for both internal and external use cases, 33% for internal purposes only, and 18% for external purposes only.
On production maturity, the report states that 50% have agentic AI projects in production for limited use cases, 44% have projects in broad adoption across select departments, and 23% have projects in mature, enterprise-wide integration.
"Observability is a vital component of a successful agentic AI strategy," said Reitbauer. "The Dynatrace AI Centre of Excellence (AI CoE) works with many of our largest customers, and as organizations push toward greater autonomy, they need real-time visibility into how AI agents behave, interact, and make decisions. Observability not only helps teams understand performance and outcomes, but it provides the transparency and confidence required to scale agentic AI responsibly and with appropriate oversight."