SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers
Brad Murdoch

How to cross the chasm from AI hype to reality in customer experience

Tue, 24th Feb 2026

You've seen the bold AI bets and failures in the headlines, and the conflicting research: Gartner predicts agentic AI will autonomously resolve 80% of common customer service issues without human intervention by 2029, while MIT reports that 95% of generative AI pilot projects are failing.

What we're seeing play out is AI hype vs. the complex reality. The impact of AI on customer experience (CX) and support operations will be transformative, but it's going to take time, strategic planning, and a sober assessment of risk.

A lot of vendors are promising autonomous agents and instant ROI, but the truth is that most enterprises are taking a measured, gradual approach. We know this from first-hand experience, not just customer conversations. At Deskpro, we've seen success integrating AI, such as our engineering team improving productivity with AI coding assistants and our own support team seeing their jobs become easier with AI integrated into their workstreams. But we've also experienced failures, including a four-month project to replace certain human roles, which we ultimately terminated and wrote off.

AI is not a panacea and shouldn't be adopted just due to hype. Successful integration of AI into business processes, especially in customer service, requires realistic expectations, careful planning, and an understanding that meaningful results take time. Meaningful ROI from increased productivity and reduced costs is absolutely achievable, but getting there is a journey, not just something that you can turn on overnight.

Let's dive into the current, varying levels of AI maturity we're seeing in the market, the major barriers to widespread AI adoption, and how organizations can move forward in their AI journeys. 

The customer service AI maturity spectrum

Understanding your organization's position on the AI maturity curve is important. Whether you're in North America managing rapid digital transformation, a European organization navigating GDPR compliance, or an Asia-Pacific company scaling AI capabilities, this framework applies universally. However, the pace of advancement varies by geography, industry, and regulatory environment. What matters is knowing where you are and having a clear, realistic path forward.

Here's a breakdown of the five stages of a simple CX AI Maturity Spectrum:

  • Level 1 – Reactive: No formal AI integration. Shadow AI usage exists as employees use tools like ChatGPT, sometimes without authorization, to assist with ticket summarization or drafting customer replies.

  • Level 2 – Organized: Basic, managed AI tools such as chatbots, agent assist features, and translation. This is where most organizations sit today.

  • Level 3 – Strategic: Systematic AI integration into workflows using techniques such as sentiment analysis and intent detection, and improving the accuracy and timeliness of AI responses by indexing all relevant public-facing data. Governance is established; compliance is simplified.

  • Level 4 – Intelligent: Comprehensive AI integration, with AI agents leveraging enterprise systems and data to complete tasks autonomously. For certain use cases, full conversational AI with integrated routing.

  • Level 5 – Autonomous: Think Skynet! Fully autonomous systems handling complex interactions and strategic decisions. Agent-to-agent communication minimizes the need for a "human in the loop."

Most organizations are experimenting at Levels 2 and 3, but they're not necessarily successfully adopting these capabilities into production. According to our State of AI in Support Operations: Balancing Innovation and Compliance report, more than half of organizations are still in pilot or trial phases; they haven't moved to production yet. Organizations have the basic tools, like chatbots and agent assistance, but without addressing organizational readiness first, they watch pilots stall. The gap between having AI tools and actually using them effectively is where most companies get stuck.

Organizational readiness and security are the main barriers to AI adoption

When presenting at a recent conference, I asked the audience what held their organizations back from moving up the AI maturity spectrum. The top answers were consistently security, compliance, and organizational readiness. Similarly, our report found AI security to be the linchpin of AI adoption. Eighty-one percent of respondents rate security as "critical" or "very important" when adopting AI-enabled support tools, and the requirement for secure AI is set to intensify: 74% of respondents stated they expect to increase their focus on AI security over the next two years. What does this actually mean for safely and securely adopting AI-enabled support solutions?

Lack of organizational readiness is the silent killer of AI adoption

Organizational readiness is an often-overlooked factor in AI adoption across enterprises. Employees worry AI will replace them: 61% of workers believe AI could replace their current role within the next three to five years.

This fear doesn't disappear because you deploy new technology. It grows, especially if employees aren't trained to use AI or don't understand why it's being introduced. Unsuccessful AI implementations typically struggle not because the technology is bad, but because the organization didn't prepare its people. When you don't communicate the purpose of AI, when you don't train people to use it effectively, and when you don't acknowledge the legitimate concerns people have, adoption stalls. Worse, people find workarounds that actually increase security risk, making shadow AI a real problem.

Your organization's AI journey requires training and buy-in, but the payoff is immense. Once human agents are introduced to AI assistance that genuinely helps them with repetitive tasks, they absolutely love it and never want to go back. We saw this internally at Deskpro: support teams that were skeptical became advocates. A large measure of success depends on the front-end work: communication, training, and honest conversations with your team. Get this right and adoption will accelerate naturally.

AI security: Compliance and deployment across markets

AI security considerations vary significantly across geographies. North American organizations tend to prioritize SOC 2 compliance, along with frameworks like HIPAA for healthcare and PCI DSS for handling credit card data. European organizations must navigate GDPR and increasingly stringent data protection regulations, and are now grappling with the EU AI Act. Organizations in many European, Middle Eastern, and Asia-Pacific countries face sovereign data restrictions, and multinational enterprises must navigate these requirements simultaneously. Yet the core security challenge remains universal: how do you leverage AI's power while protecting your organization's and customers' data?

When contemplating AI adoption, security has moved beyond being just a platform feature. It now carries veto power in the technology procurement process. According to Deskpro's research, 78% of organizations involve their IT or security teams in the final tech purchasing decision. The biggest risk when moving from Level 3 (Strategic) to Level 4 (Intelligent) on the AI maturity spectrum is exposing the organization's sensitive data to the large language models (LLMs) powering the AI agents, which are vulnerable to many cybersecurity threats, such as prompt injection attacks.
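
One practical mitigation for this exposure risk is to redact sensitive fields before ticket text ever reaches an external model. The sketch below is purely illustrative and not taken from any particular product; the regex patterns are deliberately minimal, and production-grade redaction would need far broader coverage.

```python
import re

# Illustrative patterns only; real-world redaction needs much broader coverage
# (names, addresses, national IDs, etc.) and ideally a dedicated PII service.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\+?\d[\d -]{8,}\d"),
}

def redact(text: str) -> str:
    """Replace likely PII with typed placeholders before text is sent to an LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

ticket = ("Customer jane.doe@example.com called from +44 20 7946 0958 "
          "about card 4111 1111 1111 1111.")
print(redact(ticket))
```

Running the agent assist or summarization prompt over the redacted text keeps the conversation useful while ensuring the raw identifiers never leave your environment.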

Most AI-powered SaaS solutions process data over public cloud infrastructure. When data flows from one service (like a help desk) to another (like an AI model hosted in the cloud), it is typically encrypted in transit but is decrypted and processed on infrastructure outside your control, unless specific security architecture is in place. For most organizations, this is acceptable, because hyperscaler security is excellent. But for regulated industries, organizations handling sensitive data, or those with data sovereignty requirements, this architecture introduces risk. For these organizations, alternative deployment models exist via Virtual Private Clouds and sovereign clouds, where your help desk and corporate data stay within your own controlled network and can communicate securely with AI foundation models through authenticated channels.

If this deployment option still does not meet your strict security and compliance requirements, the only remaining option is to deploy everything into your own environment. Many larger enterprises are building their own LLMs or leveraging commercial model providers to ensure secure AI capabilities. Being able to use this investment to power your support and service is the holy grail for providing the highest-value customer experience.

Understanding which security model your organization actually needs, before you select a vendor, is important. This decision should account for your geographic location, regulatory requirements, and industry standards. 
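
The decision laid out above can be summarized as a rough rule of thumb. The function below is an illustrative sketch, not a formal framework; the parameter names and categories are my own shorthand for the constraints discussed in this section.

```python
def recommend_deployment(
    regulated: bool,
    data_sovereignty: bool,
    handles_sensitive_data: bool,
    vpc_meets_requirements: bool,
) -> str:
    """Rough rule of thumb for choosing among the three deployment models.

    Illustrative only: a real decision also involves legal review, vendor
    due diligence, and regulator-specific requirements.
    """
    if not (regulated or data_sovereignty or handles_sensitive_data):
        # Hyperscaler security is generally sufficient here.
        return "public cloud SaaS"
    if vpc_meets_requirements:
        # Help desk and corporate data stay inside a controlled network,
        # reaching foundation models over authenticated private channels.
        return "private cloud (VPC or sovereign cloud)"
    # Strictest case: everything, including the model, stays in-house.
    return "self-hosted models in your own environment"
```

A regulated bank with sovereign-data obligations that a VPC cannot satisfy lands on self-hosting; most other organizations land on public cloud SaaS or a private cloud.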

Advancing AI maturity requires organizations to address both technical and architectural security issues, as well as compliance with regulations and internal controls. Choosing the right deployment and security strategies, and understanding the unique risks introduced by AI, is essential before proceeding to higher levels of AI adoption.

How to implement AI safely

The pace and approach to AI adoption should reflect your organization's starting point, market dynamics, and regulatory environment. It requires patience and a phased approach: you can get ready and start implementing basic features (Level 2: Organized / Level 3: Strategic) in about three to six months. However, the move to Level 4 – truly intelligent AI with omnichannel, multi-agent, enterprise data integration – is a strategic project that can, and probably should, take multiple years.

Believe it or not, security and compliance teams are there to help! Choosing a path forward now that supports both current and anticipated data protection needs can not only prevent security and compliance breaches, it can also prevent costly reworks of your CX infrastructure in the future.

Organizations should start by focusing on incremental improvements and practical applications for AI, while prioritizing security, compliance, and customer outcomes over marketing hype. Assess your organization's AI maturity, identify internal and external barriers, and recognize that adopting AI is a business transformation (much like all those "digital transformation" projects) that requires leadership support, proper training, and robust data and security practices.

Despite what the market is saying, you're likely not behind your colleagues across the industry! While AI has amazing potential, full production implementations for CX are nowhere near as commonplace today as the hype might lead you to believe.