SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers

UK hospitality operators worry over AI data security

Wed, 22nd Apr 2026

More than half of hospitality businesses in the UK and Ireland are concerned about data security and privacy when using artificial intelligence tools, according to a new study from Access Hospitality. The research found 51% of operators in the region hold those concerns.

The findings highlight a gap between adoption and confidence as operators bring AI into day-to-day operations. Some 28% of hospitality operators in the UK and Ireland already use AI across multiple departments, while a further 20% are exploring how the technology could be used in their businesses.

That uptake is matched by caution over how commercial information is handled. The study found 45% of operators in the UK and Ireland are concerned about sharing important company data with AI tools, 38% are worried about data protection regulations, and 31% cite limited understanding of AI tools as a concern.

Smaller operators appear more wary than larger groups. Businesses with fewer than 25 venues were the most hesitant, with more than half worried about sharing data with AI tools, compared with 33% of larger venue operators.

The figures come from a survey of 1,000 businesses and 8,000 consumers across six international markets examining attitudes to privacy and secure AI use in hospitality. In the UK, consumer concerns also centred on personal data and surveillance.

Some 28.42% of UK consumers said they feared their personal information or habits could be misused through AI, while 20.28% said they were concerned about feeling constantly monitored or surveilled. Overall sentiment was mixed, with 41% of UK consumers worried about increased AI use in hospitality and 37% excited.

Operator concerns

The study suggests concern levels in the UK and Ireland are broadly in line with several other markets, though not the highest. Australia recorded the highest proportion of operators worried about data security and privacy at 57%, followed by Indonesia at 53%, while the US stood at 49%.

On regulation, concern was highest in Indonesia at 53%, followed by Germany at 50%, while the UK and Ireland recorded 38%. The findings suggest worries about privacy, regulation and internal understanding vary by market, even as AI adoption becomes more common across the sector.

Across hospitality segments, concerns about sharing company data with AI tools were similar. Hotels recorded concern at 46%, with food and beverage businesses close behind at 45%.

The findings point to a need for clearer internal controls as AI moves into more parts of the business. Advice from executives at parent company The Access Group focused on governance, staff training and software choice.

Internal controls

Champa Magesh, managing director of Access Hospitality, said businesses should formalise how employees use AI systems and what information can be entered into them.

"Create a formal AI policy that outlines what data can and cannot be entered into AI systems, which tools are approved and who is responsible for oversight.

"Make sure this is clearly communicated to all staff members and confirm they understand what the policy means," Magesh said.

Connor Whelan, chief information officer at The Access Group, said policy needs to be backed by technical restrictions.

"Most data breaches aren't the result of sophisticated attacks - they come from everyday gaps that any organisation can fall prey to. When it comes to AI, having a clear policy is important, but it has to be backed by the right technical controls. People need to know what they can and can't put into an AI system, and the technology should enforce those boundaries, not just rely on good intentions. That combination - policy, controls and regular reassessment - is how businesses meaningfully reduce risk," Whelan said.

Magesh also said employers should train staff on what constitutes sensitive business information and the risks of entering such material into AI systems. That reflects the study's finding that almost a third of operators are concerned about their own limited understanding of the technology.

Platform choice

The group also warned against entering sensitive information into public AI tools where data storage and reuse may be unclear. Operators should instead favour systems that keep information under the business's control.

"Avoid entering sensitive information into publicly available AI tools that do not guarantee how information will be used and stored. Systems like OpenAI pose serious risks to confidentiality and data security when used by individuals who do not know how these systems process the information.

"Choose secure platforms that protect data and ensure it remains secure and under your control," Magesh said.

Diego Baldini, chief information security officer at The Access Group, said: "The moment you enter sensitive business data into a publicly available AI tool, you lose control of it. That's not a risk worth taking. The businesses getting AI right are choosing platforms built with security at the core, where data is ringfenced, stays under their ownership, and isn't being used to train models they have no visibility into. That's the standard operators should be demanding."