SecurityBrief UK - Technology news for CISOs & cybersecurity decision-makers

AI reshapes data privacy as mistrust & attacks escalate

Thu, 29th Jan 2026

Marking Data Privacy Day, security leaders are warning that artificial intelligence is reshaping data privacy risks as businesses face rising consumer mistrust and increasingly automated cyber attacks.

Experts from identity specialist Ping Identity and cyber security firm Tenable said traditional approaches to privacy, compliance and access control no longer match the pace of change in AI-driven threats or the growth of machine identities inside corporate systems.

Consumer trust

Patrick Harding, Chief Product Architect at Ping Identity, said privacy concerns have become central to how customers assess organisations and their digital services.

"This week offers an opportunity to pause and assess the rapidly evolving landscape of digital trust, as privacy really boils down to choice and trust around how personal data is being used. Data privacy is no longer a passing concern for consumers - it has become a defining factor in how they judge brands, with three-quarters now more worried about the safety of their personal data than they were five years ago, and a mere 14% trusting major organisations to handle identity data responsibly," said Harding.

Harding said organisations face a growing expectation that they explain how they collect and process personal data and that they adopt stronger verification around digital interactions.

AI-driven threats

Both commentators highlighted the role of AI in expanding the scale and sophistication of attacks against personal and corporate data.

Harding pointed to social engineering and account takeover as areas where AI is already challenging existing security controls.

"Whether it's social engineering, state-sponsored impersonation or account takeover risks, AI will continue to test what we know to be true. As threats advance and AI agents increasingly act on behalf of humans, only the continuously verified should be trusted as authentic. For businesses, the path forward is clear: trust must be earned through transparency, verification, and restraint in how personal data is collected and used. The businesses that adopt a 'verify everything' approach that puts privacy at the centre and builds confidence across every identity, every interaction and every decision, will have the competitive edge," said Harding.

Bernard Montel, EMEA Field CTO at Tenable, said AI is now a core feature of both offensive and defensive cyber activity.

Machine behaviour

Montel said organisations increasingly run AI systems and agents that act inside sensitive environments. He said this changes the focus of security teams.

"This Data Privacy Day, protecting personal data is about more than compliance; it's about defending freedom and privacy. As scams and extortion exploit exposed information, data leaks are causing real-world harm.

"With cybercriminals weaponising AI, attacks are becoming faster, smarter and harder to detect. At the same time, companies are adopting agentic AI, introducing a new risk: digital identities acting independently within sensitive systems. Effective governance now demands visibility into machine behaviour, not just human access.

"To combat these emerging challenges, businesses must invest in identity governance. Compliance should also be the baseline, with prevention and resilience built in from day one," said Bernard Montel, EMEA Field CTO, Tenable.

Both experts framed visibility into AI behaviour and machine identities as a requirement alongside existing controls over human users.