
Ofcom probes X over Grok deepfake sexual imagery risk

Fri, 16th Jan 2026

UK regulator Ofcom has launched an investigation into social media platform X after complaints over the creation and circulation of deepfake sexual images generated by its Grok artificial intelligence tool, as pressure builds on technology firms over online safety and generative AI.

The move comes as X said its Grok image tool would stop "undressing" images of real people in jurisdictions where this is illegal, following criticism from the UK government and campaigners. The case is emerging as an early test of the UK's Online Safety Act and of how far regulators will push platforms on AI-driven harms.

Regulatory scrutiny

Ofcom, which gained new oversight powers through the Online Safety Act, has begun examining whether X has appropriate systems and processes in place to address the use of Grok for non-consensual sexual imagery and other potential harms. The investigation is expected to focus on how the platform assesses risk, moderates AI-generated content and responds to user reports.

The Grok tool, which X has positioned as part of its broader AI development, has been criticised after reports that users could generate sexually explicit deepfakes of real individuals. Legal and online safety specialists say the case is likely to become one of the first high-profile tests of Ofcom's enforcement approach under the new regime.

"Ofcom's announcement of an investigation into X is a welcome step. It is a real opportunity for Ofcom to show that it has teeth under the Online Safety regime and that social media platforms cannot hide when serious harms are alleged.

"Attempts to dress up Grok's publication of non-consensual sexual images as 'freedom of speech' are deeply misplaced. Free speech is not absolute and should not be used to defend sexual exploitation.The spotlight is now on Ofcom to demonstrate action in real terms," said Jamie Hurworth, Dispute Resolution Senior Associate and Online Safety Act expert, Payne Hicks Beach.

AI tools under pressure

X's announcement that Grok will block the "undressing" of real people in countries where this is against the law highlights the jurisdictional complexity facing global platforms. The company has not indicated that the change will apply uniformly worldwide. That approach raises questions about whether safety standards will differ between markets, and how enforcement will work in practice when content crosses borders.

Safety and privacy advocates argue that the use of AI tools for non-consensual intimate imagery creates new risks for victims. Law firms expect a rise in civil actions and complaints to regulators as deepfake technology becomes more accessible and more convincing.

Safety by design

Industry executives say the Grok case illustrates wider design questions for generative AI systems, including how platforms embed safeguards at launch and how quickly they respond when misuse surfaces. The Online Safety Act places duties on platforms in the UK to assess and mitigate risks, including those affecting children and vulnerable users.

"The Grok controversy highlights a fundamental point: safety can't be an afterthought in generative AI. This story is bigger than one feature or one platform. It is a stress test for how online safety works in the age of generative AI - what "safety by design" looks like, what "effective age assurance" means in practice, and how quickly companies respond when harm is predictable and preventable," says Ricardo Amper, CEO and Founder of Incode.

"Platforms can reduce harm at two points: creation and sharing. If they run the AI tool, they should block obviously abusive requests, like sexualising a real person or a child - as a predictable misuse, not an edge case and act quickly to remove and prevent the resharing of illegal content. The wider point is that generative AI is now widely available, so the goal is not to pretend this can be eliminated overnight. The goal is harm reduction at scale: make abuse harder to create on mainstream platforms, harder to share, and quicker to remove when it appears.

"There is no single silver bullet. The most effective approach is layered - sensible defaults that limit exposure to sexual content, clear enforcement against illegal content, and age assurance that is proportionate to the risk. Build safety into the product, make child protection immediate and default, and treat victims like the priority - because they are."