UK Regulator Probes X Over Grok AI Deepfake Images

The UK media regulator Ofcom has opened an investigation into X, the social media platform owned by Elon Musk, over concerns linked to its built-in AI assistant, Grok. The probe follows reports that users have recently been using the AI tool to generate and share intimate images, including so-called deepfakes, on the platform.

According to Ofcom, the investigation focuses on whether X has breached UK online safety legislation, which requires digital platforms to prevent the distribution of illegal content and to act swiftly when such material appears.

Concerns Over AI-Generated Content

Regulators say the case highlights growing risks associated with generative artificial intelligence, particularly when tools are embedded directly into large social networks. Ofcom is assessing whether X has adequate safeguards in place to stop users from creating or sharing unlawful material using Grok, and whether the platform responded appropriately once such content began circulating.

UK law places responsibility on platform operators to mitigate harm, even when content is created by users rather than the company itself.

Government Moves Toward New Deepfake Rules

Alongside Ofcom’s investigation, the UK government has confirmed it is working on new regulations aimed at explicitly banning the uploading and distribution of non-consensual intimate deepfakes on social media platforms. The proposed measures would strengthen existing protections and give authorities clearer enforcement powers as AI-generated content becomes more widespread.

Officials say the planned rules are intended to close legal gaps and ensure that emerging technologies cannot be used to bypass existing content standards.

X Under Growing Regulatory Pressure

The investigation adds to broader scrutiny facing X in multiple jurisdictions over content moderation and compliance with local laws. Ofcom has not set a timeline for the conclusion of its inquiry, and no penalties have been announced at this stage.

X has not publicly commented in detail on the investigation, though the platform has previously stated that it aims to comply with local regulations and improve safety systems around AI-generated content.

A Test Case for AI and Platform Accountability

The case is being closely watched as a potential precedent for how regulators handle AI tools integrated into social platforms. As generative AI becomes more powerful and accessible, authorities across Europe and beyond are increasingly focused on how companies prevent misuse while still allowing innovation.

For the UK, the outcome of the Ofcom investigation could shape future enforcement of online safety rules in the age of artificial intelligence.
