X commits to 24-hour illegal content reviews as Ofcom keeps Grok investigation open

Elon Musk's X has agreed to review illegal hate speech and terrorist content within 24 hours on average, restrict UK access to proscribed groups, and report quarterly to Ofcom. The deal follows months of regulatory pressure, but a separate investigation into Grok AI's deepfake capabilities remains active.

X Strikes Deal with UK Communications Regulator on Illegal Content

X has reached an agreement with Ofcom, Britain's communications regulator, committing to faster removal of illegal content after sustained regulatory pressure throughout 2024 and early 2025 [1]. Under the deal announced Friday, Elon Musk's platform will review suspected illegal hate speech and terrorist content within an average of 24 hours of being reported, with at least 85% of flagged posts assessed within 48 hours [2]. The platform must also submit quarterly performance data to Ofcom over the next year, giving the regulator its first granular dataset on whether platform-side commitments actually improve illegal content removal [1].

Source: France 24

Crackdown on Illegal Content Targets Proscribed Organizations

The commitments extend beyond review timelines. X has promised to restrict UK access to accounts operated by or on behalf of organizations proscribed under British terrorism law [1]. The platform will also engage external experts to overhaul its reporting system, which civil society groups have repeatedly criticized as opaque [1]. That opacity matters: uncertainty over whether flagged content has been received or acted upon has formed the substance of most complaints filed against X with Ofcom over the past year [1].

Online Safety Act Framework Drives Enforcement

The agreement is the operational expression of the Online Safety Act, which became law in 2023 and requires the largest social media platforms to remove illegal content quickly or face fines of up to 10% of global turnover [1]. Oliver Griffiths, Ofcom's online safety director, stated that "terrorist content and illegal hate speech is persisting on some of the largest social media sites," calling X's commitments "a step forward, but there's a lot more to do" [2]. Suzanne Cater, Ofcom's online safety enforcement director, emphasized that closing that gap had become particularly urgent following recent hate-motivated crimes against Britain's Jewish community [1].

Antisemitic Content Surge Prompts Action

The commitments follow sustained campaigning after last year's attack on Heaton Park Synagogue near Manchester, according to Imran Ahmed of the Center for Countering Digital Hate [1]. A fatal incident in north London last month that police are treating as terrorism, combined with CCDH monitoring that documented a flood of antisemitic content on X following the Golders Green attack, intensified pressure on the platform [1]. Danny Stone, chief executive of the Antisemitism Policy Trust, described the package as "a good start" but said X was still "failing in so many regards" to tackle racism [1].

Grok AI Investigation Remains Active

Ofcom was careful to note that its formal investigation into X remains open, including questions raised by its Grok AI assistant [1]. In January, Ofcom opened a probe into Grok's image-creation feature, which has been used to produce sexualized deepfakes [2]. Earlier this month, X limited Grok's image-editing features to paid users after a deepfake controversy and a UK ban threat, but Friday's commitments do not resolve that investigation [1]. Britain's data watchdog has also launched a wider investigation into xAI to determine whether the company complied with personal data law in relation to Grok's generation of harmful content [2].

Regulatory Pressure Mounts Across Multiple Jurisdictions

The UK agreement lands inside a queue of regulatory challenges rather than resolving them. The European Commission has an open proceeding examining whether X is failing to curb hate speech, with the company identified as the largest single source of disinformation in the Commission's own monitoring [1]. Australian and Singaporean regulators have pressed on adjacent issues, creating a complex international compliance landscape for the platform [1]. The 24-hour review pledge and the 85%-within-48-hours backstop give Ofcom measurable metrics it can audit, establishing accountability mechanisms that other jurisdictions may watch closely [1].
