2 Sources
[1]
Musk's X commits to UK regulator on hate speech, with Grok probe still open
Elon Musk's platform has agreed to review illegal hate and terrorism posts within a day on average, restrict UK-proscribed groups, and report quarterly to the regulator. A separate Ofcom investigation continues.

X has agreed to a set of commitments on illegal hate speech and terrorist content with Ofcom, Britain's communications regulator said on Friday, after months of pressure that escalated through the autumn and winter. Under the deal, Elon Musk's platform will review suspected illegal hate and terrorism posts within 24 hours on average, will assess at least 85% within 48 hours, and will submit quarterly performance data to the regulator over the next year.

The platform has also promised to restrict UK access to accounts operated by or on behalf of organisations proscribed under British terrorism law, and to engage external experts to overhaul a reporting flow that civil-society groups have repeatedly described as opaque. The wording matters here, because flagged content not being clearly received or acted on has been the substance of most complaints filed against X with Ofcom over the past year.

Suzanne Cater, Ofcom's online safety enforcement director, said in a statement that 'terrorist content and illegal hate speech is persisting on some of the largest social media sites', and that the gap had become 'of particular importance in the UK following a number of recent hate-motivated crimes suffered by the country's Jewish community'. Imran Ahmed of the Center for Countering Digital Hate said the commitments followed 'sustained campaigning' after last year's attack on Heaton Park Synagogue near Manchester.

Britain has had a difficult run of incidents to absorb. The Heaton Park attack was followed by a fatal incident in north London last month that police are treating as terrorism, and CCDH's own monitoring after the Golders Green attack documented what it described as a flood of antisemitic posts on X (the underlying CCDH dataset is here). The new commitments do not address those incidents directly. They set the procedural floor underneath them.

The reception was mixed. Danny Stone, chief executive of the Antisemitism Policy Trust, described the package as 'a good start' but said X was still 'failing in so many regards' to tackle racism. Ofcom itself was careful to note that its formal investigation into X, including the company's systems for handling illegal content and questions raised by its Grok AI assistant, remains open. Friday's agreement is a negotiated commitment, not a settlement.

There is a separate Grok track running in parallel. Ofcom is examining how X handles AI-generated sexualised imagery created with the chatbot, and earlier this month X limited Grok's image-editing features to paid users after a deepfake controversy and UK ban threat. The Friday commitments do not resolve that thread. They sit alongside it.

The wider context is familiar to anyone following the platform's regulatory pipeline. The European Commission has an open proceeding into whether X is failing to curb hate speech, and the company is the largest single source of disinformation in the Commission's own monitoring. Australian and Singaporean regulators have pressed on adjacent issues. The UK pact lands inside a queue rather than at the end of one.

Substantively, the new commitments are the operational expression of the Online Safety Act framework that became law in 2023, with the largest platforms now required to take down illegal content quickly or face fines of up to 10% of global turnover.
The 24-hour review pledge is the kind of measurable metric the regulator has wanted on paper. The 85%-within-48-hours backstop reads like a number worked out so Ofcom can audit it. The quarterly data, delivered over the next year, will be the first granular dataset the regulator has on whether platform-side commitments actually move illegal-content removal in the direction the law intended.
[2]
X pledges crackdown on illegal content in UK
London (AFP) - Elon Musk's X has committed to cracking down on illegal content to protect UK users, Britain's media regulator said Friday, as it steps up pressure on social media platforms.

The commitments include reviewing suspected illegal terrorist and hate content within an average of 24 hours of it being reported, and blocking accounts linked to proscribed terrorist organisations in the UK, Ofcom said in a statement.

Ofcom launched a programme last year to ensure the biggest social media companies have adequate systems in place to deal with illegal material shared on their platforms.

"We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites," said Oliver Griffiths, Ofcom's online safety director. He said X's commitments were "a step forward, but there's a lot more to do".

X, which was called Twitter before Musk bought it, will be required to submit quarterly performance data over 12 months so Ofcom can monitor whether it is delivering safety improvements for UK users. Contacted by AFP, X did not immediately respond.

The regulator said online safety concerns have become particularly pronounced in the context of a recent spate of antisemitic attacks in the UK.

In January, Ofcom opened a probe into X over its AI chatbot Grok's image-creation feature that has been used to produce sexualised deepfakes. Ofcom said on Friday that its investigation into Grok remained ongoing.

Britain's data watchdog has also launched a wider investigation into Musk's X and xAI -- which developed the Grok AI tool -- to see whether the companies complied with personal data law when it came to Grok's generation of sexualised deepfakes.
Elon Musk's X has agreed to review illegal hate speech and terrorist content within 24 hours on average, restrict UK access to proscribed groups, and report quarterly to Ofcom. The deal follows months of regulatory pressure, but a separate investigation into Grok AI's deepfake capabilities remains active.
X has reached an agreement with Ofcom, Britain's communications regulator, committing to faster removal of illegal content after months of regulatory pressure that escalated through the autumn and winter [1]. Under the deal announced Friday, Elon Musk's platform will review suspected illegal hate speech and terrorist content within an average of 24 hours of being reported, with at least 85% of flagged posts assessed within 48 hours [2]. The platform must also submit quarterly performance data to Ofcom over the next year, providing the regulator with its first granular dataset on whether platform-side commitments actually improve illegal content removal [1].
The commitments extend beyond review timelines. X has promised to restrict UK access to accounts operated by or on behalf of proscribed organizations under British terrorism law [1]. The platform will also engage external experts to overhaul its reporting system, which civil society groups have repeatedly criticized as opaque [1]. This procedural clarity matters because flagged content not being clearly received or acted upon has formed the substance of most complaints filed against X with Ofcom over the past year [1].

The agreement represents the operational expression of the Online Safety Act framework that became law in 2023, requiring the largest social media platforms to remove illegal content quickly or face fines up to 10% of global turnover [1]. Oliver Griffiths, Ofcom's online safety director, stated that "terrorist content and illegal hate speech is persisting on some of the largest social media sites," calling X's commitments "a step forward, but there's a lot more to do" [2]. Suzanne Cater, Ofcom's online safety enforcement director, emphasized the gap had become particularly important following recent hate-motivated crimes suffered by Britain's Jewish community [1].

The commitments follow sustained campaigning after last year's attack on Heaton Park Synagogue near Manchester, according to Imran Ahmed of the Center for Countering Digital Hate [1]. A fatal incident in north London last month that police are treating as terrorism, combined with CCDH monitoring that documented a flood of antisemitic content on X following the Golders Green attack, intensified pressure on the platform [1]. Danny Stone, chief executive of the Antisemitism Policy Trust, described the package as "a good start" but said X was still "failing in so many regards" to tackle racism [1].
Ofcom was careful to note that its formal investigation into X remains open, including questions raised by its Grok AI assistant [1]. In January, Ofcom opened a probe into Grok's image-creation feature, which has been used to produce sexualized deepfakes [2]. Earlier this month, X limited Grok's image-editing features to paid users after a deepfake controversy and UK ban threat, but Friday's commitments do not resolve that investigation [1]. Britain's data watchdog has also launched a wider investigation into X and xAI, which developed Grok, to determine whether the companies complied with personal data law regarding the chatbot's generation of sexualized deepfakes [2].

The UK agreement lands inside a queue of regulatory challenges rather than resolving them. The European Commission has an open proceeding examining whether X is failing to curb hate speech, with the company identified as the largest single source of disinformation in the Commission's own monitoring [1]. Australian and Singaporean regulators have pressed on adjacent issues, creating a complex international compliance landscape for the platform [1]. The 24-hour review pledge and 85%-within-48-hours backstop provide measurable metrics that Ofcom can audit, establishing accountability mechanisms that other jurisdictions may watch closely [1].