UK Government Forces Tech Firms to Remove Abusive Images Within 48 Hours or Face Blockade

The UK government has proposed strict new rules requiring tech companies to remove non-consensual intimate images within 48 hours of being reported. Companies that fail to comply could face fines of up to 10% of their global revenue or have their services blocked entirely in the UK. The measures come after widespread criticism of X's Grok AI tool, which was used to create sexualized deepfake images of women and children.

UK Government Introduces Strict 48-Hour Takedown Rule for Abusive Images

The UK government has unveiled sweeping new regulations that will force tech firms to remove abusive images from their platforms within 48 hours of being reported, marking one of the most aggressive regulatory moves against online violence against women and girls. Companies that fail to comply with the 48-hour takedown rule could face fines of up to 10% of their global revenue or have their services blocked entirely in the UK [1]. Prime Minister Keir Starmer called the issue a "national emergency" that requires an immediate response, emphasizing that the burden of tackling abuse must no longer fall on victims but on perpetrators and the companies that enable harm [2].

Source: ET

The proposed amendments to the Crime and Policing Bill will give police greater powers to enforce takedown measures for non-consensual intimate images, including revenge porn and deepfake nudes [3]. Under the new system, victims would only need to report images once, with Ofcom triggering alerts across multiple platforms simultaneously, removing the burden of chasing content from site to site [4].
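The "report once, alert everywhere" model described above can be pictured as a regulator fanning a single report out to every registered platform. The sketch below is purely illustrative — no such Ofcom API has been published, and the class and callback names here are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class AlertDispatcher:
    """Toy model of a regulator fanning one takedown report out to many platforms."""
    platforms: dict = field(default_factory=dict)  # platform name -> notify callback

    def register(self, name, callback):
        """A platform registers a callback that returns True when it acknowledges."""
        self.platforms[name] = callback

    def report(self, image_signature: str) -> list:
        # A single victim report triggers an alert on every registered platform.
        return [name for name, notify in self.platforms.items()
                if notify(image_signature)]

dispatcher = AlertDispatcher()
dispatcher.register("PlatformA", lambda sig: True)  # acknowledges the takedown alert
dispatcher.register("PlatformB", lambda sig: True)
print(dispatcher.report("abc123"))  # both platforms receive the single report
```

The design point is the fan-out itself: the victim interacts with one reporting channel, and propagation to each platform is the dispatcher's job, not theirs.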

Grok AI Controversy Sparks Regulatory Crackdown

The new regulations come weeks after Elon Musk's X platform drew international condemnation when its Grok AI tool was used to create and distribute AI-generated images of real people in compromising positions. Analysis conducted for the Guardian revealed that approximately 6,000 "bikini demands" were being made to the AI chatbot every hour, many of them requests to create images of women in sexually explicit poses [2]. Child safety groups also discovered sexualized AI-generated images of children on the dark web, intensifying calls for action [1].

Source: Bloomberg

X restricted access to Grok and blocked the feature, referred to as "bikini mode," after widespread outcry from governments worldwide. However, the incident provided momentum for a broader movement to restrict social media companies, with several European governments now weighing social media bans for younger teenagers, building on legislation passed in Australia last year [1].

Digital Watermarking and Hash Matching Technology

Ofcom is exploring technological solutions to combat the spread of non-consensual intimate images, including digital watermarking that would allow images to be automatically flagged and removed every time they are reposted [2]. The regulator is considering treating these images with the same severity as child sexual abuse and terrorism content, which are already policed with hash-matching technology. This process assigns each image or video a unique digital signature that can be matched against databases of known abusive content, a system already employed by Google, Meta, X and others for child sexual abuse material [2].
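In its simplest form, hash matching compares a signature of each upload against a database of signatures of known abusive material. The sketch below is a toy illustration, not any platform's implementation: it uses a cryptographic SHA-256 hash, whereas production systems such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, which a cryptographic hash does not.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a digital signature (here, a SHA-256 hash) for an image or video."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of signatures of known abusive content.
known_hashes = {fingerprint(b"known-abusive-image-bytes")}

def should_block(upload: bytes) -> bool:
    """Flag an upload if its signature matches the database of known content."""
    return fingerprint(upload) in known_hashes

print(should_block(b"known-abusive-image-bytes"))  # exact re-upload matches: True
print(should_block(b"harmless-image-bytes"))       # no match: False
```

The database-lookup structure is the same regardless of hash type; swapping in a perceptual hash is what lets real systems catch re-encoded copies rather than only byte-identical ones.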

Anne Craanen, who researches online misogyny at the Institute for Strategic Dialogue, stated that "48 hours is certainly possible," though she noted it is longer than the timeframe for removing terrorist content in the EU. India has recently mandated that social media companies remove some deepfake content within three hours, suggesting even tighter timelines are technically feasible [2].

Enforcement Powers and Industry Response

The measures, which could come into force as early as this summer, will make creating or sharing non-consensual intimate images a "priority offence" under the Online Safety Act, giving it the same level of seriousness as child abuse images or terrorism [2]. Technology Secretary Liz Kendall declared that "the days of tech firms having a free pass are over," emphasizing that no woman should have to chase platform after platform, waiting days for an image to come down [3].

The UK's Revenge Porn Helpline has reported that while it succeeds more than 90% of the time in getting content removed, platforms are not always compliant and removal can take several requests. David Wright, chief executive of the UK Safer Internet Centre, emphasized that victims want help now, "not in a few hours or a few days" [1]. The government will also publish guidance for internet companies on how to block rogue websites that host nudification tools and fall outside the reach of the Online Safety Act [3].

Broader Implications for Online Safety

Starmer wrote that misogyny being "woven into the fabric of our institutions" meant the problem had not been taken seriously enough, with women's complaints often dismissed as exaggerated or isolated incidents [2]. The government has also launched a consultation on measures including an Australian-style ban on under-16s using social media, so that such restrictions could be implemented quickly if recommended [3]. Ireland's data privacy regulator announced that X faces an EU privacy investigation over the non-consensual deepfakes created by Grok, signalling coordinated international action [3]. For major platforms, fines under the new regulations could amount to billions of pounds, creating a significant financial incentive to comply.

Source: Sky News
