2 Sources
[1]
AI tools 'exploited' for racist European city videos
London (AFP) - Daubed in Arabic-looking graffiti, London's Big Ben is shown smouldering above piles of rubbish and crowds dressed in traditional Islamic garb in an AI-generated, dystopian vision of the British capital.

Far-right leaders and politicians are seizing on such clips of reimagined European cities changed by migration to promote racist views, falsely suggesting AI is objectively predicting the future. The videos -- which show immigrants "replacing" white people -- can be made quickly using popular chatbots, despite guardrails intended to block harmful content, experts told AFP.

"AI tools are being exploited to visualise and spread extremist narratives," the CEO of the Center for Countering Digital Hate watchdog, Imran Ahmed, told AFP.

British far-right leader Tommy Robinson in June re-posted the video of "London in 2050" on X, gaining over half a million views. "Europe in general is doomed," one viewer responded. Robinson -- who has posted similar AI videos of New York, Milan and Brussels -- led the largest far-right march in central London for many years in September, when up to 150,000 people demonstrated against the influx of migrants.

"Moderation systems are consistently failing across all platforms to prevent this content from being created and shared," said Ahmed of the Center for Countering Digital Hate. He singled out X, owned by tech billionaire Elon Musk, as "very powerful for amplifying hate and disinformation".

TikTok has banned the creator account behind the videos posted by Robinson. According to the platform, it bans accounts that repeatedly promote hateful ideology, including conspiracy theories. But such videos have gained millions of views across social media and have been reposted by Austrian radical nationalist Martin Sellner and Belgian right-wing parliamentarian Sam van Rooy.
Italian MEP Silvia Sardone from right-wing populist party Lega in April posted a dystopian video of Milan on Facebook, asking whether "we really want this future". Dutch far-right leader Geert Wilders' Party for Freedom released an AI video of women in Muslim headscarves for the October elections titled "Netherlands in 2050". He has predicted that Islam will be the Netherlands' largest religion by that time, despite just six percent of the population identifying as Muslim.

Such videos amplify "harmful stereotypes... that can fuel violence", said Beatriz Lopes Buarque, an academic at the London School of Economics researching digital politics and conspiracy theories. "Mass radicalisation facilitated by AI is getting worse," she told AFP.

'Hate is profitable'

Using a pseudonym, the creator of the videos reposted by Robinson offers paid courses to teach people how to make their own AI clips, suggesting "conspiracy theories" make a "great" topic to attract clicks. "The problem is that now we live in a society in which hate is very profitable," Buarque said.

Racist video creators appear to be based in various countries including Greece and Britain, although they hide their locations. Their videos are a "visual representation of the great replacement conspiracy theory," Buarque said. Popularised by a French writer, this claims Western elites are complicit in eradicating the local population and "replacing" them with immigrants. "This particular conspiracy theory has often been mentioned as a justification for terrorist attacks," said Buarque. Round dates such as 2050 also crop up in a similar "white genocide" conspiracy theory, which has anti-Semitic elements, she added.

AFP digital reporters in Europe asked ChatGPT, Grok, Gemini and Veo 3 to show London and other cities in 2050, but found this generally generated positive images. Experts, however, said chatbots could be easily guided to create racist images.
None has moderation that "is 100 percent accurate", said Salvatore Romano, head of research at AI Forensics. "This... leaves the space for malicious actors to exploit chatbots to produce images like the ones on migrants."

Marc Owen Jones, an academic specialising in disinformation at Northwestern University's Qatar campus, found ChatGPT refused to show ethnic groups "in degrading, stereotypical, or dehumanising ways". But it agreed to visualise "a bleak, diverse, survivalist London" and then make it "more inclusive, with mosques too". The final image shows bearded, ragged men rowing on a rubbish-strewn River Thames, with mosques dominating the skyline.

AFP, along with more than 100 other fact-checking organisations, is paid by TikTok and Facebook parent Meta to verify videos that potentially contain false information.
[2]
AI tools 'exploited' for racist European city videos - The Economic Times
AI-generated videos showing a ruined London covered in Arabic-style graffiti and Islamic imagery are being used by far-right figures to spread racist messages. These clips falsely suggest AI is predicting Europe's future. Experts warn that extremists are misusing AI tools to promote harmful content despite safety measures meant to stop this.
Far-right leaders are using AI-generated videos to promote xenophobic views of European cities' futures. These manipulated images, falsely presented as AI predictions, are spreading rapidly on social media platforms.
AI-generated videos depicting dystopian visions of European cities are being exploited by far-right leaders and politicians to promote racist and xenophobic views. These manipulated images, falsely presented as objective AI predictions of the future, are spreading rapidly across social media platforms, garnering millions of views and fueling extremist narratives [1].

The videos typically show reimagined European cities dramatically altered by migration, with scenes such as London's Big Ben surrounded by Arabic-looking graffiti and crowds in traditional Islamic attire. These AI-generated clips are being used to visualize the controversial 'great replacement' conspiracy theory, which claims that Western elites are complicit in replacing local populations with immigrants [1].

Prominent far-right figures have been quick to seize upon these AI-generated videos. British far-right leader Tommy Robinson shared a video titled 'London in 2050' on X (formerly Twitter), which gained over half a million views. Similar videos of New York, Milan, and Brussels have also been circulated [2].

The reach of these videos extends beyond the UK: far-right leaders and politicians across Europe, including Austrian radical nationalist Martin Sellner, Belgian right-wing parliamentarian Sam van Rooy, Italian MEP Silvia Sardone, and Dutch far-right leader Geert Wilders' party, have shared similar content [1].

Social media platforms are struggling to effectively moderate this type of content. Imran Ahmed, CEO of the Center for Countering Digital Hate, stated that 'moderation systems are consistently failing across all platforms to prevent this content from being created and shared.' He particularly criticized X, owned by Elon Musk, as being 'very powerful for amplifying hate and disinformation' [2].

While TikTok has banned the creator account behind some of these videos, the content continues to proliferate across other platforms. Experts warn that the ease of creating such content using popular AI chatbots, despite intended safeguards, is exacerbating the problem [1].

Beatriz Lopes Buarque, an academic at the London School of Economics, warns that these videos amplify 'harmful stereotypes that can fuel violence' and that 'mass radicalisation facilitated by AI is getting worse.' The videos are often linked to conspiracy theories such as the 'great replacement' and 'white genocide,' which have been cited as justifications for terrorist attacks [2].

Experts emphasize that while AI chatbots such as ChatGPT, Grok, Gemini, and Veo 3 generally produce positive images when asked about future cities, they can be manipulated into creating racist content. Salvatore Romano, head of research at AI Forensics, notes that no AI moderation system is 100 percent accurate, leaving room for exploitation [1].

As AI tools become more sophisticated and accessible, the challenge of preventing their misuse for spreading extremist ideologies grows more complex, highlighting the need for improved AI governance and content moderation strategies.
Summarized by Navi