2 Sources
[1]
Memes and conflict: Study shows surge of imagery and fakes can precede international and political violence
Imagine a country with deep political divisions, where different groups don't trust each other and violence seems likely. Now, imagine a flood of political images, hateful memes and mocking videos from domestic and foreign sources taking over social media. What is likely to happen next?

The widespread use of social media during times of political trouble and violence has made it harder to prevent conflict and build peace. Social media is changing, with new technologies and strategies available to influence what people think during political crises. These include new ways to promote beliefs and goals, gain support, dehumanize opponents, justify violence and create doubt or dismiss inconvenient facts. At the same time, the technologies themselves are becoming more sophisticated.

More and more, social media campaigns use images such as memes, videos and photos - whether edited or not - that have a bigger impact on people than text alone. It's harder for AI systems to understand images than text. For example, it's easier to track posts that say "Ukrainians are Nazis" than it is to find and understand fake images showing Ukrainian soldiers with Nazi symbols. But these kinds of images are becoming more common. Just as a picture is worth a thousand words, a meme is worth a thousand tweets.

Our team of computer and social scientists has tackled the challenge of interpreting image content by combining artificial intelligence methods with human subject matter experts to study how visual social media posts change in high-risk situations. Our research shows that these changes, especially in posts with images, serve as strong indicators of coming mass violence.

Surge of memes

Our recent analysis found that in the two weeks leading up to Russia's 2022 invasion of Ukraine, there was a nearly 9,000% increase in the number of posts and a more than 5,000% increase in manipulated images from Russian milbloggers - bloggers who focus on current military conflicts. These huge increases show how intense Russia's online propaganda campaign was and how it used social media to influence people's opinions and justify the invasion. They also underscore the need to better monitor and analyze visual content on social media.

To conduct our analysis, we collected the entire history of posts and images from the accounts of 989 Russian milbloggers on the messaging app Telegram: nearly 6 million posts and over 3 million images. Each post and image was time-stamped and categorized to facilitate detailed analysis.

Media forensics

We had previously developed a suite of AI tools capable of detecting image alterations and manipulations. For instance, one detected image is a pro-Russian meme mocking the anti-Putin journalist and former Russian soldier Arkady Babchenko, whose death was faked by Ukrainian security services to expose an assassination plot against him. The meme features the phrase "gamers don't die, they respawn," alluding to video game characters who return to life after dying. It makes light of Babchenko's predicament and illustrates how manipulated images are used to convey political messages and influence public opinion. This is just one example out of millions of images that were strategically manipulated to promote various narratives. Our statistical analysis revealed a massive increase in both the number of images and the extent of their manipulation prior to the invasion.
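The article does not describe which detectors the team's forensics suite uses. As a rough illustration of one standard technique in this space - error level analysis (ELA), which recompresses a JPEG and looks for regions whose compression history disagrees with the rest of the image - here is a minimal sketch using Pillow and NumPy. The function names, the quality setting and the variance-based score are illustrative assumptions, not the study's method.

```python
# Minimal error-level-analysis (ELA) sketch, NOT the research team's pipeline.
# Spliced or edited regions of a JPEG often recompress differently from the
# rest of the image, which shows up as bright areas in the error map.
from io import BytesIO

from PIL import Image, ImageChops
import numpy as np

def error_level_analysis(path: str, quality: int = 90) -> np.ndarray:
    """Return a per-pixel error map between an image and its JPEG resave."""
    original = Image.open(path).convert("RGB")
    # Recompress at a fixed JPEG quality, then reload the compressed version.
    buf = BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf).convert("RGB")
    # Pixel-wise absolute difference between original and recompressed image.
    return np.asarray(ImageChops.difference(original, resaved), dtype=np.float32)

def manipulation_score(path: str) -> float:
    """Crude scalar score: spliced regions tend to raise local error variance."""
    return float(error_level_analysis(path).std())

# Usage sketch: flag images whose score exceeds a threshold tuned on
# known-clean images, then route them to a human analyst.
# score = manipulation_score("meme.jpg")
```

In practice a production forensics suite would combine several signals (noise patterns, metadata, copy-move detection); ELA alone produces false positives and is shown here only to make the idea concrete.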
Political context is critical

Although these AI systems are very good at finding fakes, they are incapable of understanding the images' political contexts. It is therefore critical that AI scientists work closely with social scientists to properly interpret these findings. Our AI systems also categorized images by similarity, which allowed subject experts to further analyze image clusters based on their narrative content and their culturally and politically specific meanings. This is impossible to do at a large scale without AI support.

For example, a fake image of French President Emmanuel Macron with Ukrainian Governor Vitalii Kim may be meaningless to an AI scientist. But to political scientists, the image appears to laud Ukrainians' outsize courage in contrast to foreign leaders who have appeared to be afraid of Russian nuclear threats. The goal was to reinforce Ukrainian doubts about their European allies.

Meme warfare

The shift to visual media in recent years brings a new type of data that researchers haven't yet studied in much detail. Looking at images can help researchers understand how adversaries frame each other and how this framing can lead to political conflict. By studying visual content, researchers can see how stories and ideas are spread, which helps us understand the psychological and social factors involved. This is especially important for detecting more advanced and subtle forms of influence. Projects like this can also contribute to improving early warning efforts and reducing the risks of violence and instability.
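The article does not say how the similarity grouping described above was implemented. As a hedged sketch of the general approach - fingerprint each image, then cluster fingerprints so human experts review groups instead of millions of individual files - the example below uses a simple 64-bit average perceptual hash with Pillow, NumPy and SciPy. The feature choice and the 15% bit-distance threshold are assumptions for illustration, not the study's actual features.

```python
# Hypothetical sketch of clustering images by visual similarity so that
# subject-matter experts can review whole clusters. The study's real
# features and thresholds are not specified in the article.
from PIL import Image
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def average_hash(path: str, size: int = 8) -> np.ndarray:
    """64-bit perceptual hash: downscale, grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten().astype(np.uint8)

def cluster_images(paths: list[str], max_distance: float = 0.15) -> dict[int, list[str]]:
    """Group images whose hashes differ in at most ~15% of bits."""
    hashes = np.array([average_hash(p) for p in paths])
    # Pairwise normalized Hamming distance between the binary hash vectors.
    distances = pdist(hashes, metric="hamming")
    labels = fcluster(linkage(distances, method="average"),
                      t=max_distance, criterion="distance")
    clusters: dict[int, list[str]] = {}
    for label, path in zip(labels, paths):
        clusters.setdefault(int(label), []).append(path)
    return clusters  # each cluster goes to a human expert for interpretation
```

A perceptual hash mainly catches near-duplicates and light edits; grouping images by shared narrative, as the study describes, would more plausibly use learned embeddings, but the review-by-cluster workflow is the same.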
[2]
Memes and conflict: Study shows surge of imagery and fakes can precede international and political violence
A study combining AI and human expertise shows a significant increase in manipulated images on social media preceding political conflicts, highlighting the need for advanced monitoring tools.
A groundbreaking study combining artificial intelligence (AI) and human expertise has uncovered a significant increase in manipulated images on social media platforms preceding political conflicts. This research, conducted by a team of computer and social scientists, highlights the evolving landscape of online propaganda and its potential to incite violence. [1][2]

The study found a staggering 9,000% increase in social media posts and a 5,000% surge in manipulated images from Russian military bloggers in the two weeks leading up to Russia's 2022 invasion of Ukraine. This dramatic spike underscores the intensity of Russia's online propaganda campaign and its use of social media to sway public opinion and justify the invasion. [1][2]

To tackle the challenge of interpreting image content, the researchers developed a suite of AI tools capable of detecting image alterations and manipulations. These tools were applied to analyze nearly 6 million posts and over 3 million images collected from 989 Russian military bloggers on the Telegram messaging app. [1][2]

The study emphasizes the growing importance of visual content in shaping political narratives. Memes, videos and manipulated photos are becoming increasingly prevalent and influential, often having a more significant impact than text-based content. As one researcher noted, "Just as a picture is worth a thousand words, a meme is worth a thousand tweets." [1][2]

While AI systems excel at identifying fake images, they struggle to understand the political context behind them. This limitation highlights the need for collaboration between AI scientists and social scientists to accurately interpret the findings. The research team employed a combination of AI-driven image categorization and human expert analysis to uncover culturally and politically specific meanings within image clusters. [1][2]

The study's findings have significant implications for conflict prevention and early warning systems. By monitoring changes in social media posts, especially those containing manipulated images, researchers can potentially predict and mitigate the risks of mass violence and political instability. [1][2]
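To make the monitoring idea concrete, here is a toy sketch of week-over-week surge detection, assuming a pandas DataFrame of time-stamped posts with hypothetical post_id and is_manipulated columns produced by an upstream forensics model. The column names and alert threshold are invented for illustration and are not from the study.

```python
# Toy volume-surge monitor: percent change in post and manipulated-image
# counts per 7-day window. Column names are hypothetical placeholders.
import pandas as pd

def weekly_surge(posts: pd.DataFrame) -> pd.DataFrame:
    """Week-over-week percent change in post and manipulated-image volume."""
    weekly = (posts
              .set_index("timestamp")          # datetime column
              .resample("7D")
              .agg({"post_id": "count", "is_manipulated": "sum"}))
    weekly.columns = ["posts", "manipulated"]
    # 9000.0 in the output corresponds to a 9,000% jump like the one the
    # study reports before the 2022 invasion.
    return weekly.pct_change() * 100

# Usage sketch: flag weeks with extreme growth in either series.
# alerts = weekly_surge(df)
# alerts[(alerts["posts"] > 500) | (alerts["manipulated"] > 500)]
```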
This research opens up new avenues for studying the psychological and social factors involved in the spread of ideas and narratives through visual content. As social media continues to evolve, understanding these dynamics becomes crucial for developing more sophisticated methods of detecting and countering online propaganda. [1][2]

Summarized by Navi