3 Sources
[1]
Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns
It is not often that cold, hard facts determine what people care most about and what they believe. Instead, it is the power and familiarity of a well-told story that reigns supreme. Whether it's a heartfelt anecdote, a personal testimony or a meme echoing familiar cultural narratives, stories tend to stick with us, move us and shape our beliefs. This characteristic of storytelling is precisely what can make it so dangerous when wielded by the wrong hands.

For decades, foreign adversaries have used narrative tactics in efforts to manipulate public opinion in the United States. Social media platforms have brought new complexity and amplification to these campaigns. The phenomenon garnered ample public scrutiny after evidence emerged of Russian entities exerting influence over election-related material on Facebook in the lead-up to the 2016 election.

While artificial intelligence is exacerbating the problem, it is at the same time becoming one of the most powerful defenses against such manipulations. Researchers have been using machine learning techniques to analyze disinformation content. At the Cognition, Narrative and Culture Lab at Florida International University, we are building AI tools to help detect disinformation campaigns that employ tools of narrative persuasion. We are training AI to go beyond surface-level language analysis to understand narrative structures, trace personas and timelines and decode cultural references.

Disinformation vs. misinformation

In July 2024, the Department of Justice disrupted a Kremlin-backed operation that used nearly a thousand fake social media accounts to spread false narratives. These weren't isolated incidents. They were part of an organized campaign, powered in part by AI.

Disinformation differs crucially from misinformation. While misinformation is simply false or inaccurate information - getting facts wrong - disinformation is intentionally fabricated and shared specifically to mislead and manipulate.

A recent illustration of this came in October 2024, when a video purporting to show a Pennsylvania election worker tearing up mail-in ballots marked for Donald Trump swept platforms such as X and Facebook. Within days, the FBI traced the clip to a Russian influence outfit, but not before it racked up millions of views. This example vividly demonstrates how foreign influence campaigns artificially manufacture and amplify fabricated stories to manipulate U.S. politics and stoke divisions among Americans.

Humans are wired to process the world through stories. From childhood, we grow up hearing stories, telling them and using them to make sense of complex information. Narratives don't just help people remember - they help us feel. They foster emotional connections and shape our interpretations of social and political events. This makes them especially powerful tools for persuasion - and, consequently, for spreading disinformation. A compelling narrative can override skepticism and sway opinion more effectively than a flood of statistics. For example, a story about rescuing a sea turtle with a plastic straw in its nose often does more to raise concern about plastic pollution than volumes of environmental data.

Usernames, cultural context and narrative time

Using AI tools to piece together a picture of the narrator of a story, the timeline for how they tell it and cultural details specific to where the story takes place can help identify when a story doesn't add up.
Narratives are not confined to the content users share - they also extend to the personas users construct to tell them. Even a social media handle can carry persuasive signals. We have developed a system that analyzes usernames to infer demographic and identity traits such as name, gender, location, sentiment and even personality, when such cues are embedded in the handle. This work, presented in 2024 at the International Conference on Web and Social Media, highlights how even a brief string of characters can signal how users want to be perceived by their audience.

For example, a user attempting to appear as a credible journalist might choose a handle like @JamesBurnsNYT rather than something more casual like @JimB_NYC. Both may suggest a male user from New York, but one carries the weight of institutional credibility. Disinformation campaigns often exploit these perceptions by crafting handles that mimic authentic voices or affiliations.

Although a handle alone cannot confirm whether an account is genuine, it plays an important role in assessing overall authenticity. By interpreting usernames as part of the broader narrative an account presents, AI systems can better evaluate whether an identity is manufactured to gain trust, blend into a target community or amplify persuasive content. This kind of semantic interpretation contributes to a more holistic approach to disinformation detection - one that considers not just what is said but who appears to be saying it and why.

Also, stories don't always unfold chronologically. A social media thread might open with a shocking event, flash back to earlier moments and skip over key details in between. Humans handle this effortlessly - we're used to fragmented storytelling. But for AI, determining a sequence of events based on a narrative account remains a major challenge. Our lab is also developing methods for timeline extraction, teaching AI to identify events, understand their sequence and map how they relate to one another, even when a story is told in nonlinear fashion.

Objects and symbols often carry different meanings in different cultures, and without cultural awareness, AI systems risk misinterpreting the narratives they analyze. Foreign adversaries can exploit cultural nuances to craft messages that resonate more deeply with specific audiences, enhancing the persuasive power of disinformation. Consider the following sentence: "The woman in the white dress was filled with joy." In a Western context, the phrase evokes a happy image. But in parts of Asia, where white symbolizes mourning or death, it could feel unsettling or even offensive. In order to use AI to detect disinformation that weaponizes symbols, sentiments and storytelling within targeted communities, it's critical to give AI this sort of cultural literacy. In our research, we've found that training AI on diverse cultural narratives improves its sensitivity to such distinctions.
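To make the username idea concrete, here is a minimal sketch of handle-level persona inference. It is a toy heuristic, not the system presented at the International Conference on Web and Social Media, whose details the article does not give: the lexicons FIRST_NAMES, LOCATIONS and NEWS_ORGS are tiny hypothetical stand-ins for the gazetteers or trained classifiers a real system would need.

```python
# Toy sketch of handle-level persona inference, in the spirit of the system
# described above but NOT its actual implementation. The lexicons below are
# tiny hypothetical stand-ins for real gazetteers or trained classifiers.
import re

FIRST_NAMES = {"james": "male", "jim": "male", "maria": "female"}  # hypothetical
LOCATIONS = {"nyc": "New York", "ny": "New York", "la": "Los Angeles"}
NEWS_ORGS = {"nyt", "bbc", "wapo"}  # tokens signaling institutional affiliation

def tokenize_handle(handle: str) -> list[str]:
    """Split a handle like '@JamesBurnsNYT' into lowercase word tokens."""
    handle = handle.lstrip("@")
    tokens = []
    for part in re.split(r"[_\d]+", handle):  # break on separators and digits
        # Split camel case: runs of capitals, capitalized words, lowercase runs.
        tokens += re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+", part)
    return [t.lower() for t in tokens if t]

def infer_persona(handle: str) -> dict:
    """Collect coarse persona cues (name, gender, location, affiliation)."""
    persona = {"handle": handle}
    for t in tokenize_handle(handle):
        if t in FIRST_NAMES:
            persona["name"], persona["gender_cue"] = t.capitalize(), FIRST_NAMES[t]
        if t in LOCATIONS:
            persona["location_cue"] = LOCATIONS[t]
        if t in NEWS_ORGS:
            persona["affiliation_cue"] = t.upper()
    return persona

print(infer_persona("@JamesBurnsNYT"))  # name + gender cue + 'NYT' affiliation
print(infer_persona("@JimB_NYC"))       # name + gender cue + location, no affiliation
```

Even this crude heuristic reproduces the contrast in the example: both handles yield a male-coded first name, but only @JamesBurnsNYT carries an institutional signal.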
Who benefits from narrative-aware AI?

Narrative-aware AI tools can help intelligence analysts quickly identify orchestrated influence campaigns or emotionally charged storylines that are spreading unusually fast. They might use AI tools to process large volumes of social media posts in order to map persuasive narrative arcs, identify near-identical storylines and flag coordinated timing of social media activity. Intelligence services could then deploy countermeasures in real time.

In addition, crisis-response agencies could swiftly identify harmful narratives, such as false emergency claims during natural disasters. Social media platforms could use these tools to efficiently route high-risk content for human review without unnecessary censorship. Researchers and educators could also benefit by tracking how a story evolves across communities, making narrative analysis more rigorous and shareable.

Ordinary users can also benefit from these technologies. The AI tools could flag social media posts in real time as possible disinformation, allowing readers to be skeptical of suspect stories and counteracting falsehoods before they take root.

As AI takes on a greater role in monitoring and interpreting online content, its ability to understand storytelling beyond traditional semantic analysis has become essential. To this end, we are building systems to uncover hidden patterns, decode cultural signals and trace narrative timelines to reveal how disinformation takes hold.
[2]
Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns
Republished from The Conversation under a Creative Commons license.
[3]
Weaponised storytelling: How AI is helping researchers sniff out disinformation campaigns
Researchers are developing AI tools to detect and counter disinformation campaigns built on narrative persuasion, even as AI itself makes those campaigns easier to run.
In an era where narratives often trump facts, the power of storytelling has become a double-edged sword. While stories can effectively convey complex information and evoke emotions, they can also be weaponized to manipulate public opinion. For decades, foreign adversaries have exploited narrative tactics to influence U.S. public sentiment, with social media platforms amplifying these efforts [1].
Source: The Conversation
The 2016 U.S. election brought this issue to the forefront when evidence emerged of Russian entities manipulating election-related content on Facebook. More recently, in July 2024, the Department of Justice disrupted a Kremlin-backed operation using nearly a thousand fake social media accounts to spread false narratives [2].
Artificial intelligence (AI) plays a paradoxical role in this landscape. While it enables more sophisticated disinformation campaigns, it is simultaneously emerging as a powerful defense against them. Researchers are leveraging machine learning techniques to analyze and detect disinformation content [3].
Source: Economic Times
At Florida International University's Cognition, Narrative and Culture Lab, scientists are developing AI tools to identify disinformation campaigns that employ narrative persuasion techniques. These tools go beyond surface-level language analysis to understand narrative structures, trace personas, extract timelines, and decode cultural references [1].
It's crucial to distinguish between disinformation and misinformation. While misinformation is simply inaccurate information, disinformation is intentionally fabricated and shared to mislead and manipulate. An example from October 2024 illustrates this distinction: a video falsely showing a Pennsylvania election worker destroying Trump-marked ballots went viral on social media platforms. The FBI later traced it to a Russian influence operation, but not before it had garnered millions of views [2].
Source: Tech Xplore
Researchers are developing various AI-powered tools to combat disinformation:
Username Analysis: A system that analyzes social media handles to infer demographic and identity traits, helping to assess the authenticity of user accounts [1].
Timeline Extraction: AI methods to identify events, understand their sequence, and map relationships between them, even in non-linear narratives [2]. A toy sketch follows this list.
Cultural Context Analysis: Tools to interpret objects and symbols that may carry different meanings across cultures, ensuring accurate narrative analysis [3]. A second sketch, after the paragraph below, illustrates this.
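The sketch below illustrates the timeline-extraction idea in the simplest possible form - it is not the lab's method. It pulls out events that carry an explicit four-digit year and sorts them back into chronological order; the story sentences are invented for the example, and real systems must also resolve relative cues ("two days earlier", "by then") that this heuristic ignores.

```python
# Toy timeline extraction (not the lab's method): recover chronological order
# from a story told out of sequence, using only explicit year mentions.
import re

# Invented example of a nonlinearly told story.
story = [
    "In 2024 the fabricated ballot video exploded across platforms.",
    "The account had been created back in 2016.",
    "Through 2020 it quietly built an audience with local news posts.",
]

def extract_timeline(sentences: list[str]) -> list[tuple[int, str]]:
    """Return (year, sentence) pairs sorted into chronological order."""
    events = []
    for s in sentences:
        match = re.search(r"\b(?:19|20)\d{2}\b", s)  # explicit four-digit year
        if match:
            events.append((int(match.group()), s))
    return sorted(events)  # tuples sort by year first

for year, event in extract_timeline(story):
    print(year, "-", event)  # prints 2016, 2020, 2024 despite narrative order
```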
These AI-driven approaches contribute to a more holistic method of disinformation detection, considering not just the content of messages but also who appears to be sharing them and why.
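Finally, the cultural-context point can be illustrated with an equally simple sketch. The SYMBOL_CONNOTATIONS table below is a hypothetical two-entry stand-in for the broad cultural knowledge a real model would need; it flags the "white dress" sentence from the article when the surface sentiment (joy) clashes with the symbol's connotation for the target audience (mourning).

```python
# Toy cultural-context check (illustrative only): flag a clash between a
# story's surface sentiment and what a symbol connotes in the reader's
# culture. The two-entry table is a hypothetical stand-in for the broad
# cultural knowledge a real model would need.
SYMBOL_CONNOTATIONS = {
    ("white dress", "western"): "celebration",
    ("white dress", "east_asian"): "mourning",
}

def flag_cultural_mismatch(text: str, surface_sentiment: str, culture: str) -> list[str]:
    flags = []
    for (symbol, cult), connotation in SYMBOL_CONNOTATIONS.items():
        if cult == culture and symbol in text.lower():
            # A joyful surface reading of a mourning symbol is worth review.
            if connotation == "mourning" and surface_sentiment == "joy":
                flags.append(f"'{symbol}' connotes {connotation} for this audience")
    return flags

sentence = "The woman in the white dress was filled with joy."
print(flag_cultural_mismatch(sentence, "joy", "western"))     # [] - no clash
print(flag_cultural_mismatch(sentence, "joy", "east_asian"))  # clash flagged
```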