2 Sources
[1]
AI-generated misinformation can create confusion and hinder responses during emergencies. The Conversation.
In one of the first communications of its kind, the British Columbia Wildfire Service has issued a warning to residents about viral, AI-generated fake wildfire images circulating online. Judging by comments on social media, some viewers did not realize the images were not authentic. As more advanced generative AI (genAI) tools become freely accessible, such incidents will increase. During emergencies, when people are stressed and need reliable information, digital disinformation can cause significant harm by spreading confusion and panic.

This vulnerability stems from people's reliance on mental shortcuts during stressful times, which facilitates the spread and acceptance of disinformation. Emotionally charged, sensational content captures more attention and is shared more frequently on social media. Based on our research and experience in emergency response and management, AI-generated misinformation during emergencies can cause real damage by disrupting disaster response efforts.

Circulating misinformation

People's motivations for creating, sharing and accepting disinformation during emergencies are complex and diverse. Self-determination theory categorizes motivations as intrinsic, related to the inherent interest or enjoyment of creating and sharing, and extrinsic, involving outcomes like financial gain or publicity. The creation of disinformation can be motivated by political, commercial or personal gain, prestige, belief, enjoyment and the desire to harm and sow discord.

People may spread disinformation because they perceive it to be important, have reduced decision-making capacity, distrust other sources of information, or want to help, fit in, entertain others or self-promote. Accepting disinformation, in turn, may be influenced by a reduced capacity to analyze information, political affiliations, fixed beliefs and religious fundamentalism.

Misinformation harms

Harms caused by disinformation and misinformation vary in severity and can be categorized as direct, indirect, short-term and long-term. They can take many forms, including threatening people's lives, incomes, sense of security and safety networks. During emergencies, access to trustworthy information about hazards and threats is critical. Disinformation, combined with poor collection, processing and understanding of urgent information, can lead to more direct casualties and property damage. Misinformation disproportionately affects vulnerable populations.

When individuals receive risk and threat information, they usually check it through vertical networks (government, emergency management agencies and reputable media) and horizontal networks (friends, family members and neighbours). The more complex the information, the more difficult and time-consuming confirmation and validation become. And as genAI improves, distinguishing between real and AI-generated information will become more difficult and resource-intensive.

Debunking disinformation

Disinformation can interrupt emergency communications. During emergencies, clear communication plays a major role in public safety and security.
In these situations, how people process information depends on how much information they have, their existing knowledge, their emotional responses to risk and their capacity to gather information. Disinformation intensifies the need for diverse communication channels, credible sources and clear messaging. Official sources are essential for verification, yet the growing volume of information makes checking for accuracy increasingly difficult. During the COVID-19 pandemic, for example, public health agencies flagged misinformation and disinformation as major concerns.

Digital misinformation circulated during disasters can lead to improperly allocated resources, conflicting public behaviour and actions, and delayed emergency responses. It can also lead to unnecessary or delayed evacuations. In such cases, disaster management teams must contend not only with the crisis itself, but also with the secondary challenges created by misinformation.

Counteracting disinformation

Research reveals considerable gaps in the skills and strategies that emergency management agencies use to counteract misinformation. These agencies should focus on detecting, verifying and mitigating the creation, sharing and acceptance of disinformation. This complex issue demands co-ordinated efforts across policy, technology and public engagement:

Fostering a culture of critical awareness: Educating the public, particularly younger generations, about the dangers of misinformation and AI-generated content is essential. Media literacy campaigns, school programs and community workshops can equip people with the skills to question sources, verify information and recognize manipulation.

Clear policies for AI-generated content in news: Establishing and enforcing policies on how news agencies use AI-generated images during emergencies can prevent visual misinformation from eroding public trust. This could include mandatory disclaimers, editorial oversight and transparent provenance tracking.

Strengthening platforms for fact-checking and metadata analysis: During emergencies, social platforms and news outlets need rapid, large-scale fact-checking. Requiring platforms to flag, down-rank or remove demonstrably false content can limit the viral spread of misinformation. Intervention strategies should also be developed to nudge people toward skepticism about questionable information they encounter on social media (see the illustrative sketch at the end of this section).

Clear legal consequences: In Canada, Section 181 of the Criminal Code already makes the intentional creation and spread of false information a criminal offence. Publicizing and enforcing such provisions can act as a deterrent, particularly for deliberate misinformation campaigns during emergencies.

Additionally, identifying, countering and reporting misinformation should be incorporated into emergency management and public education. AI is rapidly transforming how information is created and shared during crises. In emergencies, this can amplify fear, misdirect resources and erode trust at the very moment clarity is most needed. Building safeguards through education, policy, fact-checking and accountability is essential to ensure AI becomes a tool for resilience rather than a driver of chaos.
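As a rough illustration of the metadata analysis mentioned above, the sketch below uses Python with the Pillow imaging library to flag images that lack camera EXIF data or that carry generator signatures in embedded text chunks. It is a minimal heuristic under stated assumptions, not a reliable detector: the marker strings are illustrative examples, the absence of EXIF proves nothing on its own, and metadata can be forged or stripped.

```python
# Naive metadata screen for possibly AI-generated images.
# Illustrative heuristic only: absence of EXIF is weak evidence,
# and determined actors can forge metadata. Assumes Pillow is
# installed (pip install Pillow); the marker strings are examples.
from PIL import Image

GENERATOR_MARKERS = ("stable diffusion", "midjourney", "dall-e")  # assumed examples

def screen_image(path: str) -> list[str]:
    """Return human-readable flags for one image file."""
    flags = []
    with Image.open(path) as img:
        exif = img.getexif()  # camera EXIF block, if any
        if len(exif) == 0:
            flags.append("no EXIF metadata (common after AI generation or re-encoding)")
        # Some generators write their settings into text chunks,
        # which Pillow exposes through img.info.
        for key, value in img.info.items():
            text = f"{key}={value}".lower()
            if any(marker in text for marker in GENERATOR_MARKERS):
                flags.append(f"embedded generator signature in '{key}' chunk")
    return flags

if __name__ == "__main__":
    import sys
    for flag in screen_image(sys.argv[1]):
        print("FLAG:", flag)
```

In practice, provenance tracking is moving toward cryptographic standards such as C2PA Content Credentials, which bind a signed edit history to the file and are far harder to spoof than loose metadata.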
[2]
AI-generated misinformation can create confusion and hinder responses during emergencies. Tech Xplore. Republished from The Conversation under a Creative Commons license.
AI-generated fake images are causing confusion during emergencies, highlighting the need for better strategies to combat misinformation and protect public safety.
In a groundbreaking move, the British Columbia Wildfire Service has issued a warning about viral, AI-generated fake wildfire images circulating online. This incident highlights a growing concern: as advanced generative AI (genAI) tools become more accessible, the spread of digital disinformation during emergencies is likely to increase, potentially causing significant harm by spreading confusion and panic. [1][2]
During stressful times, people tend to rely on mental shortcuts, making them more susceptible to disinformation. Emotionally charged and sensational content often captures more attention and is frequently shared on social media. This vulnerability to misinformation can have serious consequences, especially during emergencies when access to reliable information is crucial. [1][2]

The creation and spread of disinformation during emergencies stem from complex motivations. These can be categorized as intrinsic (related to inherent interest or enjoyment) and extrinsic (involving outcomes like financial gain or publicity). Factors motivating the creation of disinformation include political, commercial or personal gain, prestige, belief, enjoyment and the desire to harm and sow discord. [1][2]
AI-generated misinformation can significantly disrupt disaster response efforts. The harms caused by disinformation can be direct or indirect, short-term or long-term, and may include threats to people's lives, incomes, sense of security and safety networks, as well as misallocated resources, delayed emergency responses, and unnecessary or delayed evacuations. [1][2]
As genAI technology improves, distinguishing between real and AI-generated information becomes increasingly difficult and resource-consuming. During emergencies, individuals typically verify information through vertical (government, emergency management agencies, reputable media) and horizontal (friends, family, neighbors) networks. However, the complexity and volume of information can make this process time-consuming and challenging. [1][2]
To address the growing threat of AI-generated misinformation, experts recommend a multi-faceted approach:

Fostering Critical Awareness: Implement media literacy campaigns, school programs, and community workshops to educate the public about the dangers of misinformation and AI-generated content. [1]

Clear Policies for AI-Generated Content: Establish and enforce policies on how news agencies use AI-generated images during emergencies, including mandatory disclaimers and transparent provenance tracking. [1]

Strengthening Fact-Checking Platforms: Develop rapid, large-scale fact-checking capabilities for social platforms and news outlets during emergencies, with demonstrably false content flagged and down-ranked to limit viral spread (see the sketch after this list). [1]

Incorporating Misinformation Management: Include identifying, countering, and reporting misinformation in emergency management and public education programs. [2]
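To make the fact-checking recommendation more concrete, here is a minimal sketch of how a platform might down-rank posts that fact-checkers have disputed. The Post fields, penalty weight, and scoring formula are invented for illustration; real ranking systems are vastly more complex.

```python
# Toy feed ranking that demotes posts flagged by fact-checkers.
# All fields, weights, and the formula are illustrative assumptions,
# not any platform's actual algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float          # likes/shares signal, normalized to [0, 1]
    disputed: bool             # at least one fact-check dispute on file
    source_credibility: float  # 0 (unknown) .. 1 (vetted outlet)

DISPUTE_PENALTY = 0.2  # assumed weight: disputed posts keep 20% of their score

def rank_score(post: Post) -> float:
    # Blend raw engagement with source credibility, then demote disputes.
    score = post.engagement * (0.5 + 0.5 * post.source_credibility)
    if post.disputed:
        score *= DISPUTE_PENALTY  # down-rank rather than remove outright
    return score

feed = [
    Post("Official evacuation order for Zone B", 0.6, False, 0.9),
    Post("Viral 'wildfire' image (AI-generated)", 0.9, True, 0.1),
]
for post in sorted(feed, key=rank_score, reverse=True):
    print(f"{rank_score(post):.3f}  {post.text}")
```

One argument for down-ranking over outright removal, as the sources note platforms may do either, is that demoted content remains visible for appeals and research while its viral reach is curbed.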
As AI continues to transform information creation and sharing during crises, it's crucial to build safeguards through education, policy, fact-checking, and accountability. By doing so, we can work towards ensuring that AI becomes a tool for resilience rather than a driver of chaos during emergencies. [2]
Summarized by Navi