The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On Thu, 19 Sept, 12:04 AM UTC
[1]
44% of people report believing election-related misinformation - Adobe study
As AI-generated content becomes more prevalent, consumers seek ways to take back control. Here's what an Adobe survey uncovered ahead of the US presidential election.

Believing what you see is more difficult than ever: synthetic content is easy and cheap to generate, and it spreads rapidly online. As a result, many people have more difficulty trusting what they read, hear, and see in the media and online, especially amid politically contentious times like the upcoming US presidential election.

On Tuesday, Adobe released its Authenticity in the Age of AI Study, which surveyed 2,000 US consumers about their views on online misinformation ahead of the 2024 presidential election. Unsurprisingly, a whopping 94% of respondents reported being concerned that the spread of misinformation will impact the upcoming election, and nearly half of respondents (44%) said they had been misled by or believed election-related misinformation in the past three months.

"Without a way for the public to verify the authenticity of digital content, we are approaching a breaking point where the public will no longer believe the things they see and hear online, even when they are true," said Jace Johnson, VP of Global Public Policy at Adobe.

The emergence of generative AI (gen AI) has been a major factor, with 87% of respondents saying the technology is making it more challenging to discern between what is real and fake online, according to the survey. The concern runs deep enough that users are taking matters into their own hands and changing their habits to avoid consuming misinformation. For example, 48% of respondents said they had stopped or curtailed their use of a specific social media platform due to the amount of misinformation found on it.
Eighty-nine percent of respondents believe social media platforms should enforce stricter measures to prevent misinformation.

"This concern about disinformation, especially around elections, isn't just a latent concern -- people are actually doing things about it," said Andy Parsons, Senior Director of the Content Authenticity Initiative at Adobe, in an interview with ZDNET. "There's not much they can do except stop using social media or curtail their use because they're concerned that there's just too much disinformation."

In response, 95% of respondents said it is important to see attribution details next to election-related content so they can verify the information for themselves. Adobe positions its Content Credentials -- "nutrition labels" for digital content that show users how an image was created -- as part of the solution.

Users can visit the Content Credentials site and drop in an image to check whether it was AI-generated. The site reads the image's metadata and flags whether it was created with an AI image generator that automatically attaches Content Credentials to its output, such as Adobe Firefly or Microsoft Image Creator. Even if the photo was made with a tool that didn't tag metadata, Content Credentials will match the image against similar images on the internet and report whether those images were AI-generated.
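The metadata check described above can be sketched as a minimal heuristic. Content Credentials embed a C2PA manifest inside the file (as JUMBF boxes); a real verifier, like the Content Credentials site, parses and cryptographically validates that manifest, whereas this illustrative sketch only scans the raw bytes for the marker labels to detect whether a manifest *appears* to be present. It is an assumption-laden approximation, not Adobe's implementation.

```python
# Heuristic sketch: detect whether a file appears to carry an embedded
# C2PA (Content Credentials) manifest by scanning for its marker bytes.
# C2PA manifests are stored in JUMBF boxes; 'jumb' is the JUMBF superbox
# type and 'c2pa' is the manifest label. This does NOT validate anything.

def has_c2pa_marker(path: str) -> bool:
    """Return True if the file contains bytes suggesting an embedded
    C2PA manifest (a 'c2pa'-labelled JUMBF box)."""
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data or b"jumb" in data

if __name__ == "__main__":
    import sys
    for name in sys.argv[1:]:
        status = "credentials marker found" if has_c2pa_marker(name) else "no marker"
        print(f"{name}: {status}")
```

A positive hit here only means provenance data seems to be embedded; images from tools that don't tag metadata would require the reverse-image-matching step the article describes.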
[2]
94% of Americans Are Concerned About Election Misinformation
A new Adobe study reveals that misinformation, deepfakes, and election interference are major concerns for Americans going into the 2024 presidential vote.

Adobe's Authenticity in the Age of AI study asked 2,000+ Americans whether they think AI and misinformation pose a problem this election cycle. It finds that 94% of those surveyed believe misinformation will affect the presidential election this fall; 85% believe that election misinformation has spiked in the past three months; and nearly half admit they've fallen for misleading or fake posts in that same period.

"The proliferation of misinformation has eroded public trust. Without a way for the public to verify the authenticity of digital content, we're approaching a breaking point where the public will no longer believe the things they see and hear online, even when they are true," Adobe VP of Global Public Policy Jace Johnson said in a statement.

Generative AI tools are also part of the problem. Eighty-seven percent of respondents say the AI boom has made it harder to discern whether something online is real, and the majority have doubted whether images on news sites are legitimate. Earlier this year, Google DeepMind research found that political deepfakes are the most common malicious use of AI tools.

Still, an overwhelming majority want to know more about how the media they see online has been altered or edited. Consumers want attribution details on images and videos, especially election-related content. Misinformation has also caused Americans to change their habits: nearly half of respondents in the Adobe study say they've ditched a social media platform (like X/Twitter or Facebook) due to the rise of fake content on it.

The study also suggests the US public wants stricter regulations around online misinformation -- 89% want social media platforms to establish stricter misinformation rules, and 74% think the US government is not doing enough to stop misinformation's spread.
Overall, many Americans are calling for increased transparency around image and video origins as concerns mount around the explosion of AI-generated falsehoods and misinformation online.
Adobe's recent study, as reported by ZDNET and PCMag, highlights growing concerns about election-related misinformation. A significant portion of Americans report believing false information, while an overwhelming majority express worry about its impact.
A recent study conducted by Adobe has revealed a troubling trend in the spread and belief of election-related misinformation. According to the research, 44% of people reported believing false information about elections in the past three months [1]. This statistic underscores the significant challenge society faces in combating the proliferation of misleading information during electoral processes.
While that figure highlights the extent of misinformation belief, PCMag's coverage of the same Adobe study paints an even more alarming picture of public concern: a staggering 94% of Americans express worry about election misinformation [2]. This near-universal concern indicates a growing awareness of the potential threats false information poses to the democratic process.
The widespread belief in and concern about election misinformation raises serious questions about the integrity of electoral systems. With nearly half of the population potentially influenced by false information, there is a real risk of this affecting voting behavior and election outcomes. The high level of concern among Americans also suggests a potential erosion of trust in the democratic process, which could have long-lasting implications for political stability and civic engagement.
Both studies highlight the various channels through which election misinformation spreads. Social media platforms, in particular, have been identified as major conduits for the dissemination of false information. The rapid spread of misinformation through these digital channels poses a significant challenge for fact-checkers, election officials, and tech companies alike in their efforts to combat the problem.
In light of these findings, there is an increasing focus on developing strategies to counter election-related misinformation. Tech companies, media organizations, and government agencies are implementing various measures, including improved content moderation, fact-checking initiatives, and public awareness campaigns. However, the persistence of the problem, as evidenced by these studies, suggests that current efforts may be insufficient in addressing the scale and complexity of the issue.
The studies underscore the critical importance of digital literacy in combating misinformation. As false information becomes more sophisticated and widespread, the ability of individuals to critically evaluate online content becomes crucial. Educational initiatives aimed at improving media literacy and critical thinking skills are increasingly seen as essential components in the fight against election misinformation.
The findings reported by ZDNET and PCMag paint a concerning picture for the future of democratic processes. If left unchecked, the widespread belief in and concern about election misinformation could lead to decreased voter turnout, increased political polarization, and a general erosion of faith in democratic institutions. Addressing this challenge will require a concerted effort from multiple stakeholders, including tech companies, media organizations, educational institutions, and government bodies.