Curated by THEOUTPOST
On Thu, 12 Sept, 8:04 AM UTC
8 Sources
[1]
Most in new survey wary of AI-powered election information
Most Americans say they don't trust AI-powered chatbots to produce accurate information about the 2024 election, according to a new survey from The Associated Press-NORC Center for Public Affairs Research and USAFacts.

In the poll, released Tuesday, nearly two-thirds (64 percent) of respondents say they are either not very confident (38 percent) or not at all confident (26 percent) that "information from artificial intelligence, chatbots or search results is reliable and factual." Another 30 percent say they are somewhat confident, but only 5 percent say they are either extremely confident (1 percent) or very confident (4 percent).

Americans say they are worried it might be difficult to find "factual and accurate information about the 2024 presidential election," as Election Day nears. In the survey, 43 percent say AI use will make it much or somewhat more difficult to find information about the election, while 16 percent say AI use will make it much or somewhat easier. Another 40 percent say AI use will neither make it easier nor more difficult to find information.

The results come at a time when Americans, particularly younger adults, increasingly rely on AI chatbots in their everyday lives. While AI-powered chatbots can be a useful tool, some experts have raised concerns about the accuracy of the information they provide.

Last month, five secretaries of state called on Elon Musk to fix an AI chatbot on the social platform X, saying it was spreading election misinformation. Artificial intelligence has also presented challenges to other aspects of elections. Ahead of the New Hampshire primary, an AI-generated robocall imitated President Biden's voice and urged voters to stay home instead of voting.

The Associated Press-NORC Center for Public Affairs Research and USAFacts poll of 1,019 adults was conducted July 29 to Aug. 8. The margin of sampling error for all respondents is plus or minus 4 percentage points.
[2]
Most Americans don't trust AI-powered election information: AP-NORC/USAFacts survey
WASHINGTON -- Jim Duggan uses ChatGPT almost daily to draft marketing emails for his carbon removal credit business in Huntsville, Alabama. But he'd never trust an artificial intelligence chatbot with any questions about the upcoming presidential election.

"I just don't think AI produces truth," the 68-year-old political conservative said in an interview. "Grammar and words, that's something that's concrete. Political thought, judgment, opinions aren't."

Duggan is part of the majority of Americans who don't trust artificial intelligence, chatbots or search results to give them accurate answers, according to a new survey from The Associated Press-NORC Center for Public Affairs Research and USAFacts. About two-thirds of U.S. adults say they're not very or not at all confident that these tools provide reliable and factual information, the poll shows.

The findings reveal that even as Americans have started using generative AI-fueled chatbots and search engines in their personal and work lives, most have remained skeptical of these rapidly advancing technologies. That's particularly true when it comes to information about high-stakes events such as elections.

Earlier this year, a gathering of election officials and AI researchers found that AI tools did poorly when asked relatively basic questions, such as where to find the nearest polling place. Last month, several secretaries of state warned that the AI chatbot developed for the social media platform X was spreading bogus election information, prompting X to tweak the tool so it would first direct users to a federal government website for reliable information.

Large AI models that can generate text, images, videos or audio clips at the click of a button are poorly understood and minimally regulated. Their ability to predict the most plausible next word in a sentence based on vast pools of data allows them to provide sophisticated responses on almost any topic -- but it also makes them vulnerable to errors.

Americans are split on whether they think the use of AI will make it more difficult to find accurate information about the 2024 election. About 4 in 10 Americans say the use of AI will make it "much more difficult" or "somewhat more difficult" to find factual information, while another 4 in 10 aren't sure -- saying it won't make it easier or more challenging, according to the poll. A distinct minority, 16%, say AI will make it easier to find accurate information about the election.

Griffin Ryan, a 21-year-old college student at Tulane University in New Orleans, said he doesn't know anyone on his campus who uses AI chatbots to find information about candidates or voting. He doesn't use them either, since he's noticed that it's possible to "basically just bully AI tools into giving you the answers that you want."

The Democrat from Texas said he gets most of his news from mainstream outlets such as CNN, the BBC, NPR, The New York Times and The Wall Street Journal. When it comes to misinformation in the upcoming election, he's more worried that AI-generated deepfakes and AI-fueled bot accounts on social media will sway voter opinions.

"I've seen videos of people doing AI deepfakes of politicians and stuff, and these have all been obvious jokes," Ryan said. "But it does worry me when I see those that maybe someone's going to make something serious and actually disseminate it."

A relatively small portion of Americans -- 8% -- think results produced by AI chatbots such as OpenAI's ChatGPT or Anthropic's Claude are always or often based on factual information, according to the poll. They have a similar level of trust in AI-assisted search engines such as Bing or Google, with 12% believing their results are always or often based on facts.

There already have been attempts to influence U.S. voter opinions through AI deepfakes, including AI-generated robocalls that imitated President Joe Biden's voice to convince voters in New Hampshire's January primary to stay home from the polls. More commonly, AI tools have been used to create fake images of prominent candidates that aim to reinforce particular negative narratives -- from Vice President Kamala Harris in a communist uniform to former President Donald Trump in handcuffs.

Ryan, the Tulane student, said his family is fairly media literate, but he has some older relatives who heeded false information about COVID-19 vaccines on Facebook during the pandemic. He said that makes him concerned that they might be susceptible to false or misleading information during the election cycle.

Bevellie Harris, a 71-year-old Democrat from Bakersfield, California, said she prefers getting election information from official government sources, such as the voter pamphlet she receives in the mail ahead of every election. "I believe it to be more informative," she said, adding that she also likes to look up candidate ads to hear their positions in their own words.

___

The poll of 1,019 adults was conducted July 29-Aug. 8, 2024, using a sample drawn from NORC's probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population. The margin of sampling error for all respondents is plus or minus 4.0 percentage points.

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP's democracy initiative here. The AP is solely responsible for all content.
[3]
Most Americans don't trust AI-powered election information: AP-NORC/USAFacts survey
[4]
Most Americans Don't Trust AI-Powered Election Information: AP-NORC/USAFacts Survey
[5]
Most Americans don't trust AI-powered election information: AP-NORC/USAFacts survey
[6]
Most Americans don't trust AI, chatbots or search results to give them accurate election information
[7]
Most Americans don't trust AI-powered election information: AP-NORC/USAFacts survey
[8]
Most Americans don't trust AI-powered election information: survey
A recent survey reveals widespread distrust among Americans regarding AI-generated election information. As the 2024 presidential election approaches, concerns about misinformation and the role of AI in shaping public opinion are growing.
As the 2024 U.S. presidential election looms, a new survey from The Associated Press-NORC Center for Public Affairs Research and USAFacts has revealed a significant lack of trust in artificial intelligence (AI) as a source of election information. Only 5% of U.S. adults say they are extremely or very confident that information from AI chatbots or search results is reliable and factual [1].
The skepticism towards AI-powered election information appears to transcend political affiliations. Only 10% of Democrats and 7% of Republicans expressed even moderate trust in AI chatbots for election-related details. This bipartisan wariness underscores the challenges that tech companies and election officials face in leveraging AI technologies while maintaining public trust [2].
One of the primary concerns driving this distrust is the potential for AI systems to spread misinformation or exhibit bias. Experts warn that AI chatbots, if not properly managed, could become conduits for false or misleading information about candidates, voting procedures, and election outcomes [3].
In response to these concerns, major tech companies are taking proactive measures. Google, Microsoft, and OpenAI have announced plans to watermark AI-generated content and implement safeguards to prevent their chatbots from creating misleading election-related material [4].
Despite the rapid advancement of AI technology, the survey indicates that Americans still place more trust in traditional sources of election information. Local election officials, in particular, enjoy a higher level of confidence, with 16% of respondents expressing strong trust in their information [5].
As the election approaches, the challenge for both tech companies and election officials will be to harness the potential benefits of AI while addressing public concerns about its reliability and impartiality. Educating the public about the capabilities and limitations of AI in the context of election information may be crucial in building trust and ensuring the integrity of the democratic process.