Curated by THEOUTPOST
On Sat, 9 Nov, 8:01 AM UTC
9 Sources
[1]
ChatGPT blocked 250,000 AI image requests of US election candidates
More than 250,000 requests to OpenAI platforms to make deepfakes of US election candidates were rejected, the company says. ChatGPT refused more than 250,000 requests to generate images of US election candidates using OpenAI's artificial intelligence (AI) tools. OpenAI, the company behind the AI chatbot, said in a blog update on Friday that DALL-E, its image-generation platform, rejected requests to make images of President-elect Donald Trump, his pick for vice president JD Vance, current President Joe Biden, Democratic candidate Kamala Harris, and her vice-presidential pick, Tim Walz. The refusals were due to "safety measures" that OpenAI put in place before Election Day, the blog post said. "These guardrails are especially important in an elections context and are a key part of our broader efforts to prevent our tools being used for deceptive or harmful purposes," the update read. OpenAI said it has "not seen evidence" of any US election-related influence operations going viral through its platforms. The company said in August that it had stopped an Iranian influence campaign called Storm-2035 from generating articles about US politics and posing as conservative and progressive news outlets; accounts linked to Storm-2035 were later banned from OpenAI's platforms. Another update in October disclosed that OpenAI had disrupted more than "20 operations and deceptive networks" from across the globe that were using its platforms. Of these networks, the US election-related operations were unable to generate "viral engagement," the report found.
[2]
ChatGPT Rejected 'Over 250,000' Requests to Deepfake US Election Candidates
OpenAI claims ChatGPT rejected over 250,000 requests to generate DALL-E images of candidates in the month before the US presidential election, as part of wider efforts to minimize interference. This figure includes images of President-elect Trump, Vice President Harris, Vice President-elect Vance, President Biden, and Governor Walz. OpenAI also claims ChatGPT directed 2 million people looking for answers toward traditional news sources on the day of the election itself, including the Associated Press and Reuters. In addition, the firm said it sent over one million people to CanIVote.org, a website that provides nonpartisan advice on how to vote and related administrative issues, in the month leading up to the election. "These guardrails are especially important in the context of an election and are a key part of our broader efforts to prevent our tools from being used for deceptive or harmful purposes," OpenAI said in a blog post. However, initiatives like these haven't stemmed the recent tide of senior AI safety executives leaving the firm. Lilian Weng, a VP of research at OpenAI, announced her departure in a post on X this week, after seven years with the company. Weng joins co-founder and former chief scientist Ilya Sutskever and former head of AI safety Jan Leike, who both parted ways with the company in 2024. With the election looming, deepfake regulation attracted serious attention from all corners, including Big Tech and state legislators, in the latter half of 2024. In September, YouTube confirmed it was working on at least two deepfake-detection tools to help creators find videos where AI-generated copies of their voices or faces are being used without proper consent. In the same month, California Governor Gavin Newsom signed three bills aimed at limiting the spread of deepfakes on social media ahead of the election, including one criminalizing the intentional spreading of AI-generated content meant to influence elections.
Newsom said in the announcement that "it's critical that we ensure AI is not deployed to undermine the public's trust through disinformation - especially in today's fraught political climate."
[3]
ChatGPT rejected 250,000 election deepfake requests
OpenAI reveals the results of its strategy for the 2024 US presidential election. A lot of people tried to use OpenAI's DALL-E image generator during the election season, but the company said it was able to stop them from using it as a tool to create deepfakes. ChatGPT rejected over 250,000 requests to generate images of President Biden, President-elect Trump, Vice President Harris, Vice President-elect Vance and Governor Walz, OpenAI said in a new report. The company explained that this was a direct result of a safety measure it previously implemented so that ChatGPT would refuse to generate images of real people, including politicians. OpenAI has been preparing for the US presidential election since the beginning of the year. It laid out a strategy meant to prevent its tools from being used to spread misinformation and made sure that people asking ChatGPT about voting in the US were directed to CanIVote.org. OpenAI said 1 million ChatGPT responses directed people to the website in the month leading up to Election Day. The chatbot also generated 2 million responses on Election Day and the day after, telling people who asked for the results to check the Associated Press, Reuters and other news sources. OpenAI also made sure that ChatGPT's responses "did not express political preferences or recommend candidates even when asked explicitly." Of course, DALL-E isn't the only AI image generator out there, and there are plenty of election-related deepfakes going around social media. One such deepfake featured Kamala Harris in a campaign video altered so that she'd say things she didn't actually say, such as "I was selected because I am the ultimate diversity hire."
[4]
OpenAI's ChatGPT Rejects Over 250K Deepfake Requests In Run-Up To Election Day To Combat Misinformation
To counter AI-driven misinformation, OpenAI's ChatGPT rejected more than 250,000 deepfake image requests of candidates in the month before the 2024 election. What Happened: Last week, OpenAI said ChatGPT turned down a quarter of a million requests to create deepfake images of candidates using DALL-E, the company's AI art generator. ChatGPT was also programmed to answer logistical queries about voting by directing users to CanIVote.org, a U.S. voting information site run by the National Association of Secretaries of State. ChatGPT provided approximately one million responses directing users to the voting site in the month leading up to Nov. 5. On Election Day, ChatGPT was set to answer questions about election results by referring users to reputable news organizations like the Associated Press. "Around 2 million ChatGPT responses included this message on Election Day and the day following," the company stated in a blog post. Why It Matters: The announcement followed rising fears that AI could disrupt the campaign by generating deepfakes and conspiracy theories for online spread. In January, New Hampshire voters received robocalls with a deepfake voice of President Joe Biden urging them not to vote in the state's primary. Earlier this year, the Center for Countering Digital Hate raised concerns about the misuse of AI image-creation tools from OpenAI and Microsoft Corp. for election-related disinformation. In August, OpenAI uncovered and dismantled a covert Iranian influence operation leveraging ChatGPT to manipulate public opinion during the 2024 elections. By October, OpenAI had thwarted more than 20 global operations and deceptive networks that sought to misuse its models for election interference, according to a 54-page report published by the company.
[5]
ChatGPT rejected more than 250,000 image generations of presidential candidates prior to Election Day
OpenAI estimates that ChatGPT rejected more than 250,000 requests to generate images of the 2024 U.S. presidential candidates in the lead-up to Election Day, the company said in a blog on Friday. The rejections included image-generation requests involving President-elect Donald Trump, Vice President Kamala Harris, President Joe Biden, Minnesota Gov. Tim Walz and Vice President-elect JD Vance, OpenAI said. The rise of generative artificial intelligence has led to concerns about how misinformation created using the technology could affect the numerous elections taking place around the world in 2024. The number of deepfakes has increased 900% year over year, according to data from Clarity, a machine learning firm. Some included videos that were created or paid for by Russians seeking to disrupt the U.S. elections, U.S. intelligence officials say. In a 54-page October report, OpenAI said it had disrupted "more than 20 operations and deceptive networks from around the world that attempted to use our models." The threats ranged from AI-generated website articles to social media posts by fake accounts, the company wrote. None of the election-related operations were able to attract "viral engagement," the report noted. In its Friday blog, OpenAI said it hadn't seen any evidence that covert operations aiming to influence the outcome of the U.S. election using the company's products were able to successfully go viral or build "sustained audiences." Lawmakers have been particularly concerned about misinformation in the age of generative AI, which took off in late 2022 with the launch of ChatGPT. Large language models are still new and routinely spit out inaccurate and unreliable information. "Voters categorically should not look to AI chatbots for information about voting or the election -- there are far too many concerns about accuracy and completeness," Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, told CNBC last week.
[8]
ChatGPT told 2M people to get their election news elsewhere -- and rejected 250K deepfakes
Now that the election is over, the dissection can begin. As this was the first election in which AI chatbots played a significant part in voters' information diets, even approximate numbers are interesting to think about. For instance, OpenAI has stated that it told around 2 million users of ChatGPT to go look somewhere else. It didn't just give them a cold shoulder, but recommended trusted news sources like Reuters and the Associated Press. ChatGPT gave this type of "I'm just an AI, go read the actual news" response over 2 million times on Election Day and the day after, OpenAI explained in an update to a blog post on its elections approach. In the month leading up to the election, ChatGPT sent around a million people to CanIVote.org when they asked questions specific to voting. And interestingly, it also rejected some 250,000 requests to generate images of the candidates over the same period. For comparison, Perplexity, the AI search engine, made a major push to promote its own election information hub, resulting in some 4 million page views, the company claimed (per Bloomberg). It's difficult to say whether these numbers are low or high. Certainly they are nowhere near the leaders in news: CNN's digital properties saw around 67 million unique visitors on Election Day and a similar number the day after. But traffic is a tricky metric at the best of times. What matters this year is not that CNN got 10 times the traffic of these two AI platforms put together, but that it got only 10 times the traffic. Millions of people were interested enough, and trusted AI companies enough, to at least ask or give their election knowledge a shot. While OpenAI's play was the safe one, and Perplexity may have pulled off a risky bet, the AI industry in general is probably ecstatic that there was no serious gaffe by any of the big brands (except xAI, of course) and that users considered these chatbots and AI-powered platforms valuable as Election Day resources.
Luckily for them, this particular election, though controversial in its own way, was relatively decisive and resulted in very few grey areas like disputed results, recounts, and lawsuits. If the 2020 election had occurred this week, they might not have fared so well.
[9]
OpenAI says over 2 million people consulted ChatGPT for the 2024 election
Ahead of the election, OpenAI took steps to stop the spread of election misinformation on its platform. It banned the use of ChatGPT to impersonate candidates or governments, misrepresent how voting works, or discourage voting. It also digitally watermarked images created using DALL-E to make AI-generated images easier to identify. Additionally, the company partnered with the National Association of Secretaries of State to provide accurate answers and direct users to CanIVote.org, a nonpartisan hub of voting information. According to a blog post, in the month leading up to the election one million ChatGPT responses directed users to CanIVote.org, and the service rejected over 250,000 requests for deepfakes of President-elect Donald Trump, Vice President Kamala Harris, Vice President-elect JD Vance, and Governor Tim Walz. On Election Day and the day after, two million ChatGPT responses encouraged users to look to the Associated Press and Reuters for election results. Despite the steps taken by OpenAI to stop the spread of election misinformation, the Bipartisan Policy Center still had concerns after asking ChatGPT a variety of voting-related questions. The Bipartisan Policy Center cautioned that it is still "important to exercise discretion regarding its applications, especially when there are significant implications on our democracy." "It is easy to mislead users when solely relying on unconfirmed sources, such as ChatGPT and other chatbots, for answers," the policy center's report continued. "The bot has limitations of prompt length and information training and often does not answer with complete or consistent information. We would caution users to check ChatGPT answers with reliable resources such as government websites or their local election boards." This was the first presidential election where voters could turn to ChatGPT for election information, and per the Bipartisan Policy Center there's room for improvement before the midterms in 2026.
OpenAI's ChatGPT rejected more than 250,000 requests to generate images of U.S. election candidates in the lead-up to Election Day, as part of efforts to prevent AI-driven misinformation and election interference.
In a significant move to combat AI-driven misinformation during the 2024 U.S. presidential election, OpenAI, the company behind ChatGPT, implemented robust safety measures. The AI firm revealed that it blocked over 250,000 requests to generate images of election candidates using its DALL-E platform in the month leading up to Election Day [1][2][3].
The rejected image-generation requests included prominent political figures such as President-elect Donald Trump, Vice President Kamala Harris, President Joe Biden, Vice President-elect JD Vance, and Governor Tim Walz [1][4]. OpenAI emphasized that these guardrails were crucial in preventing its tools from being used for deceptive or harmful purposes, especially in the context of elections [2].
OpenAI's strategy extended beyond image generation restrictions:
Voting Information: ChatGPT directed approximately 1 million users to CanIVote.org, a nonpartisan voting information website, in the month before the election [2][3].
Election Results: On Election Day and the day after, ChatGPT generated about 2 million responses directing users to reputable news sources like the Associated Press and Reuters for election results [2][4].
Neutrality: OpenAI ensured that ChatGPT's responses remained politically neutral, avoiding expressing preferences or recommending candidates [3].
The rise of generative AI has intensified concerns about election interference. Deepfakes have increased by 900% year over year, according to machine learning firm Clarity [5]. In response to these threats:
OpenAI disrupted over 20 global operations and deceptive networks attempting to misuse its models for election interference [1][4].
The company found no evidence of successful viral engagement from covert operations using its platforms to influence the U.S. election [1][5].
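The reported behavior above, refusing image requests that name real candidates and redirecting voting questions to CanIVote.org, can be pictured as a simple prompt-screening layer in front of a model. The sketch below is purely illustrative: OpenAI has not published its guardrail implementation, and every name here (`guard_request`, `CANDIDATE_NAMES`, `VOTING_KEYWORDS`) is hypothetical.

```python
# Hypothetical sketch of a prompt-screening guardrail; not OpenAI's actual code.

# Candidates whose likenesses the image generator should refuse to render.
CANDIDATE_NAMES = {
    "donald trump", "kamala harris", "joe biden", "jd vance", "tim walz",
}

# Phrases that suggest the user is asking about voting logistics.
VOTING_KEYWORDS = {
    "register to vote", "polling place", "am i eligible to vote",
}


def guard_request(kind: str, prompt: str) -> str:
    """Classify a request and return a policy action.

    kind: "image" for image-generation requests, "chat" otherwise.
    Returns one of "refuse", "redirect_canivote", or "allow".
    """
    text = prompt.lower()
    if kind == "image" and any(name in text for name in CANDIDATE_NAMES):
        # Mirror the reported behavior: refuse images of real candidates.
        return "refuse"
    if kind == "chat" and any(kw in text for kw in VOTING_KEYWORDS):
        # Mirror the reported behavior: point voters to CanIVote.org.
        return "redirect_canivote"
    return "allow"
```

In practice such screening would use a trained moderation classifier rather than substring matching, which misses paraphrases and misspellings; the sketch only shows the control flow the blog post describes.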
The threat of AI-generated misinformation has prompted action from various stakeholders:
Legislation: California Governor Gavin Newsom signed three bills aimed at limiting the spread of deepfakes on social media [2].
Tech Industry: YouTube is developing at least two deepfake-detection tools to help creators identify unauthorized AI-generated copies of their voices or faces [2].
Expert Concerns: Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, warned against relying on AI chatbots for voting information due to accuracy concerns [5].
Despite OpenAI's efforts, the AI industry faces ongoing challenges:
Widespread Deepfakes: Election-related deepfakes continue to circulate on social media, highlighting the need for broader solutions [3].
Leadership Changes: OpenAI has experienced departures of senior AI safety executives, including VP of research Lilian Weng, co-founder Ilya Sutskever, and former head of AI safety Jan Leike [2].
As AI technology continues to evolve, the battle against misinformation and the protection of election integrity remain critical challenges for tech companies, policymakers, and society at large.