On Tue, 5 Nov, 4:05 PM UTC
3 Sources
[1]
Rapid spread of election disinformation stokes alarm
Experts and political figures are sounding the alarm on the spread of election disinformation on social media, putting leading platforms under intense scrutiny in the final days of the presidential race. From investigations into leading social media companies to prominent figures voicing concerns about false election claims, the past week saw increased discussion around the topic as some brace for postelection disinformation.

The long-lasting falsehoods over the 2020 election have made voters and election watchers more attuned to the potential for disinformation, though experts said recent technology advances are making it more difficult for users to discern fake content.

"We are seeing new formats, new modalities of manipulation of some sort including ... this use of generative AI [artificial intelligence], the use of these mock news websites to preach more fringe stories and, most importantly perhaps, the fact that now these campaigns span the entire media ecosystem online," said Emilio Ferrara, professor of computer science and communication at the University of Southern California.

"And they are not just limited to perhaps one mainstream platform like we [saw] in 2020 or even in 2016," said Ferrara, who co-authored a study that discovered a multiplatform network amplifying "conservative narratives" and former President Trump's 2024 campaign.

False content has emerged online throughout this election cycle, often in the form of AI-generated deepfakes. The images have sparked a flurry of warnings from lawmakers and strategists about attempts to influence the race's outcome or sow chaos and distrust in the electoral process.

Just last week, a video falsely depicting individuals claiming to be from Haiti and voting illegally in multiple Georgia counties circulated across social media, prompting Georgia Secretary of State Brad Raffensperger (R) to ask X and other social platforms to remove the content. Intelligence agencies later determined Russian influence actors were behind the video.

Thom Shanker, director of the Project for Media and National Security at George Washington University, noted the fake content used in earlier cycles was "sort of clumsy and obvious," unlike newer, AI-generated content. "Unless you really are applying attention and concentration and media literacy, a casual viewer would say, 'Well, that certainly looks real to me,'" he said, adding, "And of course, they are spreading at internet speeds."

Over the weekend, the FBI said it is "aware" of two fake videos claiming to be from the agency about the election. Such activity, the agency said, "undermines our democratic process and aims to erode trust in the electoral system."

News outlets are also trying to debunk fake content before it reaches large audiences. A video recently circulated showing a fake CBS News banner claiming the FBI warned citizens "to vote with caution due to high terrorist threat level." CBS said the screenshot "was manipulated with a fabricated banner that never aired on any CBS News platform."

Another screenshot showing a CNN "race alert" with Vice President Harris ahead of Trump in Texas reportedly garnered millions of views over the weekend before the network confirmed the image was "completely fabricated and manipulated." In one since-deleted post of the fake CNN screenshot, a user wrote, "Hey Texas, looks like they are stealing your election."
False content like this can go unchecked for longer periods because it is often posted into an "echo chamber" and shown only to users with similar interests and algorithmically curated feeds, said Sandra Matz, a professor at Columbia Business School. "It's not necessarily that there's more misinformation, it's also that it's hidden," Matz said, warning that it is not possible for experts to "easily access the full range of content that is shown to different people."

Social media companies have faced even more scrutiny after four news outlets last week released separate investigations into X, YouTube and Meta -- the parent company of Facebook and Instagram. All of the probes found those companies failed to stop some election misinformation before it went live.

Since purchasing X, Elon Musk and the company have faced repeated criticism for scaling back content moderation features and reinstating several conspiracy theorists' accounts. Concerns over disinformation on the platform increased earlier this year when the billionaire became a vocal surrogate for Trump and ramped up his sharing of false or misleading claims.

The Center for Countering Digital Hate (CCDH), an organization tracking online hate speech and misinformation, released a report Monday finding Musk's political posts have garnered 17.1 billion views since he endorsed Trump, more than twice as many views as the U.S. "political campaigning ads" recorded by X in the same period. Musk's X Corp. filed a lawsuit against the CCDH last year.

"It used to be that Twitter at least TRIED to police disinformation. Now its owner TRAFFICS in it, all as he invests hundreds of millions of dollars to elect Trump -- and make himself a power-wielding oligarch," Democratic strategist David Axelrod wrote Monday in a post on X.

Former Rep. Liz Cheney (R-Wyo.), one of the most vocal GOP critics of Trump, predicted last week that X will be a "major channel" for those claiming the election was stolen and called the platform a "cesspool" under Musk's leadership. An X spokesperson sent The Hill a list of actions the platform is taking to prevent false or fake claims from spreading, including its "Community Notes" feature intended to fact-check false or misleading posts.

ProPublica published a report Thursday finding eight "deceptive advertising networks" placed more than 160,000 election and social issue ads across more than 340 Facebook pages. Meta removed some of the ads after initially approving them but did not catch others with similar or identical content, the report stated. Forbes also reported Facebook allowed hundreds of ads falsely claiming the election may be rigged or postponed to run on its platform.

"We welcome investigation into this scam activity, which includes deceptive ads," Meta spokesperson Ryan Daniels told The Hill. "This is a highly-adversarial space. We continuously update our enforcement systems to respond to evolving scammer behavior and review and remove any ads that violate our policies."

Facebook has faced intense scrutiny in recent election cycles over its handling of political misinformation. In response, Meta has invested millions in election fact-checking and media literacy initiatives and prohibits ads that discourage users from voting, question the election's legitimacy or feature premature victory claims. Daniels said Meta has about 40,000 people globally working on safety and security, more than the company had in 2020.
Meta has "grown our fact checking program to more than 100 independent partners, and taken down over 200 covert coordinated influence operations," Daniels said. "Our integrity efforts continue to lead the industry, and with each election we incorporate the lessons we've learned to help stay ahead of emerging threats." A separate report published last week from The New York Times and progressive watchdog Media Matters for America claimed YouTube in June 2023 "decided to stop fighting" the false claim that President Biden stole the 2020 election. This included allowing more than 280 videos containing election misinformation from an estimated 30 conservative channels. "The ability to openly debate political ideas, even those that are controversial, is an important value -- especially in the midst of election season," a YouTube spokesperson said in response to the report. "And when it comes to what content can monetize, we strike a balance between allowing creators to express differing perspectives and upholding the higher bar that we and our advertisers have for where ads run." YouTube said the platform has a multilayered approach to connect users with authoritative news and information while ensuring a variety of viewpoints are represented. This includes policies against certain election misinformation, defined by YouTube as content "that can cause real-world harm, like certain types of technically manipulated content, and content interfering with democratic processes." Sacha Haworth, the executive director of the Tech Oversight Project, a nonprofit advocating for reining in tech giants' market power, said she was not surprised to see the flurry of reports. "We as a public, as lawmakers, as policy makers, must understand that this has to be the last time we allow them to do this to our elections," Haworth said. "They are never going to self-regulate."
[2]
How election misinformation thrived online during the 2024 presidential race -- and will likely continue in the days to come
The rapid rise of generative artificial intelligence and deepfake technologies had many dreading the year of the AI election. And as the 2024 campaign winds down, misinformation and disinformation about the election are everywhere online, from foreign bot accounts to billionaire-backed ad campaigns and the candidates themselves.

Since the 2020 presidential election, the volume and methods of disinformation campaigns have grown, says Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance. "The biggest red flag is that they're going to try to make you reactionary and get you out of your critical thinking process and into your reactionary, emotional thinking process," he told Fortune.

A recent survey of voters by YouGov and Tech Policy Press found 65% of Americans believe election-related misinformation on social media has worsened since 2020. Fake bot accounts and deepfakes are on the rise, often proliferated by foreign adversaries, while platforms like X have loosened efforts to limit misleading content.

Ahead of Election Day, U.S. intelligence officials debunked social media videos, some of which originated in Russia, claiming to show election interference in Pennsylvania and Georgia. Meanwhile, former President Donald Trump and his running mate JD Vance repeated false claims about Haitian immigrants in Ohio that began online. And Elon Musk has boosted right-wing conspiracies about immigrant voting, often via his social media platform.

The methods of misinformation

Musk's X was under increased scrutiny for amplifying right-wing misinformation in the days leading up to the election. The billionaire tech CEO claimed he was committed to uncensored freedom of speech when he bought the platform formerly known as Twitter for $44 billion in 2022. Reports show that false claims run rampant under X's community notes feature -- a crowdsourced method of fact-checking meant to replace the watchdogs laid off by Musk. Lately, with Musk all-in for Trump, that includes his inescapable self-promotion of right-wing rhetoric and political advertising, such as his false claims that Democrats "have imported massive numbers of illegals to swing states."

X's chatbot, Grok, also proliferated incorrect information about voting. The company changed its answers to redirect users to Vote.gov only after five secretaries of state flagged the misinformation. The election officials estimated that millions of people viewed incorrect responses to their questions about the election. Some election officials have reportedly hand-delivered personal warnings about spreading election misinformation to Musk.

A spokesperson for X told Fortune that the company has communicated with election regulators and other stakeholders to address threats, noting that its civic integrity policy will remain in place through the inauguration. Since Musk purchased Twitter, that policy no longer explicitly bans content that seeks to undermine the outcome of an election.

But X is far from the only platform harboring misinformation. Meta's Facebook made $1 million from misleading advertisements suggesting the election might be delayed or rigged, which Meta has since removed. A deceptive ad series appearing to come from the Harris campaign, but backed by Musk and other conservatives, also thrived on Facebook. The company blocked new ads about social issues, elections, or politics for the final week of the election -- a policy it has had since 2020.
In anticipation of misinformation about vote counts and election integrity after the polls close, Meta told advertisers Monday that it will extend its ban until later this week. A spokesperson for Meta said the company employs around 40,000 people globally working on safety and security.

On the encrypted messaging app Telegram, more than 500,000 right-wing users across 50 channels stoked claims that they would question and dispute any outcome other than a Trump victory.

Foreign sources of misinformation

The FBI on Saturday warned voters about two deepfake videos that emerged ahead of Tuesday's election. Both falsely claimed to come from the FBI, with one concerning ballot fraud and another about Kamala Harris's husband, Doug Emhoff. The BBC reported that those deepfakes were part of a larger Russian operation.

Federal agencies say Russia, China, and Iran are the most prominent foreign nations spreading disinformation in the U.S. While Russia favors Trump, Iran promotes Harris. China has spread false information about both candidates.

U.S. intelligence officials also identified a fake video of a Haitian man claiming to cast multiple votes in Georgia as a product of Russian influence. Another video that appeared to show a Pennsylvania poll worker destroying mail-in ballots was similarly attributed to Russian actors. Some of the other 300-plus videos impersonated news organizations and posted false claims about Harris and content about unrest and "civil war," the BBC found. But it added that most of these videos did not get significant views from real people on X; the tens of thousands of views they showed came mainly from bot accounts.

Despite the "unprecedented" surge in disinformation from foreign actors, Jen Easterly, the director of the U.S. Cybersecurity and Infrastructure Security Agency, said Monday that she hasn't seen evidence of adversaries affecting the election. Still, she expects deceptive information about election integrity to spread online in the coming days, until Congress certifies the results on Jan. 6.
[3]
As Americans head to the polls, social media companies are in the spotlight
With Americans heading to the polls on Election Day, social media companies like Meta, TikTok, X and YouTube are under intense pressure to handle what's expected to be a flood of disinformation, heightened by the rise of artificial intelligence.

It's been a major issue since the 2016 presidential election cycle, when foreign adversaries abused social platforms in an effort to sway the outcome. Most notably, Russian operatives flooded Facebook with posts promoting false information about Democratic nominee Hillary Clinton.

Meta says it has invested more than $20 billion in safety and security for global elections since 2016, and has more recently deprioritized political content on Instagram and Threads. The company has also been working with fact-checkers, amplifying verified voting resources and labeling AI-generated content ahead of Election Day.

There's only so much the companies can do. Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, told reporters in an October briefing that foreign actors from Russia, Iran and China have managed to launch viral disinformation campaigns. Russia was behind a fake video that showed a person ripping up ballots in Pennsylvania last month, according to a joint statement from CISA, the FBI and the Office of the Director of National Intelligence. The video amassed hundreds of thousands of views within hours of being posted on Elon Musk's social media platform X. "This Russian activity is part of Moscow's broader effort to raise unfounded questions about the integrity of the U.S. election and stoke divisions among Americans," the statement said.

Foreign actors aren't the only perpetrators. In late September, CNBC informed Meta about a series of Facebook posts containing misinformation on voting in North Carolina. On X, a beta feature in the "explore" section was spreading voter fraud conspiracy theories through the platform's AI software last month, a report from NBC News found. And TikTok failed to catch ads containing false election information despite its ban on political advertising, according to an October report from Global Witness.

"There's a lot of information out there, and frankly, a firehose of disinformation," Easterly said during the briefing. Here's how social media companies have been preparing for Election Day.
As the 2024 U.S. presidential election approaches, experts warn of an unprecedented surge in AI-generated disinformation across social media platforms, posing significant challenges to election integrity and voter trust.
As the United States approaches the 2024 presidential election, experts and political figures are raising alarms about the rapid spread of election disinformation on social media platforms. The rise of generative artificial intelligence (AI) and deepfake technologies has significantly amplified concerns, with many dubbing 2024 the "year of the AI election" [2].
The volume and sophistication of disinformation campaigns have grown substantially since the 2020 presidential election. Emilio Ferrara, a professor at the University of Southern California, notes, "We are seeing new formats, new modalities of manipulation... including this use of generative AI, the use of these mock news websites to preach more fringe stories and, most importantly perhaps, the fact that now these campaigns span the entire media ecosystem online" [1].
AI-generated deepfakes have emerged as a particularly potent form of false content. These highly realistic fabricated videos and images have sparked warnings from lawmakers and strategists about attempts to influence election outcomes or sow distrust in the electoral process. Thom Shanker, director of the Project for Media and National Security at George Washington University, observes that unlike earlier "clumsy and obvious" fake content, newer AI-generated material is much harder to distinguish from reality [1].
U.S. intelligence agencies have identified Russia, China, and Iran as prominent foreign nations spreading disinformation. A recent incident involved a fake video, attributed to Russian influence actors, falsely depicting individuals claiming to vote illegally in Georgia [1]. Domestically, social media platforms, particularly X (formerly Twitter), have faced criticism for amplifying right-wing misinformation. Elon Musk, X's owner, has been accused of personally promoting false claims about immigrant voting [2].
Major social media companies like Meta, X, YouTube, and TikTok are facing intense pressure to handle the expected flood of disinformation. Meta claims to have invested over $20 billion in safety and security measures for global elections since 2016 [3]. However, recent investigations have revealed shortcomings in content moderation across these platforms.
The scale and speed of disinformation spread pose significant challenges for content moderation. Sandra Matz, a professor at Columbia Business School, points out that false content can go unchecked for longer periods, often circulating in "echo chambers" where it is shown only to users with similar interests and algorithmically curated feeds [1]. This makes it difficult for experts to access and assess the full range of content shown to different users.
A recent survey by YouGov and Tech Policy Press found that 65% of Americans believe election-related misinformation on social media has worsened since 2020 [2]. In response, platforms have implemented various measures. X has introduced its "Community Notes" feature for fact-checking, while Meta has blocked new ads about social issues, elections, or politics in the final week before the election [2][3].
As Election Day unfolds, the battle against AI-powered disinformation continues, with cybersecurity experts, election officials, and social media companies working to maintain the integrity of the democratic process in an increasingly complex digital landscape.
As the U.S. presidential election approaches, foreign interference and disinformation campaigns from Russia, China, and Iran have become more sophisticated and pervasive, posing significant challenges to election integrity and public trust.
8 Sources
Artificial intelligence poses a significant threat to the integrity of the 2024 US elections. Experts warn about the potential for AI-generated misinformation to influence voters and disrupt the electoral process.
2 Sources
Meta claims that AI-generated content played a minimal role in election misinformation on its platforms in 2024, contrary to widespread concerns about AI's potential impact on global elections.
14 Sources
Major tech companies, including Meta, Google, and X (formerly Twitter), faced a Senate hearing on their efforts to combat foreign election interference. The companies outlined their strategies to protect the 2024 US elections from disinformation and manipulation.
2 Sources
Elon Musk's social media activity and platform policies have sparked debates about misinformation and election integrity. His actions on X (formerly Twitter) are under scrutiny as the 2024 US presidential election approaches.
6 Sources