Curated by THEOUTPOST
On Thu, 24 Oct, 12:04 AM UTC
8 Sources
[1]
Voting officials face 'an uphill battle' to fight election lies
Photo: A man votes at a public library turned into an early voting polling station in Black Mountain, N.C., on Oct. 29. (Yasuyoshi Chiba/AFP via Getty Images)

Last week, a video began circulating on X, formerly Twitter, purporting to show a person in Pennsylvania ripping up ballots marked for former President Donald Trump and leaving alone those marked for Vice President Harris. The person curses the former president multiple times and at one point says, "Vote Harris."

The video is a fake. The envelopes and ballots shown don't match what that county actually uses to vote. U.S. officials said it was created and spread by Russia to sow doubt about the election.

But the incident showed what has been clear for some time now: Online in 2024, the deck is stacked against voting officials, maybe even more so than in 2020. The phony video was viewed hundreds of thousands of times shortly after it was posted. A statement from Bucks County debunking it three hours later was shared on X fewer than 100 times.

"They're fighting an uphill battle," said Darren Linvill, co-director of Clemson University's Media Forensics Hub, which tracks election influence campaigns. "I'm sure that they often feel like they're trying to put their finger in the dike before it bursts."

Linvill traced the video back to a Russian propaganda operation, first identified by Clemson, that has also spread faked videos targeting Harris and her running mate, Tim Walz, in recent weeks.

With less than a week of voting left, the election cycle has entered a fraught stage in which rumors, misleading claims and conspiracy theories are surging. And election administrators, intelligence officials and researchers don't expect that to end when polls close. They are bracing for what is expected to be a contentious period of counting and certifying votes, in which discord fueled by foreign and domestic sources could be corrosive to democracy.

Perhaps the biggest factor is Trump himself, who continues to falsely assert he won the 2020 election, despite courts and investigations finding no evidence of fraud. He has already set the stage to reject the results should he lose again this year. "If I lose -- I'll tell you what, it's possible. Because they cheat. That's the only way we're gonna lose, because they cheat," Trump said at a September rally in Michigan.

Despite the lack of evidence, his claims have been embraced by many Republicans and have eroded confidence in voting among a wide swath of Americans. "We've already set the standard that you are allowed to doubt the results on Election Day," said Linvill. "And that just doesn't bode well."

America's geopolitical adversaries -- particularly Russia, Iran and China -- are expected to seize on election fraud claims, however unfounded, and generate their own material undermining the results, as part of their larger goals of sowing chaos and discrediting democracy.

Officials charged with safeguarding voting say they've learned and are applying many lessons from 2016 and 2020 -- but they are also confronting a new set of challenges this year, including advances in artificial intelligence that make it easier and cheaper to generate realistic but fake images, video, audio and text, and the emergence of X owner Elon Musk as a leading Trump surrogate, donor and amplifier of election fraud conspiracy theories.
"Going into the 2024 election cycle, we are arguably facing the most complex threat environment compared to a prior cycle," said Cait Conley, who oversees election security efforts within the Department of Homeland Security's cyber agency, in an interview with NPR. To meet that heightened risk, government officials are counting on transparency and warnings to help Americans gird themselves against manipulation. Federal intelligence and law enforcement officials are taking a more aggressive approach this year in calling out foreign meddling. It's a stark difference from 2016, when the Obama administration was reticent to make public information about the full scope of Russia's efforts favoring Trump until after the election. This year, Russia is angling to boost Trump, as it did in the previous two presidential elections, while Iran is trying to undermine the former president, the intelligence community and private sector researchers say. China is targeting down-ballot races but does not appear to have a preference in the presidential election. All three regularly seize on divisive issues, from immigration to abortion to Israel's war in Gaza, to exacerbate discord among Americans. And they've all experimented with using AI to churn out more misleading content. "Our adversaries know that misinformation and disinformation is cheap and effective," said Sen. Mark Warner, D-Va., who chairs the Senate intelligence committee, in an interview with NPR. The federal government moved quickly to publicly attribute the fake Pennsylvania ballot video to Russia the day after the video first appeared on X -- a notably rapid turnaround for intelligence and law enforcement officials. And they warned they expect more such fakes in the coming days and weeks. In September, the Justice Department seized web domains it says Russian operatives used to spoof American news outlets and spread fake stories, indicted employees of Kremlin-backed broadcaster RT in a scheme to fund right-wing pro-Trump American influencers, and brought criminal charges against Iranian hackers accused of targeting the Trump campaign. DHS's Cybersecurity and Infrastructure Security Agency and the FBI have issued joint public service announcements alerting Americans to tactics foreign actors might use to discredit the election, including ransomware attacks or falsely claiming to have hacked voter registration systems. Officials say, so far, there's no sign that foreign adversaries have breached any election or voting systems. But attackers don't have to succeed in order to undermine confidence, Conley, the election security expert, said. "While these things like a [distributed denial of service] attack or ransomware could be disruptive to the elections process, it's not going to undermine the security or integrity of the vote casting or counting process," Conley said. "But our adversaries may try to convince the American people otherwise." At all levels of government, the message is consistent: Turn to the people running local elections for authoritative voting information. "What we are really trying to encourage them to do is to know that it is your state and local election official that is the signal through that noise," Conley said. And local election officials are making more of an effort than ever before to seek out media attention and educate the public on their processes. State election officials in a number of swing states have started holding multiple press conferences per week leading up to Nov. 5. 
"It's really important for us to get the message out there first and be as proactive as possible," said Isaac Cramer, who runs elections in Charleston County, S.C. The very subject of election "disinformation" itself has been turned into a partisan fight. A coordinated Republican legal and political campaign to cast efforts to mitigate or track the spread of falsehoods online as "censorship" has undercut the work of government agencies, online platforms and researchers, and driven one institution out of the field. Last month, Warner wrote an open letter urging CISA to do more to help state and local governments identify and respond to election misinformation and disinformation campaigns, and to coordinate communications between the government, tech companies and researchers. "The government needs to get this information out as quickly as possible, because literally the stakes are nothing less than our democracy," Warner told NPR. Cramer, the election official in South Carolina, said one challenge for local governments dealing with false information online is how splintered the internet is. He's recently started seeing a lot more wrong voting information on NextDoor, for instance. "We can't possibly have eyes on every platform and see everything that is being posted," Cramer said. Increasingly election officials are thinking outside the box to reach voters, because trying to fight fire with fire on social media has felt like a losing battle for years now, says Carolina Lopez, a former election official from Miami-Dade County, Fla. "Election officers around the country spend a whole lot of time producing content for social media, and it always kills me when I see, like, three likes and it's usually themselves, their [spokesperson] and their mom," said Lopez, who now runs the Partnership for Large Election Jurisdictions. "Election officials are trying to figure out, 'Well, what else can I do to be heard?'" In Montgomery County, Pa., Neil Makhija fashioned a voting ice cream truck to travel his county and help people vote. Cramer, in Charleston County, co-wrote a children's book. Derek Bowens, in Durham County, N.C., created an app that could deliver accurate election information directly to people there. One of the loudest voices elevating unverified rumors and outright falsehoods about the 2024 election also controls a major communications platform. Musk, the world's richest man, took control of Twitter two years ago and has remade the site, now called X, into a pro-Trump megaphone. Musk has become a major vector for baseless claims that Democrats are bringing in immigrants to illegally vote for them -- a conspiracy theory Trump and other Republicans have embraced and are using to lay the groundwork to claim the election was stolen should he lose. When election officials try to correct Musk's false claims, he has lashed out. Michigan Secretary of State Jocelyn Benson told CBS News that she and her staff received threats and harassing messages after Musk called her a liar when she fact-checked his claim that the state has more registered voters than eligible citizens. Musk has also shared AI-generated content on X without disclosure, including images and videos of Harris doing and saying things she didn't. Musk's America PAC is inviting users to share "potential incidents of voter fraud or irregularities" on an "Election Integrity Community" on X that has 13,000 members. 
The feed is filled with allegations that voting machines are switching votes from Trump to Harris and posts casting doubt on the security of mail-in ballots -- both narratives frequently debunked in 2020 -- as well as copies of the fake Bucks County video.

Danielle Lee Tomson, research manager at the University of Washington's Center for an Informed Public, said such "evidence generation infrastructure" is more robust this year. Even when these efforts identify real issues with voting, they tend to ignore the checks in the system that catch problems, she said. "If you see something seemingly suspicious, and then you take a picture of it and post it online, that can be decontextualized so quickly and not take into account all of the various remedies or the fact that there's nothing suspicious there at all," she said.

Local election officials are making their processes more transparent than ever this year, including livestreaming counting facilities and welcoming record numbers of election monitors. But such openness is a double-edged sword: Video feeds provide more material for content creators who may use it to push their own narratives of malfeasance -- such as the false claims against Georgia election workers amplified by Trump in 2020. That leaves officials operating with the knowledge that their every move will be scrutinized.

"We try to not commit unforced errors," said Stephen Richer, the Republican recorder in Maricopa County, Ariz., who has been an outspoken debunker of election lies. "But at the end of the day, if somebody really wants to make something look weird, I think they can do it, unfortunately."

In 2020, major social media platforms proactively boosted election officials as authoritative sources of information, made misleading posts about voting less visible, and added warning labels to false claims. Now, Musk has cut most of X's team policing the platform and removed many guardrails against false and misleading content. X is the most glaring example, but other platforms have also backed away from the more aggressive stance they took in 2020, cut back on the number of people working on trust and safety, and are generally quieter about the work they are doing. Meta now lets Facebook and Instagram users opt out of some features of its fact-checking program, while its text-based social network, Threads, has deemphasized news and politics.

Warner told NPR he's concerned that social media platforms have stepped back at a time when threats, including from AI-generated content, are more urgent. "Think about the devastating effect it'd have if somebody uses an AI image of what looks like an election official somehow destroying ballots or, you know, breaking into a drop box," he said. "That kind of imagery could literally spark violence in a close election after the fact and again undermine Americans' confidence in our system."

In the face of that landscape, election officials say they are controlling what they can control. They have spent countless hours reaching out to skeptical voters over the past four years, and they're now clinging to the hope that the work will make a difference in people's willingness to accept election results.

Michael Adams, the Republican secretary of state of Kentucky, is hoping the novelty of election denial will start wearing off as well. "For a while there, every six months, they'd come up with a new conspiracy theory. It would be debunked. They'd have egg on their face. They go back in their hole for six months and then come back," Adams said.
"You only get so many bites at that apple."
[2]
How Russia, China and Iran are interfering in the presidential election
When Russia interfered in the 2016 U.S. presidential election, spreading divisive and inflammatory posts online to stoke outrage, its posts were brash and riddled with spelling errors and strange syntax. They were designed to get attention by any means necessary. "Hillary is a Satan," one Russian-made Facebook post read.

Now, eight years later, foreign interference in U.S. elections has become far more sophisticated, and far more difficult to track. Disinformation from abroad -- particularly from Russia, China and Iran -- has matured into a consistent and pernicious threat, as the countries test, iterate and deploy increasingly nuanced tactics, according to U.S. intelligence and defense officials, tech companies and academic researchers.

The ability to sway even a small pocket of Americans could have outsize consequences for the presidential election, which polls generally show to be a neck-and-neck race. Russia, according to American intelligence assessments, aims to bolster the candidacy of former President Donald Trump, while Iran favors his opponent, Vice President Kamala Harris. China appears to have no preferred outcome. But the broad goal of these efforts has not changed: to sow discord and chaos in hopes of discrediting American democracy in the eyes of the world. The campaigns, though, have evolved, adapting to a changing media landscape and the proliferation of new tools that make it easy to fool credulous audiences. Here are the ways that foreign disinformation has evolved:

Now, disinformation is basically everywhere

Russia was the primary architect of American election-related disinformation in 2016, and its posts ran largely on Facebook. Now, Iran and China are engaging in similar efforts to influence American politics, and all three are scattering their efforts across dozens of platforms, from small forums where Americans chat about local weather to messaging groups united by shared interests. The countries are taking cues from one another, although there is debate over whether they have directly cooperated on strategies.

There are hordes of Russian accounts on Telegram seeding divisive, sometimes vitriolic videos, memes and articles about the presidential election. There are at least hundreds more from China that mimicked students to inflame tensions on American campuses this summer over the war in the Gaza Strip. Both countries also have accounts on Gab, a less prominent social media platform favored by the far right, where they have worked to promote conspiracy theories.

Russian operatives have also tried to support Trump on Reddit and forum boards favored by the far right, targeting voters in six swing states along with Hispanic Americans, video gamers and others identified by Russia as potential Trump sympathizers, according to internal documents disclosed in September by the Department of Justice. One campaign linked to China's state influence operation, known as Spamouflage, operated accounts under the name "Harlan" to create the impression that the source of the conservative-leaning content was an American, posting on four platforms: YouTube, X, Instagram and TikTok.

The content is far more targeted

The new disinformation being peddled by foreign nations aims not just at swing states, but also at specific districts within them, and at particular ethnic and religious groups within those districts. The more targeted the disinformation is, the more likely it is to take hold, according to researchers and academics who have studied the new influence campaigns.
"When disinformation is custom-built for a specific audience by preying on their interests or opinions, it becomes more effective," said Melanie Smith, the research director for the Institute for Strategic Dialogue, a research organization based in London. "In previous elections, we were trying to determine what the big false narrative was going to be. This time, it is subtle, polarized messaging that strokes the tension." Iran in particular has spent its resources setting up covert disinformation efforts to draw in niche groups. A website titled "Not Our War," which aimed to draw in U.S. military veterans, interspersed articles about the lack of support for active-duty soldiers with virulently anti-American views and conspiracy theories. Other sites included "Afro Majority," which created content aimed at Black Americans, and "Savannah Time," which sought to sway conservative voters in the swing state of Georgia. In Michigan, another swing state, Iran created an online outlet called "Westland Sun" to cater to Arab Americans in suburban Detroit. "That Iran would target Arab and Muslim populations in Michigan shows that Iran has a nuanced understanding of the political situation in America and is deftly maneuvering to appeal to a key demographic to influence the election in a targeted fashion," said Max Lesser, a senior analyst at the Foundation for Defense of Democracies. China and Russia have followed a similar pattern. On the social platform X this year, Chinese state media spread false narratives in Spanish about the Supreme Court, which Spanish-speaking users on Facebook and YouTube then circulated further, according to Logically, an organization that monitors disinformation online. Experts on Chinese disinformation said that inauthentic social media accounts linked to Beijing had become more convincing and engaging and that they now included first-person references to being an American or a military veteran. In recent weeks, according to a report from Microsoft's Threat Analysis Center, inauthentic accounts linked to China's Spamouflage targeted House and Senate Republicans seeking reelection in Alabama, Tennessee and Texas. Artificial intelligence is propelling this evolution Recent advances in artificial intelligence have boosted disinformation capabilities beyond what was possible in previous elections, allowing state agents to create and distribute their campaigns with more finesse and efficiency. OpenAI, whose ChatGPT tool popularized the technology, reported this month that it had disrupted more than 20 foreign operations that had used the company's products between June and September. They included efforts by Russia, China, Iran and other countries to create and fill websites and to spread propaganda or disinformation on social media -- and even to analyze and reply to specific posts. (The New York Times sued OpenAI and Microsoft last year for copyright infringement of news content; both companies have denied the claims.) "AI capabilities are being used to exacerbate the threats that we expected and the threats that we're seeing," Jen Easterly, the director of the Cybersecurity and Infrastructure Security Agency, said in an interview. "They're essentially lowering the bar for a foreign actor to conduct more sophisticated influence campaigns." The utility of commercially available AI tools can be seen in the efforts of John Mark Dougan, a former deputy sheriff in Florida who now lives in Russia after fleeing criminal charges in the United States. 
Working from an apartment in Moscow, he has created scores of websites posing as American news outlets and used them to publish disinformation, effectively doing by himself the work that, eight years ago, would have involved an army of bots. Dougan's sites have circulated several disparaging claims about Harris and her running mate, Gov. Tim Walz of Minnesota, according to NewsGuard, a company that has tracked them in detail.

China, too, has deployed an increasingly advanced tool kit that includes AI-manipulated audio files, damaging memes and fabricated voter polls in campaigns around the world. This year, a deepfake video of a Republican member of Congress from Virginia circulated on TikTok, accompanied by a Chinese caption falsely claiming that the politician was soliciting votes for a critic of Beijing who sought (and later won) the Taiwanese presidency.

It's becoming much harder to identify disinformation

All three countries are also becoming better at covering their tracks. Last month, Russia was caught obscuring its attempts to influence Americans by secretly backing a group of conservative American commentators employed through Tenet Media, a digital platform created in Tennessee in 2023. The company served as a seemingly legitimate facade for publishing scores of videos with pointed political commentary as well as conspiracy theories about election fraud, COVID-19, immigrants and Russia's war with Ukraine. Even the influencers who were covertly paid for their appearances on Tenet said they did not know the money came from Russia.

In an echo of Russia's scheme, Chinese operatives have been cultivating a network of foreign influencers to help spread Beijing's narratives, creating a group described as "foreign mouths," "foreign pens" and "foreign brains," according to a report last fall by the Australian Strategic Policy Institute.

The new tactics have made it harder for government agencies and tech companies to find and remove the influence campaigns -- all while emboldening other hostile states, said Graham Brookie, the senior director at the Atlantic Council's Digital Forensic Research Lab. "Where there is more malign foreign influence activity, it creates more surface area, more permission for other bad actors to jump into that space," he said. "If all of them are doing it, then the cost for exposure is not as high."

Technology companies aren't doing as much to stop disinformation

Foreign disinformation has exploded as tech giants have all but given up their efforts to combat it. The largest companies, including Meta, Google, OpenAI and Microsoft, have scaled back their attempts to label and remove disinformation since the last presidential election. Others have no teams in place at all. The lack of a cohesive policy among the tech companies has made it impossible to form a united front against foreign disinformation, security officials and tech company executives said.

"These alternative platforms don't have the same degree of content moderation and robust trust and safety practices that would potentially mitigate these campaigns," said Lesser, of the Foundation for Defense of Democracies. He added that even larger platforms such as X, Facebook and Instagram were trapped in an eternal game of Whac-a-Mole as foreign state operatives quickly rebuilt influence campaigns that had been removed.
Alethea, a company that tracks online threats, recently discovered that an Iranian disinformation campaign using accounts named after hoopoes, the colorful birds, had resurfaced on X despite having been banned twice before.
[3]
As Election Looms, Disinformation 'Has Never Been Worse'
The Democratic Party's vice-presidential nominee has been falsely accused of sexually molesting students. The claims have been spread by a former deputy sheriff from Florida, now openly working in Moscow for Russia's propaganda apparatus, on dozens of social media platforms and fake news outlets. A faked video purporting to show one victim -- creating fake people is a recurring Russian tactic -- received more than 5 million views on X, a platform owned by the world's richest man, Elon Musk.

Mr. Musk has not only gone all in for the Republican nominee, former President Donald J. Trump, but he has also used his platform to reanimate discredited claims about the validity of the election's outcome.

Smears, lies and dirty tricks -- what we call disinformation today -- have long been a feature of American presidential election campaigns. Two weeks before this year's vote, however, the torrent of half-truths, lies and fabrications, both foreign and homegrown, has exceeded anything that came before, according to officials and researchers who document disinformation.

The effect on the outcome on Nov. 5 remains to be seen, but it has already debased what passes for political debate about the two major-party candidates, Mr. Trump and Vice President Kamala Harris. It has also corroded the foundations of the country's democracy, undermining what was once a shared confidence that the country's elections, regardless of who won, have been free and fair.

Russia, as well as Iran and China, has gleefully stoked many of the narratives to portray American democracy as dysfunctional and untrustworthy. Politicians and influential media figures have in turn given foreign adversaries plenty of fodder to work with, inciting and amplifying divisiveness for partisan advantage.

"They do have different tactics and different approaches to influence operations, but their goals are the same," Jen Easterly, the director of the Cybersecurity and Infrastructure Security Agency in Washington, said in an interview, referring to foreign adversaries. "Very simply, they're looking to undermine American trust in our democratic institutions and the election specifically, and to sow partisan discord."

Numerous factors have contributed to the surge in disinformation, which Ms. Easterly and other officials have warned will continue far beyond Election Day. Social media platforms have helped harden media ecosystems into distinct, disparate partisan enclaves where facts contradicting preconceived narratives are often unwelcome. Artificial intelligence has become an accelerant, making fake or fanciful content ubiquitous online with merely a few keystrokes. In today's political debate, it seems, facts matter less than feelings, which are easily manipulated online.

It all played out in full in recent weeks, after two devastating hurricanes killed hundreds across the Southeast and prompted outlandish conspiracy theories and violent threats against rescue workers. A fictitious image of a girl clutching a puppy in a life raft so moved Amy Kremer, the chairwoman in Georgia for the Republican National Committee, who posted it this month, that she stood by it even after she learned it was not real. "Y'all, I don't know where this photo came from and honestly, it doesn't matter," she wrote on X, where her initial post received more than 3 million views. "It is seared into my mind forever."
Mr. Trump's running mate, Senator JD Vance, essentially used the same excuse after facing criticism for popularizing a racially tinged fiction that Haitian immigrants in Springfield, Ohio, were eating the city's cats and dogs. He argued he was reflecting local residents' actual concerns, if not actual facts. (Mr. Trump, for his part, stood by the original claims in an interview with Fox News's Howard Kurtz on Monday. "What about the goose, the geese, what about the geese, what happened there?" he said. "They were all missing.")

In much the same way, Mr. Trump has succeeded in reviving allegations that the outcome of the race against President Joseph R. Biden Jr. in 2020 was not legitimate -- simply by flatly refusing to concede otherwise. Election officials, as well as numerous courts, have said repeatedly that there was no election fraud in 2020.

A concerted conservative legal and political campaign that went all the way to the Supreme Court has abetted the falsehoods about election fraud anyway. The project has undercut government agencies, universities and research organizations that once worked with the social media giants -- especially Facebook and Twitter -- to slow the spread of disinformation about voting.

In hindsight, the efforts to challenge the results four years ago were haphazard, even farcical, compared with what is happening now. At one point the people pushing claims about election fraud mistakenly chose Four Seasons Total Landscaping, a small family business in Philadelphia, as a venue for a news conference instead of the more famous hotel downtown. Even so, Mr. Trump's challenge culminated in the violence on Capitol Hill on Jan. 6, 2021. This year's efforts to discredit the election, many officials and experts say, could do greater harm.

"Now, that same election denial impulse is far more organized, far more strategic and far better funded," said Michael Waldman, the chief executive of the Brennan Center for Justice at New York University School of Law, a nonpartisan legal and policy institute. "And now it is something that tens of millions of people believe and share."

Perhaps the single biggest factor in today's disinformation landscape has been Mr. Musk's ownership of Twitter, which he bought two years after the 2020 election and rebranded as X. Twitter's previous chief executive, Jack Dorsey, along with Mark Zuckerberg, the head of Meta, the owner of Facebook and Instagram, faced public and government pressure to enforce their own policies against intentionally false or harmful content, especially around the Covid pandemic and the 2020 election.

In August, Mr. Zuckerberg wrote a mea culpa to Representative Jim Jordan of Ohio, the Republican chairman of the House Judiciary Committee, who has led the conservative charge against moderation by the major social media platforms. Mr. Zuckerberg said that, in hindsight, Facebook had wrongly restricted access to some content about the pandemic and the laptop belonging to Mr. Biden's son, Hunter. "We've changed our policies and processes to make sure this doesn't happen again -- for instance, we no longer temporarily demote things in the U.S. while waiting for fact-checkers," he wrote.

Meta's stance signaled a desire to step back from America's fractious political debate, though the company says it continues to moderate false election content. Mr. Musk, by contrast, has used X to thrust himself squarely into the middle of it.
He dismantled the platform's teams that flagged false or hateful content and welcomed back scores of users who had been banned for violating company rules. He has raised millions of dollars for Mr. Trump's bid and campaigned for him in appearances in Pennsylvania. In posts to his 200 million followers -- more than Mr. Trump had in his heyday on the platform -- he has also repeated unsubstantiated claims that the Democrats are recruiting ineligible immigrants to register to vote. Last week, he echoed the refuted assertion that Dominion Voting Systems rigged the count in 2020, a falsehood that resulted in a $787.5 million settlement paid by Fox News.

Mr. Musk also, according to a recent study, played an outsize role in amplifying content promoted by Tenet Media, a news outlet that the Justice Department accused last month of covertly using $10 million in laundered funds from Russia to pay right-wing media personalities like Tim Pool, Benny Johnson and Dave Rubin. It is not clear whether Mr. Musk knew of the Russian links -- the influencers claimed they did not. He certainly engaged regularly with Tenet Media's content, though, and Tenet regularly tagged him, presumably to draw his attention, according to the study, published by Reset Tech, a nonprofit research and policy organization based in London. At least 70 times from September 2023 to September 2024, he responded to or shared accounts linked to Tenet and its influencers with his followers on X -- many of the posts relating to this year's election, the study found. X did not respond to a request for comment.

The disinformation challenge has grown even as government officials have become more attentive and, as this election approached, more proactive than in previous election cycles. The Office of the Director of National Intelligence, the F.B.I. and the Cybersecurity and Infrastructure Security Agency have issued regular updates on intelligence collected about interference from foreign actors, principally Russia, Iran and China. The goal is to focus public attention on foreign attempts to manipulate the election, but it is not clear that such efforts -- themselves criticized as partisan -- can have a significant impact on views at home.

One of the trailblazers in fact-checking in the United States has been PolitiFact, which the journalist Bill Adair founded in 2007 to rate the claims politicians make on a scale running from true through mostly true and mostly false to "pants on fire." Mr. Adair now says that the effort has done little or nothing to stem the flow of lies that cloud the nation's political debates.

"It's never been worse," he said in an interview following the publication of a new book about his fact-checking life, "Beyond the Big Lie." The problem, he said, is not fact-checking itself but that even the act of calling out falsehoods has been characterized by some as a political exercise. While "all politicians lie" might be a common lament, Mr. Adair said that the blame has tilted significantly toward the Republican Party. "You have a convergence of a politician and a party that believe they can benefit from lying," he said.

John Mark Dougan, the former sheriff's deputy from Florida now working for Moscow's propaganda apparatus, has previously declined to comment on his connections with Russia's disinformation campaigns, but his contributions are clear. He appeared in a video on the platform Rumble earlier this month, detailing what he and the host claimed was an account by an exchange student from Kazakhstan accusing Gov. Tim Walz of Minnesota, Ms. Harris's running mate, of sexual abuse.
He has spread that and other smears on multiple social media platforms and in scores of news outlets he has created from his apartment in Moscow.

In a text message, he reacted angrily to questions about making false accusations against Mr. Walz. "What about E Jean Carrols claims?" he wrote, imprecisely, about E. Jean Carroll, the woman who accused Mr. Trump of sexual assault. Referring to her vulgarly, he said she "didn't have any evidence whatsoever," even though a jury in New York ordered Mr. Trump to pay her $83 million for defaming her in 2019 after she came forward with her accusation.

Mr. Dougan then shared links to the Hindustan Times, an English-language news outlet in Delhi, and to two sites that he created, Patriot Pioneer and State Stage, both included on a list of websites the F.B.I. and the Cybersecurity and Infrastructure Security Agency cited last week as platforms for Russian disinformation campaigns. "Lots of publications have been writing about this," he wrote.
[6]
How Russia, China and Iran are interfering in the US presidential election
Foreign disinformation campaigns from Russia, China, and Iran in U.S. elections have become more advanced and difficult to trace. They target specific groups and districts, using AI and new tactics to hide their origins. Tech companies have reduced efforts to combat this, making it harder to stop these sophisticated influence campaigns.

When Russia interfered in the 2016 U.S. presidential election, spreading divisive and inflammatory posts online to stoke outrage, its posts were brash and riddled with spelling errors and strange syntax. They were designed to get attention by any means necessary. "Hillary is a Satan," one Russian-made Facebook post read. Now, eight years later, foreign interference in U.S. elections has become far more sophisticated, and far more difficult to track. Disinformation from abroad -- particularly from Russia, China and Iran -- has matured into a consistent and pernicious threat, as the countries test, iterate and deploy increasingly nuanced tactics, according to U.S. intelligence and defense officials, tech companies and academic researchers. The ability to sway even a small pocket of Americans could have outsize consequences for the presidential election, which polls generally show to be a neck-and-neck race. Russia, according to American intelligence assessments, aims to bolster the candidacy of former President Donald Trump, while Iran favors his opponent, Vice President Kamala Harris. China appears to have no preferred outcome. But the broad goal of these efforts has not changed: to sow discord and chaos in hopes of discrediting American democracy in the eyes of the world. The campaigns, though, have evolved, adapting to a changing media landscape and the proliferation of new tools that make it easy to fool credulous audiences. Here are the ways that foreign disinformation has evolved:

Now, disinformation is basically everywhere.

Russia was the primary architect of American election-related disinformation in 2016, and its posts ran largely on Facebook. Now, Iran and China are engaging in similar efforts to influence American politics, and all three are scattering their efforts across dozens of platforms, from small forums where Americans chat about local weather to messaging groups united by shared interests. The countries are taking cues from one another, although there is debate over whether they have directly cooperated on strategies. There are hordes of Russian accounts on Telegram seeding divisive, sometimes vitriolic videos, memes and articles about the presidential election. There are hundreds more from China that mimicked students to inflame tensions on American campuses this summer over the war in the Gaza Strip. Both countries also have accounts on Gab, a less prominent social media platform favored by the far right, where they have worked to promote conspiracy theories. Russian operatives have also tried to support Trump on Reddit and forum boards favored by the far right, targeting voters in six swing states along with Hispanic Americans, video gamers and others identified by Russia as potential Trump sympathizers, according to internal documents disclosed in September by the Department of Justice. One campaign linked to China's state influence operation, known as Spamouflage, operated accounts under the name Harlan on four platforms -- YouTube, X, Instagram and TikTok -- to create the impression that the source of the conservative-leaning content was an American.

The content is far more targeted.
The new disinformation being peddled by foreign nations aims not just at swing states, but also at specific districts within them, and at particular ethnic and religious groups within those districts. The more targeted the disinformation is, the more likely it is to take hold, according to researchers and academics who have studied the new influence campaigns. "When disinformation is custom-built for a specific audience by preying on their interests or opinions, it becomes more effective," said Melanie Smith, the research director for the Institute for Strategic Dialogue, a research organization based in London. "In previous elections, we were trying to determine what the big false narrative was going to be. This time, it is subtle polarized messaging that strokes the tension." Iran in particular has spent its resources setting up covert disinformation efforts to draw in niche groups. A website titled "Not Our War," which aimed to draw in U.S. military veterans, interspersed articles about the lack of support for active-duty soldiers with virulently anti-American views and conspiracy theories. Other sites included "Afro Majority," which created content aimed at Black Americans, and "Savannah Time," which sought to sway conservative voters in the swing state of Georgia. In Michigan, another swing state, Iran created an online outlet called "Westland Sun" to cater to Arab Americans in suburban Detroit. "That Iran would target Arab and Muslim populations in Michigan shows that Iran has a nuanced understanding of the political situation in America and is deftly maneuvering to appeal to a key demographic to influence the election in a targeted fashion," said Max Lesser, a senior analyst at the Foundation for Defense of Democracies. China and Russia have followed a similar pattern. On the social platform X this year, Chinese state media spread false narratives in Spanish about the Supreme Court, which Spanish-speaking users on Facebook and YouTube then circulated further, according to Logically, an organization that monitors disinformation online. Experts on Chinese disinformation said that inauthentic social media accounts linked to Beijing had become more convincing and engaging and that they now included first-person references to being an American or a military veteran. In recent weeks, according to a report from Microsoft's Threat Analysis Center, inauthentic accounts linked to China's Spamouflage targeted House and Senate Republicans seeking reelection in Alabama, Tennessee and Texas.

Artificial intelligence is propelling this evolution.

Recent advances in artificial intelligence have boosted disinformation capabilities beyond what was possible in previous elections, allowing state agents to create and distribute their campaigns with more finesse and efficiency. OpenAI, whose ChatGPT tool popularized the technology, reported this month that it had disrupted more than 20 foreign operations that had used the company's products between June and September. They included efforts by Russia, China, Iran and other countries to create and fill websites and to spread propaganda or disinformation on social media -- and even to analyze and reply to specific posts. (The New York Times sued OpenAI and Microsoft last year for copyright infringement of news content; both companies have denied the claims.)
"AI capabilities are being used to exacerbate the threats that we expected and the threats that we're seeing," Jen Easterly, the director of the Cybersecurity and Infrastructure Security Agency, said in an interview. "They're essentially lowering the bar for a foreign actor to conduct more sophisticated influence campaigns." The utility of commercially available AI tools can be seen in the efforts of John Mark Dougan, a former deputy sheriff in Florida who now lives in Russia after fleeing criminal charges in the United States. Working from an apartment in Moscow, he has created scores of websites posing as American news outlets and used them to publish disinformation, effectively doing by himself the work that, eight years ago, would have involved an army of bots. Dougan's sites have circulated several disparaging claims about Harris and her running mate, Gov. Tim Walz of Minnesota, according to NewsGuard, a company that has tracked them in detail. China, too, has deployed an increasingly advanced tool kit that includes AI-manipulated audio files, damaging memes and fabricated voter polls in campaigns around the world. This year, a deepfake video of a Republican member of Congress from Virginia circulated on TikTok, accompanied by a Chinese caption falsely claiming that the politician was soliciting votes for a critic of Beijing who sought (and later won) the Taiwanese presidency. It's becoming much harder to identify disinformation. All three countries are also becoming better at covering their tracks. Last month, Russia was caught obscuring its attempts to influence Americans by secretly backing a group of conservative American commentators employed through Tenet Media, a digital platform created in Tennessee in 2023. The company served as a seemingly legitimate facade for publishing scores of videos with pointed political commentary as well as conspiracy theories about election fraud, COVID-19, immigrants and Russia's war with Ukraine. Even the influencers who were covertly paid for their appearances on Tenet said they did not know the money came from Russia. In an echo of Russia's scheme, Chinese operatives have been cultivating a network of foreign influencers to help spread its narratives, creating a group described as "foreign mouths," "foreign pens" and "foreign brains," according to a report last fall by the Australian Strategic Policy Institute. The new tactics have made it harder for government agencies and tech companies to find and remove the influence campaigns -- all while emboldening other hostile states, said Graham Brookie, the senior director at the Atlantic Council's Digital Forensic Research Lab. "Where there is more malign foreign influence activity, it creates more surface area, more permission for other bad actors to jump into that space," he said. "If all of them are doing it, then the cost for exposure is not as high." Technology companies aren't doing as much to stop disinformation. The foreign disinformation has exploded as tech giants have all but given up their efforts to combat disinformation. The largest companies, including Meta, Google, OpenAI and Microsoft, have scaled back their attempts to label and remove disinformation since the last presidential elections. Others have no teams in place at all. The lack of cohesive policy among the tech companies has made it impossible to form a united front against foreign disinformation, security officials and executives at tech companies said. 
"These alternative platforms don't have the same degree of content moderation and robust trust and safety practices that would potentially mitigate these campaigns," said Lesser of the Foundation for Defense of Democracies. He added that even larger platforms such as X, Facebook and Instagram were trapped in an eternal game of Whac-a-Mole as foreign state operatives quickly rebuilt influence campaigns that had been removed. Alethea, a company that tracks online threats, recently discovered that an Iranian disinformation campaign that used accounts named after hoopoes, the colorful bird, recently resurfaced on X despite having been banned twice before. This article originally appeared in The New York Times.
[7]
AI's Underwhelming Impact On the 2024 Elections
Early this year, watchdogs and technologists warned that artificial intelligence would sow chaos into the 2024 U.S. elections, spreading misinformation via deepfakes and personalized political advertising campaigns. Those fears spread to the public: More than half of U.S. adults are "extremely or very concerned" about AI's negative impacts on the election, according to a recent Pew poll. Yet with the election one week away, fears of the election being derailed or defined by AI now appear to have been overblown. Political deepfakes have been shared across social media, but have been just a small part of larger misinformation campaigns. The U.S. Intelligence Community wrote in September that while foreign actors like Russia were using generative AI to "improve and accelerate" attempts to influence voters, the tools did not "revolutionize such operations." Tech insiders acknowledge 2024 was not a breakthrough year for generative AI in politics. "There are a lot of campaigns and organizations using AI in some way or another. But in my view, it did not reach the level of impact that people anticipated or feared," says Betsy Hoover, the founder of Higher Ground Labs, a venture fund that invests in political technology. At the same time, researchers warn that the impacts of generative AI on this election cycle have yet to be fully understood, especially because of their deployment on private messaging platforms. They also contend that even if the impact of AI on this campaign seems underwhelming, it is likely to balloon in coming elections as the technology improves and its usage grows among the general public and political operatives. "I'm sure in another year or two the AI models will get better," says Sunny Gandhi, the vice president of political affairs at Encode Justice. "So I'm pretty worried about what it will look like in 2026 and definitely 2028." Generative AI has already had a clear impact on global politics. In countries across South Asia, candidates used artificial intelligence to flood the public with articles, images and video deepfakes. In February, an audio deepfake was disseminated that falsely purported to depict London Mayor Sadiq Khan making inflammatory comments before a major pro-Palestinian march. Khan says that the audio clip inflamed violent clashes between protestors and counter-protestors. There have been examples in the U.S. too. In February, New Hampshire residents received voicemails from an audio deepfake of Joe Biden, in which the President appeared to discourage them from voting. The FCC promptly banned robocalls containing AI-generated voices, and the Democratic political consultant who created the voicemails was indicted on criminal charges, sending a strong warning to others who might try similar tactics. Still, political deepfakes were elevated by politicians, including former President Donald Trump. In August, Trump posted AI images of Taylor Swift endorsing him, as well as Kamala Harris in communist garb. In September, a video that was linked back to a Russian disinformation campaign accused Harris of being involved in a hit-and-run accident and was seen on social media millions of times. Russia has been a particular hotbed for malicious uses of AI, with state actors generating text, images, audio, and video that they have put to use in the U.S., often to amplify fears around immigration. It's unclear whether these campaigns have had much of an impact on voters.
The Justice Department said it disrupted one of those campaigns, known as Doppelganger, in September. The U.S. Intelligence Community wrote the same month that these foreign actors faced several challenges in spreading these videos, including the need to "overcome restrictions built into many AI tools." Independent researchers have also worked to track the spread and impact of AI creations. Early this year, a group of researchers at Purdue created an incidents database of political deepfakes, which has since logged more than 500 incidents. Surprisingly, a majority of those videos have not been created to deceive people, but rather are satire, education, or political commentary, says researcher Christina Walker. However, Walker says that these videos' meanings often change for viewers as they spread across different political circles. "One person posts a deepfake and writes, 'This is a deepfake. I created it to show X, Y and Z.' Twenty retweets later, someone else is sharing it as if it's real," Walker says. Daniel Schiff, another researcher on the project, says many deepfakes are likely designed to reinforce the opinions of people who were already predisposed to believe their messaging. Other studies suggest that most forms of political persuasion have very small effects at best -- and that voters actively dislike political messages that are personally tailored to them. That might render moot one of AI's primary powers: to create targeted messages cheaply. In August, Meta reported that generative AI-driven tactics have provided "only incremental productivity and content-generation gains" to influence campaigns. The company concluded that the tech industry's strategies to neutralize their spread "appear effective at this time." Other researchers are less confident. Mia Hoffmann, a research fellow at Georgetown's Center for Security and Emerging Technology, says it's difficult to ascertain AI's influence on voters for several reasons. One is that major tech companies have limited the amount of data they share about posts. Twitter ended free access to its API, and Meta recently shut down CrowdTangle on Facebook and Instagram, making it harder for researchers to track hate speech and misinformation across those platforms. "We're at the mercy of what these companies share with us," Hoffmann says. Hoffmann also worries that AI-created misinformation is proliferating on closed messaging platforms like WhatsApp, which are especially popular with diasporic immigrant communities in the U.S. It's possible that robust AI efforts are being deployed to influence voters in swing states, but that we may not know about their effectiveness until after the election, she adds. "As the electoral importance of these groups has grown, they are increasingly being targeted with tailored influence campaigns that aim to suppress their votes and sway their opinions," Hoffmann says. "And because of the encryption of the apps, misinformation is more hidden from fact-checking efforts." Other political actors are attempting to wield generative AI tools in more mundane ways. Campaigns can use AI tools to trawl the web to see how a candidate is being perceived in different social and economic circles, conduct opposition research, summarize dozens of news articles, and write social media copy tailored to different audiences. Many campaigns are short-staffed, have tight budgets and are short on time. AI, the theory goes, could replace some of the low-level work typically done by interns.
A spokesperson for the Democratic National Committee told TIME that members of the organization were using generative AI to make their work "more efficient while maintaining strong safeguards" -- including to help officials draft fundraising emails, write code, and spot unusual patterns of voter removals in public data records (a simple statistical baseline for that last task is sketched after this article). A spokesperson for the Republican National Committee did not respond to a request for comment. A variety of startups now offer AI tools for political campaigns. They include BattleGroundAI, which can write copy for hundreds of political ads "in minutes," the company says, as well as Grow Progress, which runs a chatbot tool that helps people generate and tweak persuasion tactics and messages to potential voters. Josh Berezin, a co-founder at Grow Progress, says that dozens of campaigns have "experimented" with their chatbot this year to create ads. But Berezin says that the adoption of those AI tools has been slow. Political campaigns are often risk-averse, and many strategists have been hesitant to jump in, especially given the public's negative perception of the use of generative AI in politics. The New York Times reported in August that only a handful of candidates were using AI -- and several of those that did employ the tech wanted to hide that fact from the public. "If someone was saying, 'This is the AI election,' I haven't really seen that," Berezin says. "I've seen a few people explore using some of these new tools with a lot of gusto, but it's not universal." However, it's likely that the role of generative AI will only expand in future elections. Improved technology will allow campaigns to create messaging and fundraise more quickly and inexpensively. AI could also aid the bureaucracy of vote processing. Automated signature verification -- in which a mail voter's signature is matched with their signature on file -- was used in several counties in 2020, for instance (a minimal sketch of that matching step also appears after this article). But improved AI technology will also lead to more believable deepfake video and audio clips, likely leading both to the spread of disinformation and an increasing distrust in all political messaging and its veracity. "This is a threat that's going to be increasing," says Hoffmann, the Georgetown researcher. "Debunking and identifying these influence campaigns is going to become even more time-consuming and resource-consuming."
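The DNC did not describe how its voter-removal monitoring actually works. One plausible baseline, sketched below with invented county names and counts, is to flag any jurisdiction whose latest removal count sits several standard deviations above its own history; this is an illustration of the idea, not a description of the DNC's tooling.

```python
import statistics

def flag_unusual_removals(history, latest, z_threshold=3.0):
    """Flag jurisdictions whose latest voter-removal count sits more than
    z_threshold standard deviations above their historical mean."""
    flagged = {}
    for county, counts in history.items():
        mean = statistics.mean(counts)
        stdev = statistics.stdev(counts) or 1.0  # guard against zero spread
        z = (latest[county] - mean) / stdev
        if z > z_threshold:
            flagged[county] = round(z, 1)
    return flagged

# Invented weekly removal counts per county -- illustration only, not real data.
history = {"Adams": [120, 130, 125, 118], "Baker": [80, 75, 90, 85]}
latest = {"Adams": 129, "Baker": 400}
print(flag_unusual_removals(history, latest))  # flags only Baker
```

A z-score baseline like this only surfaces candidates for human review; a real monitor would also account for seasonal list-maintenance cycles, which legitimately produce large removal batches.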
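Similarly, the automated signature verification mentioned above reduces, at its simplest, to scoring the similarity of two scanned signatures and routing borderline scores to human review. The sketch below assumes the signatures have already been scanned into same-sized grayscale arrays and uses plain normalized correlation; production systems rely on trained models and far more robust features.

```python
import numpy as np

def signature_similarity(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Normalized correlation between two same-sized grayscale scans:
    1.0 means pixel-identical, values near 0 mean unrelated strokes."""
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-9)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-9)
    return float((a * b).mean())

def verify(on_file: np.ndarray, ballot: np.ndarray,
           accept: float = 0.8, review: float = 0.5) -> str:
    """Three-way outcome: high scores auto-accept, borderline scores go
    to a human reviewer, and only clearly dissimilar scans are rejected."""
    score = signature_similarity(on_file, ballot)
    if score >= accept:
        return "accept"
    if score >= review:
        return "human review"
    return "reject"

# Demo with synthetic 64x64 "scans": the ballot signature is the on-file
# signature plus mild noise, so it should score well above the threshold.
rng = np.random.default_rng(0)
on_file = rng.random((64, 64))
ballot = on_file + rng.normal(0.0, 0.1, size=(64, 64))
print(verify(on_file, ballot))  # expected: "accept"
```

The three-way outcome mirrors how counties actually use the technology: software narrows the pile, and contested signatures go through a human "cure" process rather than being rejected by the machine alone.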
[8]
American creating deep fakes targeting Harris works with Russian intel, documents show
Russian documents reviewed by The Post expose the workings of a Moscow network that has become a potent source of fake news targeting American voters. A former Palm Beach County deputy sheriff who fled to Moscow and became one of the Kremlin's most prolific propagandists is working directly with Russian military intelligence to pump out deepfakes and circulate misinformation that targets Vice President Kamala Harris's campaign, according to Russian documents obtained by a European intelligence service and reviewed by The Washington Post. The documents show that John Mark Dougan, who also served in the U.S. Marines and has long claimed to be working independently of the Russian government, was provided funding by an officer from the GRU, the country's military intelligence service. Some of the payments were made after fake news sites he created began to have difficulty accessing Western artificial intelligence systems this spring and he needed an AI generator -- a tool that can be prompted to create text, photos and video. Dougan's liaison at the GRU is a senior figure in Russian military intelligence working under the cover name Yury Khoroshevsky, the documents show. The officer's real name is Yury Khoroshenky, though he is only referred to as Khoroshevsky in the documents, and he serves in the GRU's Unit 29155, which oversees sabotage, political interference operations and cyberwarfare targeting the West, according to two European security officials who spoke on the condition of anonymity to discuss sensitive intelligence. The more than 150 documents -- which were shared with The Post to demonstrate the extent of Russian interference through Dougan and focus mostly on the period between March 2021 and August 2024 -- for the first time expose some of the inner workings of a network that researchers and intelligence officials say has become the most potent source of fake news emanating from Russia and targeting American voters over the past year. Disinformation researchers say Dougan's network was probably behind a recent viral fake video smearing Democratic vice-presidential nominee Tim Walz, which U.S. intelligence officials on Tuesday said was created by Russia. It received nearly 5 million views on X in less than 24 hours, Microsoft said. Since September 2023, posts, articles and videos generated by Dougan and some of the Russians who work with him have garnered 64 million views, said McKenzie Sadeghi, who has closely followed Dougan's sites and is a researcher at NewsGuard, a company that tracks disinformation online. "Compared with other Russian disinformation campaigns, Dougan has a clear understanding of what would resonate with Western audiences and the political atmosphere, which I think has made this more effective," Sadeghi said. The documents show that Dougan is also subsidized and directed by a Moscow institute founded by Alexander Dugin, a far-right imperialist ideologue sometimes referred to as "Putin's brain" because of his influence on the revanchist thinking of the Russian president; Dugin's ideas became a driving force behind Russia's Ukraine invasion. One 2022 document shows that Dugin's Eurasia movement, which promotes his theories of a Russian empire, "actively cooperates with the Russian Defense Ministry." Dougan's contact at the Moscow institute, the Center for Geopolitical Expertise, is its head, Valery Korovin.
According to Korovin's social media page on the Russian version of Facebook, he was awarded a medal by Russian President Vladimir Putin in 2023 for "services to the Fatherland" for "carrying out special tasks." Korovin also works closely with Khoroshenky, who under his cover name serves as the institute's deputy director, the documents show. The documents show payments directly from Khoroshenky to Dougan's bank account in Moscow starting in April 2022 and frequent meetings between Khoroshenky, Dougan and Korovin. "We will not be beaten," said Khoroshenky in one discussion with Korovin, the documents show, after a new server was launched this summer allowing Dougan to add to the myriad sites he'd already created and to restart one of the domains that had been blocked. Dougan is responsible for content on dozens of fake news sites with names such as DC Weekly, Chicago Chronicle and Atlanta Observer, according to the documents and disinformation researchers. In the months that followed Dougan's reboot with the GRU-facilitated new server and AI generator, the sites and fake news videos spread by Dougan and his associates have produced some of the most viral Russian disinformation targeting Harris, according to Microsoft and NewsGuard, including a deepfake audio issued in August that purported to show Barack Obama implying the Democrats ordered the assassination attempt on Donald Trump. Most recently, Dougan was the initial source for a false claim behind the fake video that went viral alleging Walz abused a student at the high school where he taught, and NewsGuard believes his network may be behind its further dissemination. Eleven days before a video appeared with what NewsGuard says was probably an AI-generated persona claiming to be a former Walz student, Dougan appeared on a podcast making a similar but separate false claim, presenting an anonymous man claiming to be a former exchange student from Kazakhstan. Other Kremlin-directed efforts to sway the U.S. presidential election have included the Doppelgänger campaign run by Kremlin political strategists recently targeted by the Department of Justice for its cloning of legitimate news outlets including Fox News and The Washington Post -- a Russian operation that had been previously reported on by The Post. The Justice Department has also accused RT, the Russian state media outlet, of funneling hundreds of thousands of dollars to American social media influencers to parrot Kremlin talking points. In a telephone interview with The Post, Dougan denied being behind sites such as DC Weekly, and he said he didn't know Korovin or Khoroshenky or have any connections with Russian military intelligence or the Russian government. Dougan insisted he operated independently and said "no one sends me money for anything." He later claimed he worked as an IT consultant for an American company, and said the documents The Post referred to must have been fabricated. "I will tell you hypothetically, if they were my sites," he said, "then I am merely fighting fire with fire because the West is f------ lying about everything that's happening. They are lying about everything." Korovin said he was an academic who was interested only in thoughts, ideas and philosophy, adding that the claims related to the documents appear to represent "a collection of accidentally combined moments of information taken from who knows where, most of which seem absurd and ridiculous," and many of which he said he was "hearing for the first time." 
Dugin said "any suggestion about our supposed affiliation with the GRU or to any attempts to manipulate foreign journalists or influence the political landscape in the U.S. are completely unjustified." He said his Eurasian movement did not participate in any official partnerships with Russian government organizations, including the Defense Ministry. Khoroshenky did not respond to requests for comment. Outlandish claims Dougan's use of websites to attack perceived enemies stretches back to his time in law enforcement in the United States. He said he clashed with people in the Palm Beach County Sheriff's Office after he said he'd complained about abuses by a sergeant in his unit who boasted on Facebook about beating people he arrested. Dougan had worked at the sheriff's office in Palm Beach from 2005 to 2008 but faced over 11 internal affairs investigations before he left, according to The Palm Beach Post. A jury also awarded a fellow Palm Beach sheriff's deputy $275,000 after it found that Dougan had pepper-sprayed and arrested the officer without cause. Dougan claimed the internal affairs investigations were a result of his whistleblowing over the sergeant's alleged assaults. After Dougan resigned his post in Palm Beach, he moved to Maine, where he was soon was dismissed from a police department there over alleged sexual harassment complaints, officials in Maine said. In the Marine Corps, he also had a checkered career. Dougan served from May 1996 to July 1998, an abbreviated stint as most Marines serve at least four years. He also left as a lance corporal, a rank most Marines attain after just a few months, and he never deployed, according to the Pentagon, which wouldn't characterize his discharge status, citing privacy concerns. Dougan's rank as he was discharged and the date at which he became a lance corporal, in April 1998, nearly two years into his time in uniform, are "indicative of the fact that the character of his service was incongruent with the Marine Corps' expectations and standards," said Yvonne Carlock, a service spokeswoman. After returning to Florida from Maine, Dougan created PBSOTalk, a site he said he intended as a place to air complaints by other deputies about the Palm Beach County Sheriff's Office but which soon became home to corruption allegations and smears involving his former superiors. In 2016, Dougan posted confidential data about thousands of police officers, federal agents and judges on PBSOTalk, prompting the FBI and local police to search Dougan's home. The next year, he was indicted on 21 state charges of extortion and wiretapping. By then he had fled to Moscow, a city he said he had visited several times before after establishing an online relationship with a Russian woman. It's not clear how Dougan first came to the attention of Russia's propagandists, but some of the skills he homed in Florida are a hallmark of his work in Moscow, researchers say -- using an online authentic gloss to make outlandish claims. As early as June 2019 -- more than two years before the invasion of Ukraine -- Korovin had proposed in a letter to Russia's Ministry of Defense that his center organize "an internet war against the U.S. on its territory." "The possibilities posed by internet wars really are limitless, and only with their help can we assert complete strategic parity with our geopolitical opponents," Korovin wrote in the letter, which was part of the trove of documents reviewed by The Post. 
Dugin, the Russian ideologue who is Korovin's boss, had earlier called for "geopolitical war with America ... to weaken, demoralize, deceive and, in the end, beat our opponent to the maximum," the documents show. Dozens of the documents show that Korovin's center has worked closely with a string of "independent" foreign journalists who have wound up in Moscow, and it paid some of them, including Dougan. In March 2021, Korovin said he and Dougan were "one team" and that Korovin would provide as much support as possible, one of the documents shows. All the while, Khoroshenky sent instructions to Korovin outlining tasks for Dougan and other reporters' coverage of the war in Ukraine. In one example, Khoroshenky demanded the journalists, including Dougan, publish "within one hour" reports stating that Russian troops had killed foreign mercenaries in Ukraine, the documents show. "Then we will give bonuses to everyone," he said. Korovin and the GRU's Khoroshenky ostensibly supported Dougan as he sought to parlay the political asylum he won in Russia in 2017 into Russian citizenship, while pointing out that since he was wanted in the United States, Dougan had few other options, the documents show. The process continued until summer 2023, when Dougan finally obtained citizenship; at one point a frustrated Dougan said he was on the verge of going to the Chinese embassy to seek Beijing's support, the documents show. "The time comes when it's enough," Dougan said, according to one document. By then, Dougan felt he had established his worth. Before the Russian invasion, he had traveled to Ukraine and posted a video on YouTube claiming that the United States was running bioweapons labs there, a false claim that Russia used as one of the pretexts for its war. As Russian forces foundered in the first weeks of the invasion, Dougan told Korovin he felt he would be of greater assistance using his background in the Marines to train Russian troops. Korovin told him he would achieve more in securing "our victory" by promoting his fake biolabs report, the documents show. That summer, Dougan traveled to Azovstal, the vast Ukrainian steel plant in Mariupol that was the scene of heavy Russian bombardment. Dougan produced a 30-minute report from the ruins as a foreign correspondent for "One America News," the American far-right TV network. In the report, Dougan alleged Ukrainian President Volodymyr Zelensky was to blame for the deaths of thousands of innocent people, saying "he betrayed his country for his U.S. masters." Dougan suggested the death and destruction in the city was caused entirely by Ukrainian troops, without mentioning the relentless Russian bombing or Russia's invasion of Ukraine. OAN ran a headline with his piece saying the Western media was covering up atrocities by Ukrainian troops against civilians. A spokesperson for One America News said Dougan only appeared on the network once and was not paid for the report, adding that the network has since cut all ties with him. His co-reporter on the trip was Daria Dugina, Dugin's daughter, who claimed Ukrainians were "carpet-bombing their own people." A few months later, Dugina was killed in a car bomb just outside Moscow. Dougan told The Post that Dugina was a "wonderful lady" and said he agreed with many of the points made by her father about the necessity of a multipolar world in which the United States would not "dictate everything to everyone."
By mid-2023, Dougan was generating material for the DC Weekly site, boasting to Korovin that it was already garnering hundreds of thousands of views every month, the documents show. He explained he was using artificial intelligence to populate the site with Russian news articles translated into English and to emphasize a tone critical of NATO and the U.S. government. The quality is "superlative," Dougan said. In October 2023, he garnered his first viral hit: an article on DC Weekly alleging that Zelensky's wife, Olena Zelenska, had spent $1.1 million at Cartier during an official visit to New York. He bragged to Korovin that the story had wide pickup. The article had cited a fake video interview with an alleged former employee of the Cartier shop who weeks later was identified as a St. Petersburg student and beauty salon manager. Another early fake article traced to Dougan said that Zelensky had used U.S. aid to buy two luxury yachts. The false claim was cited by several senior Republicans as a reason to halt funding for Ukraine. But Dougan's success also brought growing scrutiny. Researchers at Clemson University traced DC Weekly's IP address back to other domains that the researchers said were affiliated with Dougan (a simplified sketch of this grouping step appears after this article), while disinformation researchers at Microsoft and NewsGuard were soon highlighting the links too. By spring of this year, several of Dougan's fake news sites were experiencing technical difficulties. One domain, the Chicago Chronicle, was blocked, and Dougan had to find a new domain for DC Weekly. Dougan began lobbying Korovin for funding to build a powerful new server that would generate its own AI content, ending dependence on Western technology. Dougan "is experienced in the technical details of information technology and knows that the more his infrastructure and content is produced in-house, the less likely that he'll be detected conducting his operations or restricted from using outside services," said Clint Watts, head of Microsoft's Threat Analysis Center. The new server led to an explosion of new output and an increase in the number of sites, while Dougan also began registering some new domains in Iceland to further conceal his fingerprints, NewsGuard's Sadeghi said. At the same time, audience reach grew dramatically from 37.7 million in May to 64 million by October, Sadeghi said. "The substantial increase in the network's views and narratives shows that despite being repeatedly exposed and reported on, the falsehoods have continued to reach a large audience," she said. For now, Dougan and his associates appear to be focused on smearing Harris. But concerns are growing that they could soon switch to producing deepfakes that question the integrity of the U.S. election. "If they shift from trying to influence the outcome of the election to interfering in the conduct of the election, this would be very concerning as Election Day nears," Watts said. Dan Lamothe and Cate Brown contributed to this report.
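The Clemson step described above -- tracing DC Weekly's IP address to affiliated domains -- amounts, at its simplest, to grouping lookalike news sites by shared hosting infrastructure. Below is a minimal sketch of that grouping using placeholder domains; real investigations also draw on passive DNS history, TLS certificates and registrar records.

```python
import socket
from collections import defaultdict

def group_by_ip(domains):
    """Resolve each domain and bucket together domains that share an IP;
    shared hosting is one (weak) signal of common infrastructure."""
    buckets = defaultdict(list)
    for domain in domains:
        try:
            ip = socket.gethostbyname(domain)
        except socket.gaierror:
            continue  # unresolvable domains are skipped, not grouped
        buckets[ip].append(domain)
    return {ip: group for ip, group in buckets.items() if len(group) > 1}

# Placeholder domains for illustration; a real investigation would start
# from a seed site and a list of candidate lookalikes.
suspects = ["example.com", "example.net", "example.org"]
print(group_by_ip(suspects))
```

A shared IP alone is not proof of common ownership, since unrelated sites often sit behind the same hosting provider or CDN, which is why researchers corroborate with the other infrastructure signals noted above.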
As the U.S. presidential election approaches, foreign interference and disinformation campaigns from Russia, China, and Iran have become more sophisticated and pervasive, posing significant challenges to election integrity and public trust.
As the 2024 U.S. presidential election approaches, foreign interference and disinformation campaigns have evolved into a more sophisticated and pervasive threat. Intelligence officials, researchers, and tech companies report that Russia, China, and Iran are deploying increasingly nuanced tactics to influence American voters and undermine trust in democratic institutions [1][2][3].
The landscape of election interference has changed dramatically since 2016. Foreign actors have adapted their strategies to the current media environment:
Widespread platform usage: Disinformation is no longer confined to major social media platforms. Foreign operatives now spread content across numerous platforms, including messaging apps and niche forums [2][4].
Targeted campaigns: Foreign actors are creating highly targeted content for specific demographics, regions, and even districts within swing states [2][4].
Improved content quality: Unlike the obvious errors in 2016, current disinformation is more sophisticated and harder to detect [2][4].
Each foreign actor has distinct goals and methods:
Russia: Aims to boost former President Donald Trump, as it did in the previous two presidential elections [1].
Iran: Works to undermine Trump and favors Vice President Kamala Harris [1].
China: Does not appear to have a preference in the presidential race but targets down-ballot contests [1].
AI has become a significant factor in the creation and dissemination of disinformation. It enables the rapid production of convincing fake content, including images, videos, and text, making it increasingly difficult for users to distinguish between real and fabricated information [3][5].
Domestic political figures and media personalities often amplify foreign-originated disinformation, whether knowingly or unknowingly. This includes the spread of election fraud claims and other divisive narratives [3][4].
U.S. government agencies have become more proactive in identifying and publicizing foreign interference attempts. The Cybersecurity and Infrastructure Security Agency (CISA) and other bodies are working to alert the public about potential tactics [1][3].
However, changes in social media policies and ownership, particularly Elon Musk's acquisition of Twitter (now X), have complicated efforts to combat disinformation [3][4].
The flood of disinformation has significantly impacted political discourse in the U.S. It has eroded public trust in democratic institutions and processes, with many Americans now questioning the legitimacy of election outcomes [3][4].
As the election approaches, officials warn that these disinformation efforts are likely to intensify, posing ongoing challenges to maintaining the integrity of the democratic process [1][3][5].