Content created with artificial intelligence is flooding the web, making it less clear than ever what's real this election. From former president Donald Trump falsely claiming that images from a Vice President Kamala Harris rally were AI-generated to a spoofed robocall of President Joe Biden telling voters not to cast their ballots, the rise of AI is fueling rampant misinformation.
Deepfake detectors have been marketed as a silver bullet for identifying AI fakes, or "deepfakes." Social media giants use them to label fake content on their platforms. Government officials are pressuring the private sector to pour millions into building the software, fearing deepfakes could disrupt elections or allow foreign adversaries to incite domestic turmoil.
But the science of detecting manipulated content is in its early stages. An April study by the Reuters Institute for the Study of Journalism found that many deepfake detector tools can be easily duped with simple software tricks or editing techniques.
Meanwhile, deepfakes and manipulated video are proliferating.
One video of Harris resurfaced on X the day Biden dropped out of the race, quickly gaining more than 2 million views. In the clip, she seems to ramble incoherently. But the video is digitally altered.
How can you know for sure? The Washington Post talked with AI experts, computer scientists and representatives from deepfake detection companies to find out how the technology works -- and where it falls short.
Here are several key techniques deepfake detectors use to analyze content.
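At their core, most of these detectors are machine-learned classifiers: a model trained on labeled real and fake images scores new content as probably authentic or probably synthetic. Below is a minimal sketch of that inference pattern in Python, using the Hugging Face transformers library; the model name is a placeholder for illustration, not a real detection product.

```python
# Minimal sketch: scoring an image with a binary real/fake classifier.
# The model ID below is hypothetical -- substitute any deepfake-detection
# checkpoint. This illustrates the inference pattern, not a specific tool.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="example-org/deepfake-image-detector",  # hypothetical checkpoint
)

# The pipeline accepts a file path, URL, or PIL image and returns
# labels with confidence scores.
scores = detector("suspect_frame.jpg")
for result in scores:
    print(f"{result['label']}: {result['score']:.2%}")
```

In practice, the score is only as trustworthy as the model behind it, which is where the problems described below begin.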
If deepfake detection tools functioned properly, they could provide real-time fact-checking on platforms like Instagram, TikTok and X, eradicating AI-generated fake political ads, deceptive marketing ploys and misinformation before they take hold.
Policymakers from Washington to Brussels have grown increasingly concerned about the impact of deepfakes and are rallying around detectors as a solution. Europe's landmark AI legislation attempts to stem the impact of fake imagery through mandates that would help the public identify deepfakes, including through detection technology. The White House and top E.U. officials have been pressuring the tech industry to invest in new ways to detect AI-generated content in an effort to create online labels.
But deepfake detectors have significant flaws. Last year, researchers from universities and companies in the United States, Australia and India analyzed detection techniques and found their accuracy ranged from as high as 82 percent to as low as 25 percent. That means detectors often misidentify fake or manipulated clips as real -- and flag real clips as fake.
Hany Farid, a computer science professor at the University of California at Berkeley, said the algorithms that fuel deepfake detectors are only as good as the data they train on. The datasets are largely composed of deepfakes created in a lab environment and don't accurately mimic the characteristics of deepfakes that show up on social media. Detectors are also poor at spotting abnormal patterns in the physics of lighting or body movement, Farid said.
Detectors are better at spotting images that are common in their training data, researchers at the Reuters Institute for the Study of Journalism said. That means detectors may accurately flag deepfakes of Russian President Vladimir Putin while struggling with images of Estonian President Alar Karis, for example.
They also are less accurate when images contain dark-skinned people. And if people alter AI-generated photos with simple editing techniques such as blurring or file compression, they can fool the tools. Deepfakers are also adept at creating images that stay one step ahead of detection technology, AI experts said.
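To see why such simple edits work, consider what they change at the file level: a light blur and a lossy re-save are invisible to most viewers but disturb the pixel-level statistics many detectors rely on. A sketch of that blur-and-recompress pass using the Pillow imaging library (filenames are illustrative):

```python
# Sketch of the kind of simple edits researchers say can fool detectors:
# a light Gaussian blur followed by aggressive JPEG recompression.
from PIL import Image, ImageFilter

image = Image.open("ai_generated.png").convert("RGB")

# A slight blur smooths away telltale pixel-level generation artifacts.
blurred = image.filter(ImageFilter.GaussianBlur(radius=1.5))

# Low-quality JPEG re-encoding overwrites what remains with
# compression noise of its own.
blurred.save("laundered.jpg", format="JPEG", quality=35)
```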
Since detectors are far from perfect, experts said, people can employ old-school methods to spot fake images and video online. Zooming in on photos and videos lets viewers check for abnormal artifacts, such as disfigured hands or warped background details like floor tiles and walls. In fake audio, the absence of pauses and vocal inflections can make a person sound robotic.
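The zoom-in check works in any photo viewer, but the same idea can be scripted. A minimal Pillow sketch that crops a suspicious region and enlarges it for inspection (coordinates and filenames are illustrative):

```python
# Sketch of the "zoom in" check: crop a suspicious region (hands,
# background tiles, text on signs) and enlarge it for inspection.
from PIL import Image

image = Image.open("suspect_photo.jpg")

# (left, upper, right, lower) box around the region to examine.
region = image.crop((400, 600, 700, 900))

# Enlarge 4x with nearest-neighbor resampling so artifacts stay crisp
# instead of being smoothed away by interpolation.
zoomed = region.resize((region.width * 4, region.height * 4), Image.NEAREST)
zoomed.save("zoomed_region.png")
```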
It's also crucial to analyze suspected deepfakes for contextual clues. In the Harris video, for example, the lectern sign reads "Ramble Rants," which is the account name of the deepfaker.
Dozens of companies in Silicon Valley have dedicated themselves to spotting AI deepfakes, but most methods have fallen short.
So far, the industry's biggest hope has been watermarking, a process that embeds an invisible mark in an image that only computers can detect. But watermarks can be easily tampered with or duplicated, confusing the software meant to read them.
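The basic mechanics can be illustrated with a toy scheme that hides a message in the least significant bit of each pixel's blue channel. Production watermarks are far more sophisticated, but this sketch shows both why a mark can be invisible to the eye and why it is fragile: any recompression or resizing scrambles the hidden bits. The filenames and payload are illustrative.

```python
# Toy least-significant-bit watermark: invisible to viewers, readable by
# software, and destroyed by recompression or resizing. Not a production
# scheme -- only an illustration of the embed/extract idea.
import numpy as np
from PIL import Image

MESSAGE_BITS = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrary 8-bit payload

def embed(path_in: str, path_out: str) -> None:
    pixels = np.array(Image.open(path_in).convert("RGB"))
    flat = pixels.reshape(-1, 3)  # view over all pixels, one row each
    for i, bit in enumerate(MESSAGE_BITS):
        # Overwrite the lowest bit of the blue channel with a payload bit.
        flat[i, 2] = (flat[i, 2] & 0xFE) | bit
    # Save losslessly; JPEG re-encoding would wipe out the hidden bits.
    Image.fromarray(pixels).save(path_out, format="PNG")

def extract(path: str) -> list[int]:
    pixels = np.array(Image.open(path).convert("RGB"))
    flat = pixels.reshape(-1, 3)
    return [int(v & 1) for v in flat[: len(MESSAGE_BITS), 2]]

embed("generated.png", "watermarked.png")
print(extract("watermarked.png"))  # [1, 0, 1, 1, 0, 0, 1, 0]
```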
Last year, leading companies including OpenAI, Google, Microsoft and Meta signed a voluntary pledge committing them to develop tools to help the public detect AI-generated images. But tech executives are skeptical about the project.
"I received mixed answers from Big Tech," Věra Jourová, a top European Union official, told The Post at an Atlantic Council event. "Some platforms told me this was impossible."
The inability to detect AI-generated images can have real-world consequences. In late July, an image of a Sikh man urinating into a cup at a Canadian gas station went viral on X, fueling anti-immigrant rhetoric. But according to a post on X, the owner of the gas station claimed the incident never happened.
The Post uploaded the image to a popular deepfake detection tool from the nonprofit AI company TrueMedia. The tool found "little evidence of manipulation," indicating the image may be real.
Later, The Post received an email saying a human analyst at the company had found "substantial evidence of manipulation." Oren Etzioni, founder of TrueMedia, said its AI detectors are "not 100 percent accurate" and rely on human analysts to review results. Corrected results are used to "further train the AI detectors and improve performance," he said.
Farid said such inconsistent results are dangerous because people are "weaponizing" them to alter society's concept of reality.
"It's making it so that we don't trust or believe anything or anybody," he said. "It's a free-for-all now."