Curated by THEOUTPOST
On Thu, 19 Sept, 4:06 PM UTC
6 Sources
[1]
Elon Musk's reposts of Kamala Harris deepfakes may not fly under new California law | TechCrunch
California's newest law could land social media users who post, or repost, AI deepfakes that deceive voters about upcoming elections in legal trouble. Governor Gavin Newsom suggests that AB 2839, which went into effect immediately after he signed it on Tuesday, could be used to reel in Elon Musk's retweets, among others who spread deceptive content. "I just signed a bill to make this illegal in the state of California," said Newsom in a tweet, referencing an AI deepfake Musk reposted earlier this year that made it appear as if Kamala Harris called herself an incompetent candidate and a diversity hire (she did not). "You can no longer knowingly distribute an ad or other election communications that contain materially deceptive content -- including deepfakes," Newsom said later in the tweet. California's new law targets the distributors of AI deepfakes, specifically if the post depicts a candidate on California ballots and the poster knows it's a fake that will cause confusion. AB 2839 is unique because it doesn't go after the creators of AI deepfakes, nor the platforms they appear on, but rather those who maliciously spread them. Anyone who sees an AI deepfake on social media can now file for injunctive relief, meaning a judge could order the poster to take it down, or award monetary damages against the person who posted it. It's one of America's strongest laws against election-related AI deepfakes heading into the 2024 presidential election. A sponsor that helped draft AB 2839, the California Initiative for Technology and Democracy (CITED), tells TechCrunch this law can impact any social media user -- not just Musk -- who posts or reposts election-related AI deepfakes with malice. "Malice" means the poster knew it was false and would confuse voters. "[AB 2839] goes after the creators or distributors of content, if the content falls within the terms of the bill," said CITED's policy director, Leora Gershenzon, in an interview with TechCrunch.
"This is materially deceptive content that is distributed knowing it's false, with reckless disregard of the truth, and is likely to influence the election." When asked whether Musk could face legal action for reposting deepfakes, Newsom did not rule out the possibility. "I think Mr. Musk has missed the punchline," said Governor Newsom at a press conference Thursday. "Parody is still alive and well in California, but deepfakes and manipulations of elections -- that hurts democracy." Specifically, the new law bans election-related AI deepfakes from TV, radio, phone, texts, or any communication "distributed through the internet." The bill is not exclusive to political campaign ads, which other laws have focused on, but also covers posts from everyday people. AB 2839 creates a window -- 120 days before a California election and 60 days after -- during which there are stricter rules about what you can, and cannot, post about political candidates on social media. "The real goal is actually neither the damages or the injunctive relief," said Gershenzon. "It's just to have people not do it in the first place. That actually would be the best outcome... to just have these deepfakes not fraudulently impact our elections." This law pertains to candidates for state and local elections in California, as well as federal candidates who will appear on California's ballot, such as Kamala Harris and Donald Trump. If there's an obvious disclaimer on an AI deepfake, stating that it has been digitally altered, then AB 2839 does not apply to it. Musk is already testing California's will to enforce the new law. He reposted the deepfake resembling Kamala Harris that Newsom referenced in his tweet on Tuesday, amassing more than 31 million impressions on X. Musk also reposted an AI deepfake resembling Governor Newsom on Wednesday, which received more than 7 million impressions. Musk and X are facing other legal problems related to moderation.
For instance, a Brazilian Supreme Court judge fined the X Corporation on Thursday for skirting the country's ban on the platform. The judge previously said X's failure to combat fake news and hate speech is harming Brazil's democracy.
[2]
2 of California's 3 new deepfake laws are being challenged in court by creator of Kamala Harris parody videos
California now has some of the toughest laws in the United States to crack down on election deepfakes ahead of the 2024 election after Gov. Gavin Newsom signed three landmark proposals this week at an artificial intelligence conference in San Francisco. The state could be among the first to test out such legislation, which bans the use of AI to create and circulate false images and videos in political ads close to Election Day. But now, two of the three laws, including one that was designed to curb the practice in the 2024 election, are being challenged in court through a lawsuit filed Tuesday in Sacramento. Those include one, effective immediately, that allows any individual to sue for damages over election deepfakes, while the other requires large online platforms, like X, to remove the deceptive material starting next year. The lawsuit, filed by a person who created parody videos featuring altered audio of Vice President and Democratic presidential nominee Kamala Harris, says the laws censor free speech and allow anybody to take legal action over content they dislike. At least one of his videos was shared by Elon Musk, owner of the social media platform X, which then prompted Newsom to vow, in a post on X, to ban such content. The governor's office said the law doesn't ban satire and parody content. Instead, it requires the disclosure of the use of AI to be displayed within the altered videos or images. "It's unclear why this conservative activist is suing California," Newsom spokesperson Izzy Gardon said in a statement. "This new disclosure law for election misinformation isn't any more onerous than laws already passed in other states, including Alabama." Theodore Frank, an attorney representing the complainant, said the California laws are too far reaching and are designed to "force social media companies to censor and harass people." "I'm not familiar with the Alabama law.
On the other hand, the governor of Alabama hasn't threatened our client the way the governor of California did," he told The Associated Press. The lawsuit appears to be among the first legal challenges over such legislation in the U.S. Frank told the AP he is planning to file another lawsuit over similar laws in Minnesota. State lawmakers in more than a dozen states have advanced similar proposals after the emergence of AI began supercharging the threat of election disinformation worldwide. Among the three laws signed by Newsom on Tuesday, one takes effect immediately to prevent deepfakes surrounding the 2024 election and is the most sweeping in scope. It targets not only materials that could affect how people vote but also any videos and images that could misrepresent election integrity. The law also covers materials depicting election workers and voting machines, not just political candidates. The law makes it illegal to create and publish false materials related to elections 120 days before Election Day and 60 days thereafter. It also allows courts to stop the distribution of the materials, and violators could face civil penalties. The law exempts parody and satire. The goal, Newsom and lawmakers said, is to prevent the erosion of public trust in U.S. elections amid a "fraught political climate." But critics such as free speech advocates and Musk called the new California law unconstitutional and an infringement on the First Amendment. Hours after they were signed into law, Musk on Tuesday night elevated a post on X sharing an AI-generated video featuring altered audio of Harris. "The governor of California just made this parody video illegal in violation of the Constitution of the United States. Would be a shame if it went viral," Musk wrote of the AI-generated video, which has a caption identifying the video as a parody.
It is not clear how effective these laws are in stopping election deepfakes, said Ilana Beller of Public Citizen, a nonprofit consumer advocacy organization. The group tracks state legislation related to election deepfakes. None of the laws has been tested in a courtroom, Beller said. The laws' effectiveness could be blunted by the slowness of the courts against a technology that can produce fake images for political ads and disseminate them at warp speed. It could take several days for a court to order injunctive relief to stop the distribution of the content, and by then, damage to a candidate or to an election could already have been done, Beller said. "In an ideal world, we'd be able to take the content down the second it goes up," she said. "Because the sooner you can take down the content, the less people see it, the less people proliferate it through reposts and the like, and the quicker you're able to dispel it." Still, having such a law on the books could serve as a deterrent for potential violations, she said. Assemblymember Gail Pellerin declined to comment on the lawsuit, but said the law she authored is a simple tool to avoid misinformation. "What we're saying is, hey, just mark that video as digitally altered for parody purposes," Pellerin said. "And so it's very clear that it's for satire or for parody." Newsom on Tuesday also signed another law to require campaigns to start disclosing AI-generated materials starting next year, after the 2024 election.
[4]
California law cracking down on election deepfakes by AI to be tested
California now has some of the toughest laws in the United States to crack down on election deepfakes ahead of the 2024 election after Gov. Gavin Newsom signed three landmark proposals this week at an artificial intelligence conference in San Francisco. The state could be among the first to test out such legislation, which bans the use of AI to create false images and videos in political ads close to Election Day. State lawmakers in more than a dozen states have advanced similar proposals after the emergence of AI began supercharging the threat of election disinformation worldwide, with the new California law being the most sweeping in scope. It targets not only materials that could affect how people vote but also any videos and images that could misrepresent election integrity. The law also covers materials depicting election workers and voting machines, not just political candidates. Among the three laws signed by Newsom on Tuesday, only one takes effect immediately to prevent deepfakes surrounding the 2024 election. It makes it illegal to create and publish false materials related to elections 120 days before Election Day and 60 days thereafter. It also allows courts to stop the distribution of the materials, and violators could face civil penalties. The law exempts parody and satire. The goal, Newsom and lawmakers said, is to prevent the erosion of public trust in U.S. elections amid a "fraught political climate." The legislation is already drawing fierce criticism from free speech advocates and social media platform operators.
Elon Musk, owner of the social media platform X, called the new California law unconstitutional and an infringement on the First Amendment. Hours after they were signed into law, Musk on Tuesday night elevated a post on X sharing an AI-generated video featuring altered audio of Vice President and Democratic presidential nominee Kamala Harris. His post of another deepfake featuring Harris had prompted Newsom in July to vow to pass legislation cracking down on the practice. "The governor of California just made this parody video illegal in violation of the Constitution of the United States. Would be a shame if it went viral," Musk wrote of the AI-generated video, which has a caption identifying the video as a parody. But it's not clear how effective these laws are in stopping election deepfakes, said Ilana Beller of Public Citizen, a nonprofit consumer advocacy organization. The group tracks state legislation related to election deepfakes. None of the laws has been tested in a courtroom, Beller said. The laws' effectiveness could be blunted by the slowness of the courts against a technology that can produce fake images for political ads and disseminate them at warp speed. It could take several days for a court to order injunctive relief to stop the distribution of the content, and by then, damage to a candidate or to an election could already have been done, Beller said. "In an ideal world, we'd be able to take the content down the second it goes up," she said. "Because the sooner you can take down the content, the less people see it, the less people proliferate it through reposts and the like, and the quicker you're able to dispel it." Still, having such a law on the books could serve as a deterrent for potential violations, she said. Newsom's office didn't immediately respond to questions about whether Musk's post violated the new state law. Assemblymember Gail Pellerin, author of the law, wasn't immediately available Wednesday to comment.
Newsom on Tuesday also signed two other laws, built upon some of the first-in-the-nation legislation targeting election deepfakes enacted in California in 2019, to require campaigns to start disclosing AI-generated materials and mandate online platforms, like X, to remove the deceptive material. Those laws will take effect next year, after the 2024 election.
[5]
California Tackles AI Election Deepfakes
California has enacted some of the nation's strictest measures to combat the spread of deepfakes in elections ahead of the 2024 vote. Gov. Gavin Newsom signed a series of bills at an AI conference in San Francisco. New policies include a law targeting AI-generated fake political ads and materials that could mislead the electorate. This law, which took effect immediately, allows individuals to sue for damages if they have been harmed by deepfake content. It also empowers courts to order the removal of misleading AI-generated materials that misrepresent candidates, election processes, or even election workers. Gov. Newsom said these measures are vital for preserving public trust in elections at a time when AI technologies are advancing rapidly. They will also position the state at the forefront of addressing artificial intelligence's potential impact on election integrity. "This is about protecting democracy, ensuring that Californians get the truth, not manipulated fabrications that could sway how people vote," he said. However, the new legislation is already facing legal opposition. A lawsuit was filed in Sacramento by a political activist who had created parody videos featuring altered audio clips of Vice President Kamala Harris. This individual, whose work has been shared by Elon Musk, claims the new laws infringe on First Amendment rights. His complaint argues that the laws are too broad and could be used to censor free speech under the guise of regulating AI-generated content. "The governor of California just made this parody video illegal in violation of the Constitution of the United States," Musk wrote on X, formerly known as Twitter, referring to one of the parody videos shared on his platform. Musk has been one of Newsom's more vocal critics, notably mocking the governor's AI policies in a tweet referencing the satirical persona "Professor Suggon Deeznutz." 
State officials argue that the legislation does not target satire or parody, but rather deceptive materials that mislead voters without clear labeling that AI was involved. "This new disclosure law for election misinformation isn't any more onerous than laws already passed in other states," Newsom's spokesperson Izzy Gardon said in response to the lawsuit. Experts on both sides of the debate are watching California closely, as the state's approach could set a national precedent. Theodore Frank, the attorney representing the complainant, warned that the law could open the door for social media companies to "censor and harass people" based on subjective interpretations of AI-created content. Public Citizen, a consumer advocacy organization, tracks state laws on election deepfakes. Its representative, Ilana Beller, said that although California's new law has potential as a deterrent, its real-world effectiveness will depend on how quickly courts can act to stop the spread of misleading content. "In an ideal world, we'd be able to take the content down the second it goes up," she said. "Because the sooner you can take down the content, the less people see it, the less people proliferate it through reposts and the like, and the quicker you're able to dispel it."
[6]
Creator of fake Kamala Harris video Musk boosted sues Calif. over deepfake laws
Online influencer "Mr Reagan" accuses California of bullying humorists. After California passed laws cracking down on AI-generated deepfakes of election-related content, a popular conservative influencer promptly sued, accusing California of censoring protected speech, including satire and parody. In his complaint, Christopher Kohls -- who is known as "Mr Reagan" on YouTube and X (formerly Twitter) -- said that he was suing "to defend all Americans' right to satirize politicians." He claimed that California laws, AB 2655 and AB 2839, were urgently passed after X owner Elon Musk shared a partly AI-generated parody video on the social media platform that Kohls created to "lampoon" presidential hopeful Kamala Harris. AB 2655, known as the "Defending Democracy from Deepfake Deception Act," prohibits creating "with actual malice" any "materially deceptive audio or visual media of a candidate for elective office with the intent to injure the candidate's reputation or to deceive a voter into voting for or against the candidate, within 60 days of the election." It requires social media platforms to block or remove any reported deceptive material and label "certain additional content" deemed "inauthentic, fake, or false" to prevent election interference. The other law at issue, AB 2839, titled "Elections: deceptive media in advertisements," bans anyone from "knowingly distributing an advertisement or other election communication" with "malice" that "contains certain materially deceptive content" within 120 days of an election in California and, in some cases, within 60 days after an election. Both bills were signed into law on September 17, and Kohls filed his complaint that day, alleging that both must be permanently blocked as unconstitutional.

Elon Musk called out for boosting Kohls' video

Kohls' video that Musk shared seemingly would violate these laws by using AI to make Harris appear to give speeches that she never gave.
The manipulated audio sounds like Harris, who appears to be mocking herself as a "diversity hire" and claiming that any critics must be "sexist and racist." "Making fun of presidential candidates and other public figures is an American pastime," Kohls said, defending his parody video. He pointed to a long history of political cartoons and comedic impressions of politicians, claiming that "AI-generated commentary, though a new mode of speech, falls squarely within this tradition." While Kohls' post was clearly marked "parody" in the YouTube title and in his post on X, that "parody" label did not carry over when Musk re-posted the video. This lack of a parody label on Musk's post -- which got approximately 136 million views, roughly twice as many as Kohls' post -- set off California governor Gavin Newsom, who immediately blasted Musk's post and vowed on X to make content like Kohls' video "illegal." In response to Newsom, Musk poked fun at the governor, posting that "I checked with renowned world authority, Professor Suggon Deeznutz, and he said parody is legal in America." For his part, Kohls put up a second parody video targeting Harris, calling Newsom a "bully" in his complaint and claiming that he had to "punch back." Shortly after these online exchanges, California lawmakers allegedly rushed to back the governor, Kohls' complaint said. They allegedly amended the deepfake bills to ensure that Kohls' video would be banned when the bills were signed into law, replacing a broad exception for satire in one law with a narrower safe harbor that Kohls claimed would chill humorists everywhere. 
"For videos," his complaint said, disclaimers required under AB 2839 must "appear for the duration of the video" and "must be in a font size 'no smaller than the largest font size of other text appearing in the visual media.'" For a satirist like Kohls who uses large fonts to optimize videos for mobile, this "would require the disclaimer text to be so large that it could not fit on the screen," his complaint said. On top of seeming impractical, the disclaimers would "fundamentally" alter "the nature of his message" by removing the comedic effect for viewers by distracting from what allegedly makes the videos funny -- "the juxtaposition of over-the-top statements by the AI-generated 'narrator,' contrasted with the seemingly earnest style of the video as if it were a genuine campaign ad," Kohls' complaint alleged. Imagine watching Saturday Night Live with prominent disclaimers taking up your TV screen, his complaint suggested. It's possible that Kohls' concerns about AB 2839 are unwarranted. Newsom spokesperson Izzy Gardon told Politico that Kohls' parody label on X was good enough to clear him of liability under the law. "Requiring them to use the word 'parody' on the actual video avoids further misleading the public as the video is shared across the platform," Gardon said. "It's unclear why this conservative activist is suing California. This new disclosure law for election misinformation isn't any more onerous than laws already passed in other states, including Alabama."
California's recently enacted law targeting AI-generated deepfakes in elections is being put to the test, as Elon Musk's reposting of Kamala Harris parody videos sparks debate and potential legal challenges.
In a bold move to combat misinformation, California has implemented a new law aimed at regulating the use of artificial intelligence (AI) in creating deceptive content during elections. The legislation, which took effect immediately upon being signed in September 2024, requires clear disclosures on AI-generated content that could mislead voters about candidates or ballot measures [1].
The law's effectiveness is being put to the test as tech mogul Elon Musk has recently reposted parody videos of Vice President Kamala Harris on his social media platform, X (formerly Twitter). These videos, which use AI to manipulate Harris's voice and appearance, have sparked a heated debate about the boundaries of free speech and the potential for voter manipulation [2].
Critics of the law, including some legal experts, argue that it may infringe on First Amendment rights. They contend that parody and satire, even when created using AI, should be protected forms of political speech. The creator of the Harris parody videos has filed a lawsuit challenging the constitutionality of the law, setting the stage for a potential legal showdown [3].
The new legislation places significant responsibility on social media platforms to enforce the disclosure requirements. Companies like X, Facebook, and YouTube may need to implement new content moderation policies and technologies to comply with the law. This has raised questions about the practicality of enforcement and the potential impact on user-generated content [4].
As California often sets trends in tech regulation, other states are closely watching the implementation and legal challenges to this deepfake law. Federal lawmakers are also considering similar legislation to address the growing concern over AI-generated misinformation in elections. The outcome of this case could have far-reaching implications for how AI-generated content is regulated across the United States [5].
Proponents of the law argue that it is necessary to protect the integrity of elections in the age of advanced AI technology. They emphasize that the law does not ban deepfakes outright but simply requires transparency. However, critics worry that overly broad regulations could stifle innovation and legitimate uses of AI in political discourse [1].
As the legal battle unfolds, the tech industry, politicians, and voters alike are grappling with the complex challenge of maintaining free speech while safeguarding democratic processes from the potential misuse of AI technology. The resolution of this conflict will likely shape the future of political communication in the digital age.