© 2024 TheOutpost.AI All rights reserved
Curated by THEOUTPOST
On September 19, 2024
3 Sources
[1]
California law cracking down on election deepfakes by AI to be tested
California now has some of the toughest laws in the United States to crack down on election deepfakes ahead of the 2024 election after Gov. Gavin Newsom signed three landmark proposals this week at an artificial intelligence conference in San Francisco. The state could be among the first to test out such legislation, which bans the use of AI to create false images and videos in political ads close to Election Day. State lawmakers in more than a dozen states have advanced similar proposals after the emergence of AI began supercharging the threat of election disinformation worldwide, with the new California law being the most sweeping in scope. It targets not only materials that could affect how people vote but also any videos and images that could misrepresent election integrity. The law also covers materials depicting election workers and voting machines, not just political candidates. Among the three laws signed by Newsom on Tuesday, only one takes effect immediately to prevent deepfakes surrounding the 2024 election. It makes it illegal to create and publish false materials related to elections 120 days before Election Day and 60 days thereafter. It also allows courts to stop the distribution of the materials, and violators could face civil penalties. The law exempts parody and satire. The goal, Newsom and lawmakers said, is to prevent the erosion of public trust in U.S. elections amid a "fraught political climate." The legislation is already drawing fierce criticism from free speech advocates and social media platform operators.
Elon Musk, owner of the social media platform X, called the new California laws unconstitutional and an infringement on the First Amendment. Hours after the proposals were signed into law, Musk on Tuesday night elevated a post on X sharing an AI-generated video featuring altered audio of Vice President and Democratic presidential nominee Kamala Harris. In July, Musk's post of another deepfake featuring Harris prompted Newsom to vow to pass legislation cracking down on the practice. "The governor of California just made this parody video illegal in violation of the Constitution of the United States. Would be a shame if it went viral," Musk wrote of the AI-generated video, which has a caption identifying it as a parody. But it's not clear how effective these laws are in stopping election deepfakes, said Ilana Beller of Public Citizen, a nonprofit consumer advocacy organization. The group tracks state legislation related to election deepfakes. None of the laws have been tested in a courtroom, Beller said. The laws' effectiveness could be blunted by the slowness of the courts against a technology that can produce fake images for political ads and disseminate them at warp speed. It could take several days for a court to order injunctive relief to stop the distribution of the content, and by then, damage to a candidate or an election could already be done, Beller said. "In an ideal world, we'd be able to take the content down the second it goes up," she said. "Because the sooner you can take down the content, the less people see it, the less people proliferate it through reposts and the like, and the quicker you're able to dispel it." Still, having such a law on the books could serve as a deterrent for potential violations, she said. Newsom's office didn't immediately respond to questions about whether Musk's post violated the new state law. Assemblymember Gail Pellerin, author of the law, wasn't immediately available Wednesday to comment.
Newsom on Tuesday also signed two other laws, built upon some of the first-in-the-nation legislation targeting election deepfakes enacted in California in 2019, to require campaigns to start disclosing AI-generated materials and mandate online platforms, like X, to remove the deceptive material. Those laws will take effect next year, after the 2024 election.
[2]
California Tackles AI Election Deepfakes
California has enacted some of the nation's strictest measures to combat the spread of deepfakes in elections ahead of the 2024 vote. Gov. Gavin Newsom signed a series of bills at an AI conference in San Francisco. New policies include a law targeting AI-generated fake political ads and materials that could mislead the electorate. This law, which took effect immediately, allows individuals to sue for damages if they have been harmed by deepfake content. It also empowers courts to order the removal of misleading AI-generated materials that misrepresent candidates, election processes, or even election workers. Gov. Newsom said these measures are vital for preserving public trust in elections at a time when AI technologies are advancing rapidly. They will also position the state at the forefront of addressing artificial intelligence's potential impact on election integrity. "This is about protecting democracy, ensuring that Californians get the truth, not manipulated fabrications that could sway how people vote," he said. However, the new legislation is already facing legal opposition. A lawsuit was filed in Sacramento by a political activist who had created parody videos featuring altered audio clips of Vice President Kamala Harris. This individual, whose work has been shared by Elon Musk, claims the new laws infringe on First Amendment rights. His complaint argues that the laws are too broad and could be used to censor free speech under the guise of regulating AI-generated content. "The governor of California just made this parody video illegal in violation of the Constitution of the United States," Musk wrote on X, formerly known as Twitter, referring to one of the parody videos shared on his platform. Musk has been one of Newsom's more vocal critics, notably mocking the governor's AI policies in a tweet referencing the satirical persona "Professor Suggon Deeznutz." 
State officials argue that the legislation does not target satire or parody, but rather deceptive materials that mislead voters without clear labeling that AI was involved. "This new disclosure law for election misinformation isn't any more onerous than laws already passed in other states," Newsom's spokesperson Izzy Gardon said in response to the lawsuit. Experts on both sides of the debate are watching California closely, as the state's approach could set a national precedent. Theodore Frank, the attorney representing the complainant, warned that the law could open the door for social media companies to "censor and harass people" based on subjective interpretations of AI-created content. Public Citizen, a consumer advocacy organization, tracks state laws on election deepfakes. Its representative, Ilana Beller, said that although California's new law has potential as a deterrent, its real-world effectiveness will depend on how quickly courts can act to stop the spread of misleading content. "In an ideal world, we'd be able to take the content down the second it goes up," she said. "Because the sooner you can take down the content, the less people see it, the less people proliferate it through reposts and the like, and the quicker you're able to dispel it."
[3]
2 of California's 3 new deepfake laws are being challenged in court by creator of Kamala Harris parody videos
California now has some of the toughest laws in the United States to crack down on election deepfakes ahead of the 2024 election after Gov. Gavin Newsom signed three landmark proposals this week at an artificial intelligence conference in San Francisco. The state could be among the first to test out such legislation, which bans the use of AI to create and circulate false images and videos in political ads close to Election Day. But now, two of the three laws, including one that was designed to curb the practice in the 2024 election, are being challenged in court through a lawsuit filed Tuesday in Sacramento. One takes effect immediately and allows any individual to sue for damages over election deepfakes, while the other requires large online platforms, like X, to remove the deceptive material starting next year. The lawsuit, filed by a person who created parody videos featuring altered audio of Vice President and Democratic presidential nominee Kamala Harris, says the laws censor free speech and allow anybody to take legal action over content they dislike. At least one of his videos was shared by Elon Musk, owner of the social media platform X, which then prompted Newsom to vow, in a post on X, to ban such content. The governor's office said the law doesn't ban satire and parody content. Instead, it requires the disclosure of the use of AI to be displayed within the altered videos or images. "It's unclear why this conservative activist is suing California," Newsom spokesperson Izzy Gardon said in a statement. "This new disclosure law for election misinformation isn't any more onerous than laws already passed in other states, including Alabama." Theodore Frank, an attorney representing the complainant, said the California laws are too far reaching and are designed to "force social media companies to censor and harass people." "I'm not familiar with the Alabama law.
On the other hand, the governor of Alabama hasn't threatened our client the way the governor of California did," he told The Associated Press. The lawsuit appears to be among the first legal challenges over such legislation in the U.S. Frank told the AP he is planning to file another lawsuit over similar laws in Minnesota. State lawmakers in more than a dozen states have advanced similar proposals after the emergence of AI began supercharging the threat of election disinformation worldwide. Among the three laws signed by Newsom on Tuesday, one takes effect immediately to prevent deepfakes surrounding the 2024 election and is the most sweeping in scope. It targets not only materials that could affect how people vote but also any videos and images that could misrepresent election integrity. The law also covers materials depicting election workers and voting machines, not just political candidates. The law makes it illegal to create and publish false materials related to elections 120 days before Election Day and 60 days thereafter. It also allows courts to stop the distribution of the materials, and violators could face civil penalties. The law exempts parody and satire. The goal, Newsom and lawmakers said, is to prevent the erosion of public trust in U.S. elections amid a "fraught political climate." But critics such as free speech advocates and Musk called the new California laws unconstitutional and an infringement on the First Amendment. Hours after they were signed into law, Musk on Tuesday night elevated a post on X sharing an AI-generated video featuring altered audio of Harris. "The governor of California just made this parody video illegal in violation of the Constitution of the United States. Would be a shame if it went viral," Musk wrote of the AI-generated video, which has a caption identifying it as a parody.
It is not clear how effective these laws are in stopping election deepfakes, said Ilana Beller of Public Citizen, a nonprofit consumer advocacy organization. The group tracks state legislation related to election deepfakes. None of the laws have been tested in a courtroom, Beller said. The laws' effectiveness could be blunted by the slowness of the courts against a technology that can produce fake images for political ads and disseminate them at warp speed. It could take several days for a court to order injunctive relief to stop the distribution of the content, and by then, damage to a candidate or an election could already be done, Beller said. "In an ideal world, we'd be able to take the content down the second it goes up," she said. "Because the sooner you can take down the content, the less people see it, the less people proliferate it through reposts and the like, and the quicker you're able to dispel it." Still, having such a law on the books could serve as a deterrent for potential violations, she said. Assemblymember Gail Pellerin declined to comment on the lawsuit, but said the law she authored is a simple tool to avoid misinformation. "What we're saying is, hey, just mark that video as digitally altered for parody purposes," Pellerin said. "And so it's very clear that it's for satire or for parody." Newsom on Tuesday also signed another law to require campaigns to start disclosing AI-generated materials starting next year, after the 2024 election.
California's new law regulating AI-generated deepfakes in elections is set to be tested in court. The legislation aims to combat misinformation but faces opposition on First Amendment grounds.
In a bold move to combat the spread of misinformation, California has implemented new laws targeting AI-generated deepfakes in elections [1]. The legislation, signed in September 2024 with one provision taking effect immediately, requires clear disclosures on election-related audio and video content created using artificial intelligence. This pioneering effort aims to preserve the integrity of the democratic process in an era of rapidly advancing technology.
The law's effectiveness is now set to be tested in court, as it faces a significant legal challenge. A content creator who produces AI-generated parody videos of Vice President Kamala Harris has filed a lawsuit, arguing that the legislation infringes upon First Amendment rights [3]. This case highlights the delicate balance between combating disinformation and protecting free speech.
Under the new regulations, AI-generated content related to elections must include a clear and conspicuous disclosure stating that the material has been created or altered using artificial intelligence [2]. This requirement applies to various forms of media, including audio recordings, videos, and images that feature candidates or ballot measures.
The law's implementation comes at a crucial time, with the 2024 presidential election on the horizon. Political campaigns and content creators are now grappling with the implications of these regulations. Some argue that the law will help voters distinguish between authentic and manipulated content, while others fear it may stifle creative expression and political commentary.
California's initiative reflects a growing trend of attempts to regulate AI technology in various sectors. As artificial intelligence continues to evolve, lawmakers and tech experts are increasingly concerned about its potential to mislead voters and undermine democratic processes. This case could set a precedent for how other states and countries approach the regulation of AI-generated content in political contexts.
One of the primary challenges facing the implementation of this law is enforcement. With the vast amount of content circulating online, identifying and prosecuting violations may prove difficult. Additionally, the global nature of the internet raises questions about jurisdictional authority and the effectiveness of state-level regulations in a borderless digital landscape.
Social media companies are likely to play a crucial role in the enforcement of this law. Platforms like Facebook, Twitter, and YouTube may need to implement new systems to detect and label AI-generated content related to California elections. This could potentially lead to broader changes in how these platforms handle synthetic media across their global user base.