Curated by THEOUTPOST
On Wed, 18 Sept, 12:05 AM UTC
39 Sources
[1]
California governor signs laws to crack down on election deepfakes created by AI
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom signed three bills Tuesday to crack down on the use of artificial intelligence to create false images or videos in political ads ahead of the 2024 election.

A new law, set to take effect immediately, makes it illegal to create and publish deepfakes related to elections 120 days before Election Day and 60 days thereafter. It also allows courts to stop distribution of the materials and impose civil penalties.

"Safeguarding the integrity of elections is essential to democracy, and it's critical that we ensure AI is not deployed to undermine the public's trust through disinformation -- especially in today's fraught political climate," Newsom said in a statement. "These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI."

Large social media platforms are also required to remove the deceptive material under a first-in-the-nation law set to take effect next year. Newsom also signed a bill requiring political campaigns to publicly disclose if they are running ads with materials altered by AI.

The governor signed the bills to loud applause during a conversation with Salesforce CEO Marc Benioff at an event hosted by the major software company during its annual conference in San Francisco.

The new laws reaffirm California's position as a leader in regulating AI in the U.S., especially in combating election deepfakes. The state was the first in the U.S. to ban manipulated videos and pictures related to elections, in 2019. Measures on technology and AI proposed by California lawmakers have been used as blueprints by legislators across the country, industry experts said.

With AI supercharging the threat of election disinformation worldwide, lawmakers across the country have raced to address the issue over concerns that manipulated materials could erode the public's trust in what they see and hear.
"With fewer than 50 days until the general election, there is an urgent need to protect against misleading, digitally-altered content that can interfere with the election," Assemblymember Gail Pellerin, author of the law banning election deepfakes, said in a statement. "California is taking a stand against the manipulative use of deepfake technology to deceive voters."

Newsom's decision followed his vow in July to crack down on election deepfakes in response to a video posted by X owner Elon Musk featuring altered images of Vice President and Democratic presidential nominee Kamala Harris.

The new California laws came the same day that members of Congress unveiled federal legislation aiming to stop election deepfakes. The bill would give the Federal Election Commission the power to regulate the use of AI in elections in the same way it has regulated other political misrepresentation for decades. The FEC started considering such regulations after outlawing AI-generated robocalls aimed at discouraging voters in February.

Newsom has touted California as an early adopter as well as a regulator of AI, saying the state could soon deploy generative AI tools to address highway congestion and provide tax guidance, even as his administration considers new rules against AI discrimination in hiring practices.

He also signed two other bills Tuesday to protect Hollywood performers from unauthorized AI use without their consent.
[4]
California law cracking down on election deepfakes by AI to be tested
SACRAMENTO, Calif. -- California now has some of the toughest laws in the United States to crack down on election deepfakes ahead of the 2024 election after Gov. Gavin Newsom signed three landmark proposals this week at an artificial intelligence conference in San Francisco.

The state could be among the first to test out such legislation, which bans the use of AI to create false images and videos in political ads close to Election Day.

Lawmakers in more than a dozen states have advanced similar proposals after the emergence of AI began supercharging the threat of election disinformation worldwide, with the new California law the most sweeping in scope. It targets not only materials that could affect how people vote but also any videos and images that could misrepresent election integrity. The law also covers materials depicting election workers and voting machines, not just political candidates.

Among the three laws signed by Newsom on Tuesday, only one takes effect immediately to prevent deepfakes surrounding the 2024 election. It makes it illegal to create and publish false materials related to elections 120 days before Election Day and 60 days thereafter. It also allows courts to stop the distribution of the materials, and violators could face civil penalties. The law exempts parody and satire.

The goal, Newsom and lawmakers said, is to prevent the erosion of public trust in U.S. elections amid a "fraught political climate."

The legislation is already drawing fierce criticism from free speech advocates and social media platform operators. Elon Musk, owner of the social media platform X, called the new California law unconstitutional and an infringement on the First Amendment. Hours after the bills were signed into law, Musk on Tuesday night elevated a post on X sharing an AI-generated video featuring altered audio of Vice President and Democratic presidential nominee Kamala Harris.
Musk's post of another deepfake featuring Harris in July had prompted Newsom to vow to pass legislation cracking down on the practice.

"The governor of California just made this parody video illegal in violation of the Constitution of the United States. Would be a shame if it went viral," Musk wrote of the AI-generated video, which carries a caption identifying it as a parody.

But it's not clear how effective these laws are at stopping election deepfakes, said Ilana Beller of Public Citizen, a nonprofit consumer advocacy organization that tracks state legislation related to election deepfakes. None of the laws has been tested in a courtroom, Beller said.

The laws' effectiveness could be blunted by the slowness of the courts against a technology that can produce fake images for political ads and disseminate them at warp speed. It could take several days for a court to order injunctive relief to stop distribution of the content, and by then the damage to a candidate or an election could already be done, Beller said.

"In an ideal world, we'd be able to take the content down the second it goes up," she said. "Because the sooner you can take down the content, the less people see it, the less people proliferate it through reposts and the like, and the quicker you're able to dispel it."

Still, having such a law on the books could serve as a deterrent against potential violations, she said.

Newsom's office didn't immediately respond to questions about whether Musk's post violated the new state law. Assemblymember Gail Pellerin, author of the law, wasn't immediately available Wednesday to comment.

Newsom on Tuesday also signed two other laws, building on first-in-the-nation legislation targeting election deepfakes enacted in California in 2019, to require campaigns to disclose AI-generated materials and to mandate that online platforms, like X, remove the deceptive material. Those laws will take effect next year, after the 2024 election.
[9]
California law cracking down on election deepfakes by AI to be tested
SACRAMENTO, Calif. (AP) -- California now has some of the toughest laws in the United States to crack down on election deepfakes ahead of the 2024 election after Gov. Gavin Newsom signed three landmark proposals this week at an artificial intelligence conference in San Francisco. The state could be among the first to test out such legislation, which bans the use of AI to create false images and videos in political ads close to Election Day. State lawmakers in more than a dozen states have advanced similar proposals after the emergence of AI began supercharging the threat of election disinformation worldwide, with the new California law being the most sweeping in scope. It targets not only materials that could affect how people vote but also any videos and images that could misrepresent election integrity. The law also covers materials depicting election workers and voting machines, not just political candidates. Among the three law signed by Newsom on Tuesday, only one takes effect immediately to prevent deepfakes surrounding the 2024 election. It makes it illegal to create and publish false materials related to elections 120 days before Election Day and 60 days thereafter. It also allows courts to stop the distribution of the materials, and violators could face civil penalties. The law exempts parody and satire. The goal, Newsom and lawmakers said, is to prevent the erosion of public trust in U.S. elections amid a "fraught political climate." The legislation is already drawing fierce criticism from free speech advocates and social media platform operators. Elon Musk, owner of the social media platform X, called the new California law unconstitutional and an infringement on the First Amendment. Hours after they were signed into law, Musk on Tuesday night elevated a post on X sharing an AI-generated video featuring altered audios of Vice President and Democratic presidential nominee Kamala Harris. 
His post of another deepfake featuring Harris prompted Newsom to vow to pass legislation cracking down on the practice in July. "The governor of California just made this parody video illegal in violation of the Constitution of the United States. Would be a shame if it went viral," Musk wrote of the AI-generated video, which has the caption identifying the video as a parody. But it's not clear how effective these laws are in stopping election deepfakes, said Ilana Beller of Public Citizen, a nonprofit consumer advocacy organization. The group tracks state legislation related to election deepfakes. None of the law has been tested in a courtroom, Beller said. The law's effectiveness could be blunted by the slowness of the courts against a technology that can produce fake images for political ads and disseminate them at warp speed. It could take several days for a court to order injunctive relief to stop the distribution of the content, and by then, damages to a candidate or to an election could have been already done, Beller said. "In an ideal world, we'd be able to take the content down the second it goes up," she said. "Because the sooner you can take down the content, the less people see it, the less people proliferate it through reposts and the like, and the quicker you're able to dispel it." Still, having such a law on the books could serve as a deterrent for potential violations, she said. Newsom's office didn't immediately respond to questions about whether Musk's post violated the new state law. Assemblymember Gail Pellerin, author of the law, wasn't immediately available Wednesday to comment. Newsom on Tuesday also signed two other laws, built upon some of the first-in-the-nation legislation targeting election deepfakes enacted in California in 2019, to require campaigns to start disclosing AI-generated materials and mandate online platforms, like X, to remove the deceptive material. Those laws will take effect next year, after the 2024 election.
[10]
California Law Cracking Down on Election Deepfakes by AI to Be Tested
Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
[11]
California passes AI laws to curb election deepfakes, protect actors
The bills signed by Gov. Gavin Newsom (D) come amid concerns over the use of AI in the run-up to the presidential vote and its threat to actors' livelihoods. California Gov. Gavin Newsom (D) signed into law a raft of artificial intelligence bills Tuesday, aimed at curbing the effects of deepfakes during elections and protecting Hollywood performers from their likenesses being replicated by AI without their consent. There is growing worry about deepfakes circulating during the 2024 campaign, and concerns over Hollywood's use of artificial intelligence were a prominent part of last year's historic actors strike. California is home to "32 of the world's 50 leading AI companies, high-impact research and education institutions," according to Newsom's office, forcing his government to balance the public's welfare with the ambitions of a rapidly evolving industry. "Safeguarding the integrity of elections is essential to democracy, and it's critical that we ensure AI is not deployed to undermine the public's trust through disinformation -- especially in today's fraught political climate," Newsom said Tuesday in a statement. Among the measures is A.B. 2655, which requires large online platforms to remove or label deceptive, digitally altered or digitally created content related to elections during certain periods before and after they are held. He also signed A.B. 2839 -- which expands the time frame during which people and entities are prohibited from knowingly sharing election material containing deceptive AI-generated or manipulated content -- and A.B. 2355, which requires election advertisements to disclose whether they use AI-generated or substantially altered content. In July, after X owner Elon Musk retweeted an altered Kamala Harris campaign advertisement, Newsom wrote on social media that "manipulating a voice in an 'ad' like this one should be illegal" and committed to signing a bill "to make sure it is." 
Despite the bills signed Tuesday, it's still unclear whether Newsom will sign or veto S.B. 1047, which aims to make AI companies liable if their technology is used for harm and is fiercely opposed by most of the tech industry. Venture capitalists and start-up founders say it would stifle innovation as developers worry about unforeseen uses of AI technology that they build. The bill's author, Sen. Scott Wiener (D), says it simply seeks to formalize commitments that AI companies have made about trying to keep their tech from being used for ill. The new laws also include two measures for actors and performers that Newsom said will ensure the industry "can continue thriving while strengthening protections for workers and how their likeness can or cannot be used." A.B. 2602 requires contracts to specify how AI-generated replicas of a performer's voice or likeness will be used. A.B. 1836 prohibits commercial use of digital replicas of deceased performers without the consent of their estates. The use of AI in entertainment -- whether through the consensual replication of performances like James Earl Jones's Darth Vader voice or the warnings from several celebrities about AI-altered images of them circulating online without their consent -- is hotly debated. Last year, the actors union SAG-AFTRA secured a contract with safeguards against AI, including a requirement that actors give studios "informed consent" and receive "fair compensation" for the creation of digital replicas, The Washington Post reported. Union President Fran Drescher praised the bills in a Tuesday statement for expanding on AI protections that actors "fought so hard for last year" and thanked Newsom for "recognizing that performers matter, and their contributions have value." Duncan Crabtree-Ireland, SAG-AFTRA's national executive director, added: "No one should live in fear of becoming someone else's unpaid digital puppet."
[12]
California governor Gavin Newsom signs laws to crack down on election deepfakes created by AI
California Governor Gavin Newsom signed three bills Tuesday to crack down on the use of artificial intelligence to create false images or videos in political ads ahead of the 2024 election. A new law, set to take effect immediately, makes it illegal to create and publish deepfakes related to elections 120 days before Election Day and 60 days thereafter. It also allows courts to stop distribution of the materials and impose civil penalties. "Safeguarding the integrity of elections is essential to democracy, and it's critical that we ensure AI is not deployed to undermine the public's trust through disinformation -- especially in today's fraught political climate," Newsom said in a statement. "These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI." Large social media platforms are also required to remove the deceptive material under a first-in-the-nation law set to be enacted next year. Newsom also signed a bill requiring political campaigns to publicly disclose if they are running ads with materials altered by AI. The governor signed the bills to loud applause during a conversation with Salesforce CEO Marc Benioff at an event hosted by the major software company during its annual conference in San Francisco. The new laws reaffirm California's position as a leader in regulating AI in the U.S., especially in combating election deepfakes. The state was the first in the U.S. to ban manipulated videos and pictures related to elections in 2019. Measures in technology and AI proposed by California lawmakers have been used as blueprints for legislators across the country, industry experts said. 
With AI supercharging the threat of election disinformation worldwide, lawmakers across the country have raced to address the issue over concerns the manipulated materials could erode the public's trust in what they see and hear. "With fewer than 50 days until the general election, there is an urgent need to protect against misleading, digitally-altered content that can interfere with the election," Assemblymember Gail Pellerin, author of the law banning election deepfakes, said in a statement. "California is taking a stand against the manipulative use of deepfake technology to deceive voters." Newsom's decision followed his vow in July to crack down on election deepfakes in response to a video posted by X-owner Elon Musk featuring altered images of Vice President and Democratic presidential nominee Kamala Harris. The new California laws come the same day as members of Congress unveiled federal legislation aiming to stop election deepfakes. The bill would give the Federal Election Commission the power to regulate the use of AI in elections in the same way it has regulated other political misrepresentation for decades. The FEC has started to consider such regulations after outlawing AI-generated robocalls aimed at discouraging voters in February. Newsom has touted California as an early adopter as well as regulator of AI, saying the state could soon deploy generative AI tools to address highway congestion and provide tax guidance, even as his administration considers new rules against AI discrimination in hiring practices. He also signed two other bills Tuesday to protect Hollywood performers from AI use of their likenesses without their consent. Published - September 18, 2024 01:51 pm IST
[13]
California Gov. Gavin Newsom signs deepfake laws on elections and entertainment
California is getting a series of new laws that crack down on AI deepfakes in the contexts of elections and entertainment. But the fate of the state's most momentous AI bill to date is yet to be determined. Governor Gavin Newsom signed five AI-related bills on Tuesday, placing new responsibilities on big online platforms like Facebook and X, and limiting how studios can exploit the likenesses and voices of performers. The three bills that deal with elections build on a separate law that Newsom signed five years ago, making it illegal to maliciously distribute deceptive audio or visual media that try to discredit a candidate in the immediate run-up to an election. One of the new bills expands the timeframe specified in that law from 60 days to 120 days before an election. (Also in 2019, Newsom signed a bill giving people the ability to sue those who make or share sexual deepfakes depicting them without their consent.) Another of the new bills, known as the Defending Democracy from Deepfake Deception Act, forces large online platforms to block users from posting "materially deceptive" election-related content as Californians prepare to cast their vote -- meaning content that tries to depict a candidate, elected official, or election official saying or doing something that they didn't really say or do. "Advances in AI over the last few years make it easy to generate hyper-realistic yet completely fake election-related deepfakes, but [the new law] will ensure that online platforms minimize their impact," said Assemblymember Marc Berman (D-Menlo Park), who proposed the bill. The third of the election-related bills covers electoral ads, ensuring that any AI-generated or "substantially altered" content comes with a disclosure. This year's momentous election has already featured some misleading AI content, most notably deepfakes distributed by presidential candidate Donald Trump that falsely depicted megastar Taylor Swift and her fans as supporting him. 
That incident prompted Swift to publicly endorse Trump's rival, Vice President Kamala Harris. Trump has also shared AI-generated images that purported to demonstrate his support among Black voters, and that depicted someone resembling Harris addressing a gathering of communists. The latter example would likely be the sort of thing that would be covered by California's new laws, as would faked audio posted by X owner Elon Musk that had Harris saying she was the "ultimate diversity hire." There are as yet no federal laws covering election deepfakes, but there are already state-level laws covering the subject -- with varying degrees of strength -- in 20 other states, from Washington and New York to Texas and Florida. California's efforts are particularly notable because of the state's large population, and the fact that big online companies such as Meta are located in it. Newsom has clashed with Musk over California's efforts, and the tycoon responded to Newsom's signing of the laws by claiming that the governor had made parody illegal. California is of course also the traditional home of the U.S.'s movie industry, and the entertainment-related laws that Newsom just signed are a big win for SAG-AFTRA, the media professionals' union. One ensures that performers and actors can't find their voices or likenesses being replicated by AI without their permission -- all contracts will have to include terms about this, with the performer getting their say during negotiations. The other deals with digital replicas of deceased performers, ensuring that these can't be commercially used without the consent of their estates. "It is a momentous day for SAG-AFTRA members and everyone else because the AI protections we fought so hard for last year are now expanded upon by California law thanks to the legislature and Governor Gavin Newsom," said union president Fran Drescher, who is best known for her roles in The Nanny and This Is Spinal Tap. 
Newsom said the new laws would allow California's iconic entertainment industry to "continue thriving while strengthening protections for workers." The governor said yesterday that there were three dozen AI-related bills awaiting his signature. But the most momentous would be SB 1047, a pivotal AI safety bill that would force AI companies to ensure that their models can't be used to cause "critical harms" like biological attacks or huge crimes. This has caused furious debate in the AI community, with some, such as OpenAI and "Godmother of AI" Fei-Fei Li, saying it would harm the sector in the U.S., and others, such as Musk and Anthropic, calling for its passage. Also on Tuesday, Newsom said at a Salesforce conference that SB 1047 could have an "outsized impact" and perhaps even a "chilling effect" on the open-source AI community. "I can't solve for everything," he said, indicating that he isn't done assessing the bill's balance between tackling demonstrable and potential risks.
[14]
California Signs Law to Make Political Deepfakes Illegal Ahead of 2024 Election
California passed a new law that makes it illegal to create deepfakes related to the 2024 election -- the toughest law on political AI-generated content in the U.S. yet. On Tuesday, California Governor Gavin Newsom signed three bills to crack down on the use of AI to create false images or videos in political ads ahead of the 2024 election. A new law, set to take effect immediately, makes it illegal to create and publish deepfakes related to elections 120 days before Election Day and 60 days thereafter. It also allows courts to stop distribution of the materials and impose civil penalties. "Safeguarding the integrity of elections is essential to democracy, and it's critical that we ensure AI is not deployed to undermine the public's trust through disinformation -- especially in today's fraught political climate," Newsom says in a statement. "These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI." Under a second first-in-the-nation law set to be enacted in January 2025, large social media platforms and other websites with more than one million users in California will be required to label or remove AI deepfakes within 72 hours after receiving a complaint. If the website does not take action, a court can require it to do so. Governor Newsom also signed a bill that will require political campaigns to publicly disclose if they are running ads with materials altered by AI. The legislation requires labels to appear on deceptive audio, video, or images in political advertisements when they are generated with help from AI tools. California's legislation could serve as a guide for regulators nationwide seeking to curb the spread of AI-driven manipulative content in the U.S. However, the new laws are likely to face legal challenges from social media companies or free speech advocacy groups. 
Governor Newsom's aggressive new laws come after he condemned Elon Musk, the owner of X (the platform formerly known as Twitter), for sharing a misleading AI-generated video of Vice President Kamala Harris in July. After Newsom signed the bills yesterday, Musk criticized the California governor in a series of posts on X. He called for "new leadership" in California and urged his followers to make the Harris deepfake video go "viral."
[15]
New California Laws Could Kill Election Deepfakes on Your Social Media Feeds
California Governor Gavin Newsom signed three bills this week aimed at limiting the spread of election-related deepfakes on social media. "Safeguarding the integrity of elections is essential to democracy, and it's critical that we ensure AI is not deployed to undermine the public's trust through disinformation - especially in today's fraught political climate," says Newsom.
Bill 1: You Can't Share Deceptive, AI-Generated Posts Near an Election
Only one, AB 2839, takes effect before the 2024 presidential election. The "urgency measure" expands an existing law that prevents people or groups from "knowingly distributing" deceptive, AI-generated election materials. It also allows election officials, candidates, and others to sue to prevent the distribution of such material. The existing law prevents distributing materials within 60 days of an election. The new provision prohibits it within 120 days of an election in California and, in certain cases, 60 days after an election.
Bill 2: Political Campaigns Must Disclose AI Advertisements
The other two measures go into effect in January 2025. The first, AB 2355, requires candidates to disclose when their campaign ads use AI-generated, or "substantially altered," content. Donald Trump has favored AI-generated images on his social media feeds. Four days after X unveiled an AI image generator that readily creates outrageous images of political candidates, Trump posted an AI-generated image of Vice President Kamala Harris on stage at the Democratic National Convention under a hammer and sickle flag with the word "Chicago" lit up in red over the crowd. The post did not include a disclosure that it was AI-generated. California is the first state to include AI in its campaign transparency rules, says Assemblymember Wendy Carrillo, who sponsored the bill. 
"As these technologies become more accessible and are used in political campaigns, their impact on democracy requires urgent action," says Carrillo. "Free speech and political expression are a cornerstone of our democracy, but we cannot lose sight of our humanity amid the advancement of artificial intelligence."
Bill 3: Social Media Companies Must Label, Remove AI Deepfakes
A third law, AB 2655, which also takes effect in January, requires large social media platforms to label or remove AI deepfakes within 72 hours after receiving a complaint. The site must create mechanisms to receive those complaints. If the site does not address the complaints, then candidates, election officials, and government officials can require it to do so with legal action. The Defending Democracy from Deepfake Deception Act of 2024 does not apply to satire or parody posts, and only applies to specified periods of time, such as leading up to an election. AB 2655 could help standardize content moderation across social platforms, rather than leaving it to each site to decide what stays and goes. In February, a fake video of President Joe Biden spread on Facebook, which Meta's oversight board decided to keep up. Another one of VP Harris made the rounds on X in July, thanks in part to owner Elon Musk retweeting it to his 198 million followers. "AI-generated deepfakes pose a clear and present risk to our elections and our democracy," says Assemblymember Marc Berman, who sponsored the bill. "AB 2655 is a first-in-the-nation solution to this growing threat. The new law is a win for California's voters, and for our democracy." California seeks to set an example for the nation's AI regulations with these laws, but it's not alone. Twenty-four states have either already passed election-related deepfake laws, or are awaiting signature on them, according to The New York Times, citing data from Public Citizen. 
Hollywood Gets Its AI Bill, Elon Musk May Not
Also this week, Newsom signed two bills protecting Hollywood actors from having their digital likenesses reproduced by AI-generated audio and visual productions, including performers who are deceased. "It is a momentous day for SAG-AFTRA members and everyone else because the AI protections we fought so hard for last year are now expanded upon by California law thanks to the legislature and Governor Gavin Newsom," says SAG-AFTRA President Fran Drescher. "They say as California goes, so goes the nation!" But one final, controversial bill remains on Newsom's desk, which has generated debate among US lawmakers and big tech companies alike. SB 1047 would introduce more safety and transparency requirements for large AI systems like ChatGPT. It has the support of Elon Musk and Anthropic CEO Dario Amodei, who say it's needed to hold large systems accountable and prevent an AI catastrophe. Others, such as Nancy Pelosi, say it will stifle competition. The bill passed a final state assembly vote and awaits Newsom's veto or approval by Sept. 30. This week, he expressed concerns about the "chilling effect" the bill could have on the industry, as first reported by Bloomberg.
[16]
California's 5 new AI laws crack down on election deepfakes and actor clones | TechCrunch
On Tuesday, California Governor Gavin Newsom signed some of America's toughest laws yet regulating the artificial intelligence sector. Three of these laws crack down on AI deepfakes that could influence elections, while two others prohibit Hollywood studios from creating an AI clone of an actor's body or voice without their consent. "Home to the majority of the world's leading AI companies, California is working to harness these transformative technologies to help address pressing challenges while studying the risks they present," said Governor Newsom's office in a press release Tuesday. One of California's new laws, AB 2655, requires large online platforms, like Facebook and X, to remove or label AI deepfakes related to elections, as well as create channels to report such content. Candidates and elected officials can seek injunctive relief if a large online platform is not complying with the act. Another law, AB 2355, requires disclosure of AI-generated content in political advertisements. That means moving forward, Trump may not be able to get away with posting AI deepfakes of Taylor Swift endorsing him on Truth Social (she endorsed Kamala Harris). The FCC has proposed a similar disclosure requirement at a national level, and has already made robocalls using AI-generated voices illegal. The last two AI laws signed on Tuesday - which the nation's largest film and broadcast actors union, SAG-AFTRA, was pushing for - create new standards for California's media industry. AB 2602 requires studios to obtain permission from an actor before creating an AI-generated replica of their voice or likeness. Meanwhile, AB 1836 prohibits studios from creating digital replicas of deceased performers without consent from their estates. (Legally cleared replicas were used in recent Alien, Star Wars, and other films, for instance.) 
Governor Newsom is currently considering several AI-related bills, including the highly contentious SB 1047, which California's Senate has sent to his desk for final approval. During a chat with Salesforce CEO Marc Benioff on Tuesday, Newsom may have tipped his hand, reportedly echoing concerns from SB 1047's opponents that the bill could have a chilling effect on the open source community. He has two weeks to sign or veto that bill.
[17]
Gavin Newsom Signs Laws Aimed At Curbing Deepfakes Around Elections: 'Critical That We ensure AI Is Not Deployed To Undermine The Public's Trust'
California governor Gavin Newsom (D-Calif.) signed three bills on Tuesday aimed at curbing the use of artificial intelligence in creating misleading images or videos in political advertisements. What Happened: The new laws will immediately make it illegal to create and distribute deepfakes related to elections 120 days before Election Day and 60 days thereafter, reported the Associated Press. Courts will have the power to halt the distribution of such materials and impose civil penalties. "Safeguarding the integrity of elections is essential to democracy, and it's critical that we ensure AI is not deployed to undermine the public's trust through disinformation -- especially in today's fraught political climate," Newsom stated. Large social media platforms like Elon Musk's X, Meta's Facebook and Instagram, and ByteDance-owned TikTok will be required to remove deceptive material. Political campaigns will also have to publicly disclose if they are running ads with materials altered by AI. The governor signed the bills during a conversation with Salesforce CEO Marc Benioff at an event hosted by the major software company during its annual conference in San Francisco. Why It Matters: The new laws were enacted on the same day as members of Congress unveiled federal legislation aiming to stop election deepfakes. The bill would give the Federal Election Commission the power to regulate the use of AI in elections, the report noted. The misuse of AI in creating deepfakes has been a growing concern. Previously, a study conducted by Google's DeepMind revealed that deepfakes of politicians and celebrities were more common than AI-assisted cyber attacks. 
Earlier this year, AI image creation tools from ChatGPT-parent OpenAI and Microsoft were reported to be fueling election misinformation scandals. In January 2024, deepfake attacks on public figures, including Taylor Swift and President Joe Biden, also alarmed the White House. The U.K. was also warned of AI misinformation targeting its 2024 polls.
[18]
California guv signs laws to crack down on AI-created election deepfakes
A new law, set to take effect immediately, makes it illegal to create and publish deepfakes related to elections 120 days before Election Day and 60 days thereafter. It also allows courts to stop distribution of the materials and impose civil penalties. "Safeguarding the integrity of elections is essential to democracy, and it's critical that we ensure AI is not deployed to undermine the public's trust through disinformation - especially in today's fraught political climate," Newsom said in a statement. "These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI." Large social media platforms are also required to remove the deceptive material under a first-in-the-nation law set to be enacted next year. Newsom also signed a bill requiring political campaigns to publicly disclose if they are running ads with materials altered by AI.
[19]
California Passes Election 'Deepfake' Laws, Forcing Social Media Companies to Take Action
Stuart Thompson writes about false and misleading information circulating online, including A.I.-generated deepfakes. California will now require social media companies to moderate the spread of election-related impersonations powered by artificial intelligence, known as "deepfakes," after Gov. Gavin Newsom, a Democrat, signed three new laws on the subject Tuesday. The three laws, including a first-of-its kind law that imposes a new requirement on social media platforms, largely deal with banning or labeling the deepfakes. Only one of the laws will take effect in time to affect the 2024 presidential election, but the trio could offer a road map for regulators across the country who are attempting to slow the spread of the manipulative content powered by artificial intelligence. The laws are expected to face legal challenges from social media companies or groups focusing on free speech rights. Deepfakes use A.I. tools to create lifelike images, videos or audio clips resembling actual people. Though the technology has been used to create jokes and artwork, it has also been widely adopted to supercharge scams, create non-consensual pornography and disseminate political misinformation. Elon Musk, the owner of X, has posted a deepfake to his account this year that would have run afoul of the new laws, experts said. In one video viewed millions of times, Mr. Musk posted fake audio of Vice President Kamala Harris, the Democratic nominee, calling herself the "ultimate diversity hire." California's new laws add to efforts in dozens of states to limit the spread of the A.I. fakes around elections and sexual content. Many have required labels on deceptive audio or visual media, part of a surge in regulation that has received wide bipartisan support. Some have regulated election-related deepfakes, but most are focused on deepfake pornography. There is no federal law that bans or even regulates deepfakes, though several have been proposed. 
California policymakers have taken an intense interest in regulating A.I., including with a new piece of legislation that would require tech companies to test the safety of powerful A.I. tools before releasing them to the public. The governor has until Sept. 30 to sign or veto that legislation. Two of the laws signed Tuesday place limits on how election-related deepfakes -- including those targeting candidates and officials or those questioning the outcome of an election -- can circulate. One takes effect immediately and effectively bans people or groups from knowingly sharing certain deceptive election-related deepfakes. It is enforceable for 120 days before an election, similar to laws in other states, but goes further by remaining enforceable for 60 days after -- a sign that lawmakers are concerned about misinformation spreading as votes are being tabulated. The other will go into effect in January and requires labels to appear on deceptive audio, video or images in political advertisements when they are generated with help from A.I. tools. The third law, known as the "Defending Democracy from Deepfake Deception Act," will go into effect in January and require social media platforms and other websites with more than 1 million users in California to label or remove A.I. deepfakes within 72 hours after receiving a complaint. If the website does not take action, a court can require it to do so. "It's very different from other bills that have been put forth," said Ilana Beller, an organizing manager for the democracy team at Public Citizen, which has tracked deepfake laws nationwide. "This is the only bill of its kind on a state level." All three apply only to deepfakes that could deceive voters, leaving the door open for satire or parody -- so long as they are labeled -- and would be effectively limited to the period surrounding an election.
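The enforcement window described above -- 120 days before an election through 60 days after it -- is simple to express in code. This is a minimal sketch using a hypothetical helper name, not the statute's actual legal test:

```python
from datetime import date, timedelta

def in_enforcement_window(check: date, election: date,
                          days_before: int = 120, days_after: int = 60) -> bool:
    """Return True if `check` falls inside the enforcement window:
    120 days before the election through 60 days after it (inclusive)."""
    start = election - timedelta(days=days_before)
    end = election + timedelta(days=days_after)
    return start <= check <= end

# Example: the Nov. 5, 2024 general election.
# The window runs from July 8, 2024 through Jan. 4, 2025.
election_day = date(2024, 11, 5)
print(in_enforcement_window(date(2024, 10, 1), election_day))  # True
print(in_enforcement_window(date(2025, 2, 1), election_day))   # False
```

The 60-day tail after Election Day is what distinguishes California's law from most other states' versions, which stop at the election itself.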
Though the laws only apply to California, they govern deepfakes depicting presidential and vice-presidential candidates along with scores of statewide candidates, elected officials and election administrators. Gov. Newsom also signed two other laws Tuesday governing how Hollywood uses deepfake technology: one requiring explicit consent to use deepfakes of performers, and another requiring an estate's permission to depict deceased performers in commercial media like movies or audiobooks. Lawmakers have generally not passed laws that govern how social media companies moderate content because of a federal law, known as Section 230, that protects the companies from liability over content posted by users. The First Amendment also offers wide protections to social media companies and users, limiting how governments can regulate what is said online. "They're really asking platforms to do things we don't think are feasible," said Hayley Tsukayama, the associate director of legislative activism at the Electronic Frontier Foundation, a digital rights group in San Francisco, which wrote letters opposing the new laws. "To say that they're going to be able to identify what is really deceptive speech, and what is satire, or what is First Amendment protected speech is going to be really hard." The law's supporters have argued that because it imposes no financial penalties on companies for failing to follow the law, Section 230 may not apply. A number of free speech and digital rights groups, including the First Amendment Coalition, have strenuously opposed the laws. "Some people may, of course, disseminate a falsehood -- that's a problem as old as politics, as old as democracy, as old as speech," said David Loy, the legal director for the First Amendment Coalition. "The premise of the First Amendment is that it's for the press and public and civil society to sort that out."
[20]
California laws target deepfake political ads, disinformation
In a step that could have broad implications for future elections in the U.S., California Governor Gavin Newsom this week signed three pieces of legislation restricting the role that artificial intelligence, specifically deepfake audio and video recordings, can play in election campaigns. One law, which took effect immediately, makes it illegal to distribute "materially deceptive audio or visual media of a candidate" in the 120 days leading up to an election and in the 60 days following an election. Another law requires that election-related advertisements using AI-manipulated content provide a disclosure alerting viewers or listeners to that fact. The third law requires that large online platforms take steps to block the posting of "materially deceptive content related to elections in California," and that they remove any such material within 72 hours of being notified of its presence. "Safeguarding the integrity of elections is essential to democracy, and it's critical that we ensure AI is not deployed to undermine the public's trust through disinformation -- especially in today's fraught political climate," Newsom said in a statement. "These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI." While California is not the only state with laws regulating the use of deepfakes in political ads, the extension of the ban to 60 days following the election is unique and may be copied by other states. Over the years, California has often been a bellwether for future state laws.
Tech titan opposition
Social media platforms and free speech advocates are expected to challenge the laws, asserting that they infringe on the First Amendment's protection of freedom of expression.
One high-profile opponent of the measures is Elon Musk, billionaire owner of the social media platform X, who has been aggressively using his platform to voice his support of Republican presidential nominee Donald Trump. In July, Musk shared a video that used deepfake technology to impersonate the voice of Vice President Kamala Harris. In the video, the cloned voice describes Harris as a "deep state puppet" and the "ultimate diversity hire." On Tuesday, after Newsom signed the new laws, Musk once again posted the video, writing, "The governor of California just made this parody video illegal in violation of the Constitution of the United States. Would be a shame if it went viral."
Federal action considered
Most of the legislative efforts to regulate AI in politics have, so far, been happening at the state level. This week, however, a bipartisan group of lawmakers in Congress proposed a measure that would authorize the Federal Election Commission to oversee the use of AI by political campaigns. Specifically, it would allow the agency to prohibit campaigns from using deepfake technology to make it appear that a rival said or did something they did not actually say or do. During an appearance at an event sponsored by Politico this week, Deputy U.S. Attorney General Lisa Monaco said there was a clear need for rules of the road governing the use of AI in political campaigns, and she expressed her confidence that Congress would act. While AI promises many benefits, it is also "lowering the barrier to entry for all sorts of malicious actors," she said. "There will be changes in law, I'm confident, over time," she added.
Minimal role in campaign so far
Heading into the 2024 presidential campaign, there was widespread concern that out-of-control use of deepfake technology would swamp voters in huge amounts of misleading content. That hasn't really happened, said PolitiFact editor-in-chief Katie Sanders. "It has not turned out the way many people feared," she told VOA.
"I don't know that it's entirely good news, because there's still plenty of misinformation being shared in political ads. It's just not generated by artificial intelligence. It's really relying on the same tricks of exaggerating where your opponent stands or clipping things out of context." Sanders said that campaigns might be reluctant to make use of deepfake technology because voters "are distrustful of AI." "Where the deepfake material that does exist is coming from is smaller accounts, anonymous accounts, and is sometimes catching enough fire to be shared by people who are considered elites on political platforms," she said.
[21]
Gavin Newsom targets AI deepfakes with new law: Why Elon Musk and others think it's a really bad idea - Times of India
California Governor Gavin Newsom signed a series of bills on Tuesday banning political "deepfakes" and other misleading digitally generated audio and visual content created with artificial intelligence, fulfilling his promise to take action after criticising Elon Musk, CEO of X (formerly Twitter), for sharing a doctored video of Vice President Kamala Harris. "I could care less if it was Harris or Trump," Newsom said during a conversation with Salesforce CEO Marc Benioff earlier the same day. "It was just wrong on every level," Politico reported. Elon Musk, in response to Newsom's signing of the laws, claimed that the governor had effectively made parody illegal, in violation of the Constitution of the United States. He shared a video featuring a satirical depiction of Harris, mocking her qualifications and making sarcastic comments about diversity hiring and political competence. The video imitates Harris's voice, but instead of using her actual words from the original ad, it falsely portrays her saying that President Biden is senile and that she doesn't "know the first thing about running the country." The AI-generated video included statements such as, "I was selected because I am the ultimate diversity hire. If you criticise anything I say, you're both sexist and racist," taking a jab at Harris. Musk also reposted a viral post sharing the same video of Harris, stating, "The video that offended California Governor Gavin Newsom is now trending on X." "Like when Streisand sued someone for exposing her very obvious Malibu address. That really kept it under wraps, lmao," Musk added. Trump has also shared AI-generated images claiming to show his support among Black voters, along with another image depicting a figure resembling Kamala Harris addressing a group of communists, Fortune reported.
"Safeguarding the integrity of elections is essential to democracy, and it's critical that we ensure AI is not deployed to undermine the public's trust through disinformation - especially in today's fraught political climate," Newsom said in a statement. "These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI," he added.
What are deepfakes
Deepfakes are "highly realistic and difficult-to-detect digital manipulations of audio or video." The term is a blend of "deep learning" and "fakes," reflecting the combination of advanced machine learning techniques used to create convincing yet deceptive digital content.
What are the 3 bills
Under Assembly Bill 2355, political advertisements will be required to disclose whether they have utilised generative AI technology to create or modify any materials featured in the ad. This measure aims to ensure that voters are aware of the use of AI in political messaging. Assembly Bill 2655 places the responsibility on large online platforms to remove or label deceptive election content within 72 hours of receiving a report from a user. Failure to comply with this requirement will result in liability for the platform. "AI-generated deepfakes pose a clear and present risk to our elections and our democracy. AB 2655 is a first-in-the-nation solution to this growing threat, and I am grateful to Governor Newsom for signing it. Advances in AI over the last few years make it easy to generate hyper-realistic yet completely fake election-related deepfakes, but AB 2655 will ensure that online platforms minimize their impact. The new law is a win for California's voters, and for our democracy," Assemblymember Marc Berman, who proposed the bill, said in the statement.
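AB 2655's 72-hour clock starts when the platform receives a user report. A hypothetical moderation queue might track the statutory deadline with a sketch like this (the function names are illustrative, not from any real compliance system):

```python
from datetime import datetime, timedelta

# AB 2655 gives large platforms 72 hours after receiving a report
# to label or remove the reported content.
COMPLIANCE_WINDOW = timedelta(hours=72)

def compliance_deadline(reported_at: datetime) -> datetime:
    """Deadline by which the platform must label or remove the content."""
    return reported_at + COMPLIANCE_WINDOW

def is_overdue(reported_at: datetime, now: datetime) -> bool:
    """True once the 72-hour window has elapsed without action."""
    return now > compliance_deadline(reported_at)

# A report received Jan. 10 at 9:00 must be handled by Jan. 13 at 9:00.
report = datetime(2025, 1, 10, 9, 0)
print(compliance_deadline(report))                       # 2025-01-13 09:00:00
print(is_overdue(report, datetime(2025, 1, 14, 0, 0)))   # True
```

In practice, a platform would also need to record what action was taken (label vs. removal), since the law accepts either.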
Lastly, Assembly Bill 2839 specifically targets individuals who create or publish deceptive content about candidates and election workers using AI technology. The bill prohibits such actions within the 120 days before and 60 days after an election. If found in violation, a judge can order the removal of the content and impose a fine.
Why is the ban important
These measures are designed to enhance transparency and accountability in the digital realm, particularly during election periods. Titled the Defending Democracy from Deepfake Deception Act, the recently introduced legislation mandates that major online platforms prevent users from sharing "materially deceptive" content related to elections as Californians gear up to exercise their right to vote. The rapid advancement of AI has raised various apprehensions, including its potential to interfere with democratic processes, amplify fraudulent activities, and lead to job losses. These concerns have prompted lawmakers to take action and implement measures to counter the negative impacts of AI. The Biden administration, led by the Democratic president, has advocated for the regulation of artificial intelligence. However, the divided Congress, with the House of Representatives controlled by Republicans and the Senate by Democrats, has struggled to make significant progress in implementing effective AI legislation. California is among the first states in the country to establish legal protections for performers against AI technology. Tennessee, renowned as the birthplace of country music and the launching point for many musical icons, took the lead in March by passing legislation to shield musicians and artists from AI misuse.
[22]
California governor signs laws to protect actors against unauthorized use of AI
SACRAMENTO, Calif. -- California Gov. Gavin Newsom signed off Tuesday on legislation aimed at protecting Hollywood actors and performers against unauthorized artificial intelligence that could be used to create digital clones of themselves without their consent. The new laws come as California legislators have ramped up efforts this year to regulate the marquee industry that is increasingly affecting the daily lives of Americans but has had little to no oversight in the United States. The laws also reflect the priorities of the Democratic governor, who's walking a tightrope between protecting the public and workers against potential AI risks and nurturing the rapidly evolving homegrown industry. "We continue to wade through uncharted territory when it comes to how AI and digital media is transforming the entertainment industry, but our North Star has always been to protect workers," Newsom said in a statement. "This legislation ensures the industry can continue thriving while strengthening protections for workers and how their likeness can or cannot be used." Inspired by the Hollywood actors' strike last year over low wages and concerns that studios would use AI technology to replace workers, a new California law will allow performers to back out of existing contracts if vague language might allow studios to freely use AI to digitally clone their voices and likenesses. The law is set to take effect in 2025 and has the support of the California Labor Federation and the Screen Actors Guild-American Federation of Television and Radio Artists, or SAG-AFTRA. Another law signed by Newsom, also supported by SAG-AFTRA, prevents dead performers from being digitally cloned for commercial purposes without the permission of their estates. Supporters said the law is crucial to curb the practice, citing the case of a media company that produced a fake, AI-generated hourlong comedy special recreating the late comedian George Carlin's style and material without his estate's consent.
"It is a momentous day for SAG-AFTRA members and everyone else because the AI protections we fought so hard for last year are now expanded upon by California law thanks to the legislature and Governor Gavin Newsom," SAG-AFTRA President Fran Drescher said in a statement. "They say as California goes, so goes the nation!" California is among the first states in the nation to establish performer protections against AI. Tennessee, long known as the birthplace of country music and the launchpad for musical legends, led the country by enacting a law protecting musicians and artists in March. Supporters of the new laws said they will help encourage responsible AI use without stifling innovation. Opponents, including the California Chamber of Commerce, said the new laws are likely unenforceable and could lead to lengthy legal battles in the future. The two new laws are among a slew of measures passed by lawmakers this year in an attempt to rein in the AI industry. Newsom signaled in July that he will sign a proposal to crack down on election deepfakes but has not weighed in on other legislation, including one bill that would establish first-in-the-nation safety measures for large AI models. The governor has until Sept. 30 to sign the proposals, veto them or let them become law without his signature.
[26]
California Governor Signs Actor Union-Supported AI Deepfake Bills into Law - Decrypt
California Governor Gavin Newsom signed two bills into law on Tuesday that require the consent of actors and performers before a digital replica can be created and used. The bills, AB2602 and AB1836, also aim to put in place protections from unauthorized AI-generated deepfakes for living and deceased performers. Last year, the use of artificial intelligence became a major sticking point in negotiations between SAG-AFTRA and the Alliance of Motion Picture and Television Producers (AMPTP). Talks broke down over several issues, including protecting background actors -- who were offered just one day of pay in exchange for allowing studios to create digital avatars of them for future use -- and securing residual pay from streaming platforms. Those issues led to a months-long strike that brought Hollywood productions to a standstill. In November, after talks resumed, a deal was struck between the two groups that SAG-AFTRA said established detailed informed consent and compensation guardrails for the use of AI. The new laws appear to strengthen actors' rights. First introduced in September 2023 by Assemblymember Ash Kalra (D-San Jose), CA Assembly Bill 2602 requires contracts to specify when AI-generated replicas are being created and to clearly state the conditions under which these replicas will be used. The bill, which requires actors to have legal representation when entering into AI-related rights contracts, does not specify the penalty for violating the new law. "We talk about California being a state of dreamers and doers," Newsom said in a video post on X. "A lot of dreamers come to California, but sometimes they're not well-represented and with [SAG-AFTRA] and this bill I just signed, we're making sure that no one turns over their name, image, and likeness to unscrupulous people without representation or union advocacy."
Introduced by Assemblymember Rebecca Bauer-Kahan (D-Orinda), CA Assembly Bill 1836 prohibits the creation of digital replicas of deceased actors and performers for commercial purposes without the permission of the performer's estate. Violators face damages of at least $10,000. SAG-AFTRA, which had been in negotiations with studios over the use of artificial intelligence and compensation for the use of video game performers' voices and likenesses, cheered the new laws. "AB 1836 and AB 2602 represent much-needed legislation prioritizing the rights of individuals in the A.I. age," SAG-AFTRA National Executive Director and Chief Negotiator Duncan Crabtree-Ireland said in a statement. "No one should live in fear of becoming someone else's unpaid digital puppet."
[27]
California passes landmark regulation to require permission from actors for AI deepfakes
Gov. Gavin Newsom also signed a second bill requiring consent from the estates of deceased performers. California has passed a landmark AI regulation bill to protect performers' digital likenesses. On Tuesday, Governor Gavin Newsom signed Assembly Bill 2602, which will go into effect on January 1, 2025. The bill requires studios and other employers to get consent before using "digital replicas" of performers. Newsom also signed AB 1836, which grants similar rights to deceased performers, requiring their estates' permission before using their AI likenesses. AB 2602, introduced in April, covers film, TV, video games, commercials, audiobooks, and non-union performing jobs. Deadline notes its terms are similar to those in the contract that ended the 2023 actors' strike against Hollywood studios. SAG-AFTRA, the film and TV actors' union that held out for last year's deal, strongly supported the bill. The Motion Picture Association first opposed the legislation but later switched to a neutral stance after revisions. The bill mandates that employers can't use an AI deepfake of an actor's voice or likeness if it replaces work the performer could have done in person. It also prevents digital replicas if the actor's contract doesn't explicitly state how the deepfake will be used, and it voids any such deals signed when the performer didn't have legal or union representation. The bill defines a digital replica as a "computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that is embodied in a sound recording, image, audiovisual work, or transmission in which the actual individual either did not actually perform or appear, or the actual individual did perform or appear, but the fundamental character of the performance or appearance has been materially altered." Meanwhile, AB 1836 expands California's postmortem right of publicity.
Hollywood must now get permission from the estates of deceased performers before using their digital replicas. Deadline notes that exceptions were included for "satire, comment, criticism and parody, and for certain documentary, biographical or historical projects." "The bill, which protects not only SAG-AFTRA performers but all performers, is a huge step forward," SAG-AFTRA chief negotiator Duncan Crabtree-Ireland told The LA Times in late August. "Voice and likeness rights, in an age of digital replication, must have strong guardrails around licensing to protect from abuse; this bill provides those guardrails." AB 2602 passed the California State Senate on August 27 with a 37-1 tally. (The lone no vote came from State Senator Brian Dahle, a Republican.) The bill then returned to the Assembly (which passed an earlier version in May) to formalize revisions made during Senate negotiations. On Tuesday, SAG-AFTRA President Fran Drescher celebrated the passage, which the union fought for. "It is a momentous day for SAG-AFTRA members and everyone else, because the A.I. protections we fought so hard for last year are now expanded upon by California law thanks to the Legislature and Gov. Gavin Newsom," Drescher said.
[28]
SF firm fights deepfakes as lawmakers target deceptive election videos
Gov. Gavin Newsom signed three bills into law on Tuesday requiring social media companies to do more to police deepfake videos, especially as we head into this year's election cycle. It's all in an effort to prevent misinformation from spreading, and one company here in San Francisco is developing new software to detect altered images. Some deepfake videos are easy to spot, like the spoof Tom Cruise presidential campaign ad in 2020, or former Gov. Arnold Schwarzenegger appearing in "The Wizard of Oz". Unfortunately, other deepfake videos aren't as obvious, like a recent fake Kamala Harris presidential campaign parody ad. That's where Hive comes in. "Usually if you saw a video that looked real, you'd believe what it said, but now that's not the case, right?" said Kevin Guo, the company's CEO. The San Francisco-based company started out in content moderation but has since focused on identifying AI-generated pictures and videos, using advanced algorithms and AI to detect the fakes. "The way our models work don't really interpret the content the way that a human does. It's really looking at kind of on a pixel level basis in a way that's almost invisible to humans, but there is that signature, that watermark there that our models have figured out," Guo told CBS News Bay Area. Elon Musk retweeted that Harris campaign parody ad, resulting in a Twitter spat with Newsom. "The federal government, humbly I submit, for many different reasons has failed to regulate. In the absence of that regulatory framework, California asserts itself. We feel a responsibility, particularly as the birthplace of so many of these technologies and so much of this innovation," Newsom said at the Salesforce Dreamforce conference in San Francisco on Tuesday. While on stage with CEO Marc Benioff, the governor signed three bills into law that require social media companies to label deepfake videos as parody - or remove them completely.
The bills also limit people and companies from sharing deepfakes that spread political misinformation within 120 days of an election. "The ability for bad actors to kind of influence elections now with widespread misinformation campaigns, this is pretty unprecedented. You can pass a bill that bans political deepfakes, but you still have to be able to detect these deepfakes and kind of suss them out in some way," says Guo. California Rep. Adam Schiff, who is also running for Senate, introduced bipartisan legislation Tuesday at the federal level to increase regulation around deepfakes. That bill would give the Federal Election Commission the power to regulate the use of AI in campaign ads.
[29]
Gavin Newsom signs bills to help provide AI protections for actors
California Gov. Gavin Newsom signed two bills Tuesday aimed at protecting actors and other performers from unauthorized use of their digital likenesses. Introduced in the state Legislature early this year, the bills specify new legal protections -- both during performers' lifetimes and after death -- around the digital replication of their image or voice. Newsom, who governs a state that's home to the biggest entertainment market in the world, signed them into law amid mounting concern over the impact of artificial intelligence on artists' labor. "We talk about California being a state of dreamers and doers. A lot of dreamers come to California, but sometimes they're not well-represented," Newsom said in a video shared on social media Tuesday. "And with SAG and this bill I just signed, we're making sure that no one turns over their name, image and likeness to unscrupulous people without representation or union advocacy." He was joined in the video by Fran Drescher, the president of SAG-AFTRA, which represents about 160,000 media professionals. The union has strongly advocated for the new laws, along with other protections for actors and other performers surrounding AI. Drescher said the legislation could "speak to people all over the world that are feeling threatened by AI." "And even though there are smart people that come up with these inventions, I don't think that they think it all the way through of what will happen when humans don't have a place to make a living and continue to feed their families," she said in the video. One of the laws, AB 2602, protects artists from being bound to contracts that allow the use of their digital voices or images, whether in lieu of their actual work or to train AI.
Such clauses would be considered unfair and against public policy, according to the law, which applies to both past and future contracts. It also requires anyone who has such a contract to notify the other party in writing by Feb. 1 that the clause is no longer valid. The other law, AB 1836, specifically protects digital likenesses as part of performers' posthumous right of publicity, a legal right that protects people's identities from unauthorized commercial use. It allows the rights holders for deceased personalities to sue if digital replicas of them are used without permission in movies or sound recordings. The rights holders would be entitled to at least $10,000 or the amount of actual damages caused by the unauthorized use, whichever figure is greater. It's a change that bolsters performers' rights in an area that's already subject to legal conflict. Drake this year pulled a diss track he promoted online that used an AI-generated version of the late Tupac Shakur's voice after Shakur's estate threatened to sue him. The future of generative AI -- and how it can be used to replace human work -- was a crucial sticking point for actors and writers during last summer's Hollywood strikes. At the start of this year, SAG-AFTRA brokered a controversial deal with an AI voice technology company to allow the consensual licensing of digitally replicated voices for video games. In July, Hollywood video game performers voted to strike over continual AI concerns. In recent years, the likenesses of actors such as Tom Hanks and Scarlett Johansson, along with a slew of fellow celebrities and influencers, have been used in nonconsensual deepfake advertisements. Many performers and tech companies are also waiting to see whether Newsom will sign a third bill, SB 1047, that would require AI developers to comply with certain safety and security guidelines before they train their AI models. 
The legislation has gotten support from SAG-AFTRA, nonprofit advocacy groups and the likes of actor Mark Ruffalo, who posted a video on X last weekend urging Newsom to sign. "All the big tech companies and billionaire tech boys in Silicon Valley don't want to see this happen, which should make us all start looking at why immediately," Ruffalo said in his video. "But AI is about to explode, and in a way that we have no idea what the consequences are."
[30]
Cali governor signs five AI bills in one day
Newsom still worried about SB 1047's 'chilling effect' on AI innovation in California California Governor Gavin Newsom signed five AI-related bills into law this week, but a pivotal one remains unsigned, and the Democrat politico isn't sure about its future. Speaking to Salesforce CEO Marc Benioff yesterday at the company's Dreamforce conference, Newsom expressed doubt as to whether Senate Bill 1047 was the right approach to broad regulation of the AI industry. SB 1047, which would impose a series of guardrails and transparency requirements on large AI models, has been divisive in the Golden State since being introduced in February by state senator Scott Wiener. On one side is the AI industry, which has resisted the bill and called it heavy-handed; on the other side is the public, which has largely supported the bill as written. The bill would place a number of requirements on AI firms, but only the largest companies with the biggest models are covered. The only things SB 1047 creates enforcement powers over are "frontier models," which it defines as those requiring more than 10^26 floating-point operations to train, or costing more than $100 million to train, based on average market prices. Companies training those models would be required to report their AI's potential to cause critical harm, define protections to prevent such harm, and include kill switches on AI to prevent them from going haywire. The AI industry has already managed to get several changes made to the bill, putting limits on enforcement penalties, eliminating perjury provisions for lying about AI models, and otherwise softening language and requirements. Newsom, who has had SB 1047 on his desk for more than a week, still seems uncertain which side to take. Speaking to Benioff, Newsom said that California has spent the past couple of years working to come up with "rational regulation that supports risk taking, but not recklessness," and he's unsure SB 1047 is the right fit for that philosophy.
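The "frontier model" thresholds described above boil down to a simple either/or comparison. A minimal sketch of that test (the 10^26-FLOP and $100 million figures come from the reporting here; the function and constant names are hypothetical, and a real coverage determination would involve far more than this arithmetic):

```python
# Hypothetical sketch of SB 1047's reported "frontier model" test:
# covered if training compute exceeds 1e26 floating-point operations,
# or training cost exceeds $100 million at average market prices.
FLOP_THRESHOLD = 1e26
COST_THRESHOLD_USD = 100_000_000

def is_frontier_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if either reported threshold is exceeded."""
    return training_flops > FLOP_THRESHOLD or training_cost_usd > COST_THRESHOLD_USD

# A model trained with 3e26 FLOPs is covered even at modest cost;
# a smaller, cheaper model is not.
print(is_frontier_model(3e26, 50_000_000))  # True
print(is_frontier_model(1e25, 20_000_000))  # False
```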
Regulation is "challenging now in this space, particularly with SB 1047, because of the sort of outsized impact that legislation could have, and the chilling effect, particularly in the open source community," Newsom told Benioff. Newsom doesn't hesitate to veto AI regulation, having killed regulations that would have required autonomous trucks to have human operators last year. His reasoning all along has been that if the AI industry is pushed too hard, it might flee California and take its billions in funding with it. "We dominate in [the tech] space. I want to continue to dominate in this space," Newsom said in May. "I don't want to cede this space to other states or other countries." What that means for the future of SB 1047 is unclear. We've contacted Newsom's and Wiener's offices for comment, but haven't heard back. Uncertain future of SB 1047 aside, Newsom is also perfectly willing to sign AI regulation that he thinks is appropriate - he signed five such bills yesterday alone. Two of the bills center on protecting actors from AI-driven misuse of their likeness, Assembly Bills 2602 and 1836. The first requires contracts to specify any potential use of AI-generated digital replicas of an actor's likeness or voice, while the latter prohibits the use of digital replicas of deceased performers without the consent of their estate. "We continue to wade through uncharted territory when it comes to how AI and digital media is transforming the entertainment industry, but our North Star has always been to protect workers," Newsom said. "This legislation ensures the industry can continue thriving while strengthening protections for workers and how their likeness can or cannot be used." The other three AI bills, signed on stage at Dreamforce, all center on preventing the spread of AI-generated or modified election misinformation. 
AB 2655 requires large online platforms (those with more than one million Californian users over the past 12 months) to remove or label "deceptive and digitally altered or created content" during the 120 days leading up to an election, and for 60 days thereafter. AB 2839 makes it illegal to knowingly distribute deceptive or altered content related to an election within the same time frame as AB 2655, and expands the scope of existing law to prohibit deceptive content about political officials, candidates, and election workers. AB 2355, finally, requires disclosure of the use of AI in any campaign material. All three laws include provisions for victims and others to file civil actions or seek injunctive relief. "There are a lot of deepfakes out there. There's not a lot of disclosure or labeling," Newsom said of the three laws before pulling them out of his jacket and signing them to applause. "Why waste your time with a politician unless they're going to do something for you?" ®
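The timing rule in AB 2655 (and the matching window in AB 2839) is effectively a date-range check: content is restricted from 120 days before an election through 60 days after it. A minimal sketch of that window logic (the function name, defaults, and example dates are illustrative, not drawn from the statutes):

```python
from datetime import date, timedelta

def in_restricted_window(content_date: date, election_date: date,
                         days_before: int = 120, days_after: int = 60) -> bool:
    """True if content_date falls inside the reported pre/post-election window."""
    start = election_date - timedelta(days=days_before)
    end = election_date + timedelta(days=days_after)
    return start <= content_date <= end

election = date(2024, 11, 5)
print(in_restricted_window(date(2024, 9, 17), election))  # True: inside the 120 days before
print(in_restricted_window(date(2024, 5, 1), election))   # False: well before the window opens
```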
[31]
Gov. Gavin Newsom Signs Bills Regulating AI Performance Replicas Into Law
California Gov. Gavin Newsom has signed two union-supported bills restricting the use of AI digital replicas of performers into law. In a symbolic move, the governor visited the Los Angeles headquarters of performers' union SAG-AFTRA on Tuesday to officially greenlight the bills, AB 2602 and AB 1836, which were passed by the California state Senate in August. SAG-AFTRA sponsored both bills after instituting initial AI protections for members in its 2023 TV/theatrical contract. AB 2602 bars contract provisions that facilitate the use of a digital replica of a performer in a project instead of an in-person performance from that human being, unless there is a "reasonably specific" description of the intended use of the digital replica and the performer was represented by legal counsel or a labor union in negotiations. AB 1836, meanwhile, requires entertainment employers to gain the consent of a deceased performer's estate before using a digital replica of that person. The new law refines an "expressive works" exemption from the state's existing postmortem right of publicity laws that entertainment companies otherwise could have pointed to in an era of AI digital replicas. "We talk about California as being a state of dreamers and doers. A lot of dreamers come to California but sometimes they're not well-represented," Newsom said in a video released on Drescher's and the CA governor's Instagram pages on Tuesday. "And with SAG and this bill I just signed, we're making sure that no one turns over their name, image and likeness to unscrupulous people without representation or union advocacy." The bills enshrine some of the major concepts that SAG-AFTRA fought for during its 2023 strike into state law.
In the 2023 contract reached at the end of the 118-day strike with studios and streamers, the union secured language requiring employers to get consent from performers and provide a description of intended use when using a digital replica tied to an in-person job and when using one not associated with in-person employment. The 2023 contract also requires employers to get the consent of a deceased performer's estate (or union if no other representatives are available) for an independently created digital replica. In a statement, Drescher called Tuesday "a momentous day for SAG-AFTRA members and everyone." She added that this was because "the A.I. protections we fought so hard for last year are now expanded upon by California law thanks to the Legislature and Gov. Gavin Newsom." The union is continuing to advocate for further regulation of AI-facilitated digital replicas and synthetic performers. SAG-AFTRA supported Tennessee's Ensuring Likeness Voice and Image Security (ELVIS) Act, which was signed into law in March, and is pushing for the passage of a federal bill called the Nurture Originals, Foster Art and Keep Entertainment Safe (NO FAKES) Act.
[32]
California governor signs bills offering AI protections for actors
LOS ANGELES - California Gov. Gavin Newsom on Tuesday signed into law two bills that will give actors more protections over their digital likenesses, addressing concerns brought up during last year's Hollywood strike led by performers guild SAG-AFTRA. One of the bills, AB1836, prohibits and penalizes the making and distribution of a deceased person's digital replica without permission from their estate. The other legislation, AB2602, makes a contract provision entered into after Jan. 1, 2025, unenforceable if it allows a digital replica of an actor to be used when the individual could have performed the work in person, the contract does not include a reasonably specific description of how the digital replica would be used, and the actor was not represented by their lawyer or labor union when the deal was signed. "No one should live in fear of becoming someone else's unpaid digital puppet," said Duncan Crabtree-Ireland, SAG-AFTRA's national executive director and chief negotiator, in a statement. "Gov. Newsom has led the way in protecting people - and families - from A.I. replication without real consent." Newsom signed the bills at SAG-AFTRA's headquarters in Los Angeles on Tuesday. "We're making sure that no one turns over their name, image and likeness to unscrupulous people without representation or union advocacy," Newsom said in a video posted on SAG-AFTRA's Instagram account. SAG-AFTRA President Fran Drescher called it a momentous day because the AI protections the union fought for last year have been expanded into state law. "A.I. poses a threat not just to performers in the entertainment industry, but to workers in all fields, in all industries everywhere," Drescher said in a statement. "No technology should be introduced into society without extreme caution and careful consideration of its long-term impact on humanity and the natural world." AI remains a hot topic in Hollywood, as many workers are concerned that the rapidly advancing technology will eliminate jobs.
But proponents for the new technology say that AI could be a powerful tool for creatives, allowing them to test bold ideas without being as constrained by budgets. The new laws were part of a slew of roughly 50 AI-related bills in the Legislature, brought as the state's political leaders are trying to address the concerns raised by the public about AI.
[33]
California governor signs rules limiting AI actor clones
California governor Gavin Newsom has signed two bills that will protect performers from having their likeness simulated by AI digital replicas. The two SAG-AFTRA-supported bills, AB 2602 and AB 1836, were passed by the California legislature in August and are part of a slate of state-level AI regulations. AB 2602 bars contract provisions that would let companies use a digital version of a performer in a project instead of the real human actor, unless the performer knows exactly how their digital stand-in will be used and has a lawyer or union representative involved. AB 1836 says that if a performer has died, entertainment companies must get permission from their family or estate before producing or distributing a "digital replica" of them. The law specifies that these replicas don't fall under an exemption that lets works of art represent people's likeness without permission, closing what The Hollywood Reporter characterizes as a potential loophole for AI companies. "We're making sure that no one turns over their name, image, and likeness to unscrupulous people without representation," Newsom said in a video posted to his Instagram on Tuesday, where he's seen alongside SAG-AFTRA president Fran Drescher. The two bills' signing may bode well for the fate of arguably the biggest legal disruption to the AI industry: California's SB 1047, which currently sits on Newsom's desk awaiting his decision. SAG-AFTRA has also publicly supported SB 1047. But the bill has drawn opposition from much of the AI industry -- which has until the end of September to lobby for its veto.
[34]
California Governor Signs Legislation to Protect Entertainers From AI
WASHINGTON (Reuters) - California Governor Gavin Newsom signed two bills into law on Tuesday that aim to help actors and performers protect their digital replicas in audio and visual productions from artificial intelligence, the governor's office said. WHY IT'S IMPORTANT While the presence of AI in the entertainment industry can be traced back decades, recent groundbreaking advances in generative AI, with robots now making music as digital pop stars, have divided opinions in the industry. Performers fear AI will make theft of their likenesses common and many experts have raised legal and ethical concerns. KEY QUOTES One of the bills Newsom signed "requires contracts to specify the use of AI-generated digital replicas of a performer's voice or likeness, and the performer must be professionally represented in negotiating the contract," his office said. The other bill "prohibits commercial use of digital replicas of deceased performers in films, TV shows, video games, audiobooks, sound recordings and more, without first obtaining the consent of those performers' estates," the statement from Newsom's office added. CONTEXT More broadly, the rise of AI has fed a host of other concerns as well, including the fear that it could be used to disrupt the democratic process, turbocharge fraud or lead to job loss. Democratic U.S. President Joe Biden's administration has pressed lawmakers for AI regulation, but a polarized U.S. Congress, where Republicans control the House of Representatives and Democrats control the Senate, has made little headway in passing effective regulation. In March, Tennessee Governor Bill Lee signed a bill into law that aimed to protect artists, including musicians, from unauthorized use by artificial intelligence. (Reporting by Kanishka Singh in Washington; Editing by Aurora Ellis)
[35]
California passes protections for performers' likeness from AI without contract permission
Former President Donald Trump falsely claimed that the crowd at the Harris-Walz rally in Michigan was A.I. generated. Pictures and videos from the event helped to combat his claims. California has passed a pair of bills meant to protect the digital likeness of actors and performers from artificial intelligence. The two bills, signed by Gov. Gavin Newsom Tuesday, are meant to strengthen protections for workers in audio and visual productions amid the rapidly evolving AI industry, according to a news release. AB 2602 requires contracts to specify when AI-generated digital replicas of a performer's voice or likeness will be used, with the performer's permission. Performers must also be professionally represented in these contract negotiations, the news release stated. The other law, AB 1836, prohibits the commercial use of digital replicas of deceased performers without the consent of their estate. The law was designed to curb the use of deceased performers in films, TV shows, audiobooks, video games and other media using work from when they were alive, the news release added. "A lot of dreamers come to California but sometimes they're not well represented," Newsom said in a video posted to X Tuesday. "And with SAG and this bill I just signed we're making sure that no one turns over their name and likeness to unscrupulous people without representation or union advocacy." Laws come after actors union strike for AI protections The legislation echoes sentiments by Hollywood actors guild SAG-AFTRA, which negotiated for stronger protections from AI during the dual strikes last year. "To have now the state of California and your support in making sure that we are protected with our likeness and everything it just means the world," SAG-AFTRA President Fran Drescher told Newsom in the X video. "Your actions today are going to speak to people all over the world that are feeling threatened by AI."
The historic 118-day actors strike lasted until last November as performers fought for better wages in the streaming age as well as AI safeguards. "AI was a deal breaker," Drescher said in November. "If we didn't get that package, then what are we doing to protect our members?" About 86% of the SAG-AFTRA national board approved the deal, which also incorporated benefits like pay raises and a "streaming participation bonus." Video game performers on strike over AI protections Since July 26, video game voice actors and motion-capture performers have been on strike following failed labor contract negotiations surrounding AI protections for workers. Negotiations with major video game companies including Activision Productions, Electronic Arts and Epic Games have been ongoing since the union's contract expired in November 2022. "Although agreements have been reached on many issues important to SAG-AFTRA members, the employers refuse to plainly affirm, in clear and enforceable language, that they will protect all performers covered by this contract in their AI language," SAG-AFTRA said in a statement.
[37]
Newsom signs bills to protect actors, performers from AI
California Gov. Gavin Newsom signed two bills on Tuesday that aim to protect actors and performers from having their names, images and likenesses copied by artificial intelligence without authorization. "We're making sure that no one turns over their name, image and likeness to unscrupulous people without representation or union advocacy," Newsom said in a video posted on X. One of the bills protects actors and performers from binding contracts that allow digital replicas of their voices or images to be used in place of in-person work unless the performer has representation. The second law protects an artist's digital likeness even after death -- meaning entertainment companies must get permission from the performer's family or estate before making a digital replica. Fran Drescher, the president of SAG-AFTRA, a labor union representing performers and broadcasters, joined the governor. "This is really a momentous experience because we worked so hard in our TV/theatrical contract negotiation and subsequent strike, and to have now the state of California and your support in making sure that we are protected with our likeness and everything -- it just means the world," Drescher said in the video. She added that Newsom's actions will resonate with people "all over the world that are feeling threatened by AI."
[38]
California bills protecting actors, performers from A.I. signed into law by Gov. Newsom
Gov. Gavin Newsom has signed two bills into law that set out to protect actors and performers from artificial intelligence replicas of their likeness or voice being used without their consent. Such protections were at the forefront of labor negotiations during the monthslong strike last year by SAG-AFTRA, or the Screen Actors Guild and the American Federation of Television and Radio Artists. Several actors have spoken out against A.I. replicas of their image or voice potentially being used without fair pay or without their consent entirely. One of the bills deals with protecting actors and other performers, like voice actors for video games, in the writing of contracts while the other is focused on protecting deceased performers who may be digitally replicated or imitated long after their death. Last year, the A.I. issue became a sticking point during SAG-AFTRA's months of labor negotiations as some union members felt the safeguards established in the final deal reached with the Alliance of Motion Picture and Television Producers, which represented employers, did not go far enough in protecting performers, the Associated Press reported. On Tuesday, the union issued a statement applauding Newsom's signing of the two new laws, AB 1836 and AB 2602. "No one should live in fear of becoming someone else's unpaid digital puppet," SAG-AFTRA National Executive Director and Chief Negotiator Duncan Crabtree-Ireland said in the statement, describing the bills as "much-needed legislation prioritizing the rights of individuals in the A.I. age." SAG-AFTRA President Fran Drescher, best known for her leading role on "The Nanny," spoke alongside Newsom in a post to the governor's X account. She sat next to Newsom as he signed the bills at the union's Los Angeles headquarters. "Your actions today are going to speak to people all over the world that are feeling threatened by A.I.," Drescher said. 
"And even though there are smart people that come up with these inventions, I don't think they think it all the way through of what will happen when humans don't have a place to make a living and continue to feed their families." AB 2602, reintroduced this year by Assemblymember Ash Kalra (D-San José), requires labor contracts to specify if A.I.-generated replicas of a performer -- imitating their likeness or voice -- are being used. It also mandates that the performer is professionally represented in such contracts. SAG-AFTRA's statement said the law is "the first of its kind in the United States." Kalra has said it is necessary since some performers may not have the same A.I. protections as unionized entertainers like actors who are members of SAG-AFTRA. "Union-represented actors may now have a collective bargaining agreement that includes safeguards against AI, but other performers like voice actors for media such as audio books, video games, and more, deserve the same legal safeguards," Kalra said in a statement earlier this year. "AB 2602 will codify these critical labor protections and ensure performers maintain a seat at the table." Video game actors with SAG-AFTRA began going on strike earlier this summer over uses of A.I., with the union announcing on Monday that they would continue protests later this week outside Disney Character Voices in Burbank. The other newly established state law, AB 1836, was introduced by Assemblymember Rebecca Bauer-Kahan (D-Orinda) and bans the commercial use of digital replicas of performers who are deceased in TV shows, films, video games and more -- without first getting the consent of the estates representing the late performers. It updates current legislation while also removing certain exemptions for TV and film that currently exist. Kahan has described the law as a necessary measure given how new technology has changed things and allowed entertainers, or their performances, to be recreated long after they have passed. 
"It is now possible to create new performances of artists even after their death," Bauer-Kahan said in a statement from the governor's office. "Individuals and their estates deserve protections that extend beyond their life to ensure they control their own likeness and profit from it; that is exactly what AB 1836 does." Just as in so many other fields of work, the entertainment industry has grappled with establishing how and when artificial intelligence should be used -- a balance between facing the realities of the digital age and protecting workers against unfair or even exploitative practices. In fact, the deal SAG-AFTRA ultimately reached last year was not welcomed by all union members, specifically because of the A.I. issue. While the contract offered some safeguards, establishing guidelines for how and when digital replicas can be used, some union members felt those protections simply were not enough and opposed the deal for that reason. "If we set aside the AI issue, it would have been ratified by 99% of members probably," Crabtree-Ireland told the Associated Press. According to Crabtree-Ireland, the union played a role in getting the two new pieces of legislation passed and signed by Newsom. In a statement from the union, he said SAG-AFTRA Secretary-Treasurer Joely Fisher reached out to the governor's office to advocate for the legislation, a step he described as pivotal in "securing its ultimate approval." Meanwhile, he said, Vice President and Los Angeles Local President Jodi Long testified before the California Senate Judiciary Committee in Sacramento to advocate for the new laws.
[39]
California governor signs legislation to protect entertainers from AI
While the presence of AI in the entertainment industry can be traced back decades, recent groundbreaking advances in generative AI, with robots now making music as digital pop stars, have divided opinions in the industry. Performers fear AI will make theft of their likenesses common, and many experts have raised legal and ethical concerns. One of the bills Newsom signed "requires contracts to specify the use of AI-generated digital replicas of a performer's voice or likeness, and the performer must be professionally represented in negotiating the contract," his office said. The other bill "prohibits commercial use of digital replicas of deceased performers in films, TV shows, video games, audiobooks, sound recordings and more, without first obtaining the consent of those performers' estates," the statement from Newsom's office added. More broadly, the rise of AI has fed a host of other concerns, including the fear that it could be used to disrupt the democratic process, turbocharge fraud or lead to job loss. Democratic U.S. President Joe Biden's administration has pressed lawmakers for AI regulation, but a polarized U.S. Congress, where Republicans control the House of Representatives and Democrats control the Senate, has made little headway in passing effective regulation. In March, Tennessee Governor Bill Lee signed a bill into law aimed at protecting artists, including musicians, from unauthorized use by artificial intelligence. (Reporting by Kanishka Singh in Washington; Editing by Aurora Ellis)
California Governor Gavin Newsom signs new laws to address the growing threat of AI-generated deepfakes in elections. The legislation aims to protect voters from misinformation and maintain election integrity.
In a significant move to safeguard the integrity of elections, California Governor Gavin Newsom has signed into law a set of bills aimed at combating the rising threat of artificial intelligence-generated deepfakes. These laws, set to take effect in 2024, represent a proactive stance against the potential misuse of AI technology in political campaigns [1].
The newly enacted laws introduce several crucial measures:
Mandatory Disclosure: Political campaigns using AI-generated media that depict real people must now include a clear disclaimer stating that the content is AI-generated [2].
Restrictions on Timing: The creation and distribution of AI-generated deepfakes of candidates are prohibited within 120 days of an election, unless they carry a prominent disclosure [3].
Legal Recourse: Candidates now have the right to seek court orders to remove or cease the distribution of deceptive AI-generated content [4].
While these laws mark a significant step forward, they face potential challenges:
First Amendment Concerns: Critics argue that the laws might infringe on free speech rights, potentially leading to legal challenges [5].
Enforcement Difficulties: The rapid advancement of AI technology may make it challenging to detect and regulate all instances of deepfakes effectively [1].
Cross-Border Issues: The laws' effectiveness may be limited by content created and distributed from outside California [3].
California's initiative could serve as a model for other states and potentially federal legislation. As AI technology continues to evolve, the effectiveness of these laws will be closely watched, potentially leading to further refinements and adaptations in the future [2].
The implementation of these laws represents a critical juncture in the ongoing struggle to maintain election integrity in the face of rapidly advancing technology. As the 2024 elections approach, California's approach will be put to the test, potentially shaping the future of election laws across the nation [4].
Reference
[2]
U.S. News & World Report | California Governor Signs Laws to Crack Down on Election Deepfakes Created by AI
[3]
California Governor Gavin Newsom has signed multiple AI-related bills into law, addressing concerns about deepfakes, actor impersonation, and AI regulation. These new laws aim to protect individuals and establish guidelines for AI use in various sectors.
California's recently enacted law targeting AI-generated deepfakes in elections is being put to the test, as Elon Musk's reposting of Kamala Harris parody videos sparks debate and potential legal challenges.
California's legislature has passed a series of bills aimed at regulating artificial intelligence, including a ban on deepfakes in elections and measures to protect workers from AI-driven discrimination. These laws position California as a leader in AI regulation in the United States.
Governor Gavin Newsom signs bills closing legal loopholes and criminalizing AI-generated child sexual abuse material, positioning California as a leader in AI regulation.
Elon Musk's social media platform X has filed a lawsuit against California's new law targeting AI-generated deepfakes in elections, claiming it violates free speech protections.