On Sun, 1 Sept, 4:00 PM UTC
7 Sources
[1]
California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI
SACRAMENTO, Calif. -- California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.

The California Legislature, which is controlled by Democrats, is voting on hundreds of bills during its final week of the session to send to Gov. Gavin Newsom's desk. Their deadline is Saturday. The Democratic governor has until Sept. 30 to sign the proposals, veto them or let them become law without his signature. Newsom signaled in July he will sign a proposal to crack down on election deepfakes but has not weighed in on other legislation. He warned earlier this summer that overregulation could hurt the homegrown industry. In recent years, he often has cited the state's budget troubles when rejecting legislation that he would otherwise support. Here is a look at some of the AI bills lawmakers approved this year.

Combating deepfakes

Citing concerns over how AI tools are increasingly being used to trick voters and generate deepfake pornography of minors, California lawmakers approved several bills this week to crack down on the practice. Lawmakers approved legislation to ban deepfakes related to elections and require large social media platforms to remove the deceptive material 120 days before Election Day and 60 days thereafter. Campaigns also would be required to publicly disclose if they're running ads with materials altered by AI. A pair of proposals would make it illegal to use AI tools to create images and videos of child sexual abuse. Current law does not allow district attorneys to go after people who possess or distribute AI-generated child sexual abuse images if they cannot prove the materials depict a real person. Tech companies and social media platforms would be required to provide AI detection tools to users under another proposal.

Setting safety guardrails

California could become the first state in the nation to set sweeping safety measures on large AI models. The legislation sent by lawmakers to the governor's desk requires developers to start disclosing what data they use to train their models. The efforts aim to shed more light on how AI models work and prevent future catastrophic disasters. Another measure would require the state to set safety protocols preventing risks and algorithmic discrimination before agencies could enter any contract involving AI models used to define decisions.

Protecting workers

Inspired by the monthslong Hollywood actors strike last year, lawmakers approved a proposal to protect workers, including voice actors and audiobook performers, from being replaced by their AI-generated clones. The measure mirrors language in the contract SAG-AFTRA made with studios last December. State and local agencies would be banned from using AI to replace workers at call centers under one of the proposals. California also may create penalties for digitally cloning dead people without the consent of their estates.

Keeping up with the technology

As corporations increasingly weave AI into Americans' daily lives, state lawmakers also passed several bills to increase AI literacy. One proposal would require a state working group to consider incorporating AI skills into math, science, history and social science curriculums. Another would develop guidelines on how schools could use AI in the classroom.
[2]
California lawmakers approve laws banning deepfakes, regulating AI
[3]
California lawmakers pass bills to ban deepfakes and set safety measures on large AI models
[4]
California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI
[5]
California Lawmakers Approve Legislation to Ban Deepfakes, Protect Workers and Regulate AI
[6]
This US state is banning deepfakes to protect workers and regulate AI
[7]
California is racing to combat deepfakes ahead of the election
LOS ANGELES -- Days after Vice President Kamala Harris launched her presidential bid, a video -- created with the help of artificial intelligence -- went viral. "I ... am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate," a voice that sounded like Harris' said in the fake audio track used to alter one of her campaign ads. "I was selected because I am the ultimate diversity hire." Billionaire Elon Musk -- who has endorsed Harris' Republican opponent, former President Donald Trump -- shared the video on X, then clarified two days later that it was actually meant as a parody. His initial tweet had 136 million views. The follow-up calling the video a parody garnered 26 million views. To Democrats, including California Gov. Gavin Newsom, the incident was no laughing matter, fueling calls for more regulation to combat AI-generated videos with political messages and a fresh debate over the appropriate role for government in trying to contain emerging technology. On Friday, California lawmakers gave final approval to a bill that would prohibit the distribution of deceptive campaign ads or "election communication" within 120 days of an election. Assembly Bill 2839 targets manipulated content that would harm a candidate's reputation or electoral prospects along with confidence in an election's outcome. It's meant to address videos like the one Musk shared of Harris, though it includes an exception for parody and satire. "We're looking at California entering its first-ever election during which disinformation that's powered by generative AI is going to pollute our information ecosystems like never before and millions of voters are not going to know what images, audio or video they can trust," said Assemblymember Gail Pellerin, D-Santa Cruz. "So we have to do something." Newsom has signaled he will sign the bill, which would take effect immediately, in time for the November election. The legislation updates a California law that bars people from distributing deceptive audio or visual media that intends to harm a candidate's reputation or deceive a voter within 60 days of an election. State lawmakers say the law needs to be strengthened during an election cycle in which people are already flooding social media with digitally altered videos and photos known as deepfakes. The use of deepfakes to spread misinformation has concerned lawmakers and regulators during previous election cycles. These fears increased after the release of new AI-powered tools, such as chatbots that can rapidly generate images and videos. From fake robocalls to bogus celebrity endorsements of candidates, AI-generated content is testing tech platforms and lawmakers. Under AB 2839, a candidate, election committee or elections official could seek a court order to get deepfakes pulled down. They could also sue the person who distributed or republished the deceptive material for damages. The legislation also applies to deceptive media posted 60 days after the election, including content that falsely portrays a voting machine, ballot, voting site or other election-related property in a way that is likely to undermine confidence in the outcome of elections. It doesn't apply to satire or parody that's labeled as such, or to broadcast stations if they inform viewers that what is depicted doesn't accurately represent a speech or event.
Tech industry groups oppose AB 2839, along with other bills that target online platforms for not properly moderating deceptive election content or labeling AI-generated content. "It will result in the chilling and blocking of constitutionally protected free speech," said Carl Szabo, vice president and general counsel for NetChoice. The group's members include Google, X and Snap as well as Facebook's parent company, Meta, and other tech giants. Online platforms have their own rules about manipulated media and political ads, but their policies can differ. Unlike Meta and X, TikTok doesn't allow political ads and says it may remove even labeled AI-generated content if it depicts a public figure such as a celebrity "when used for political or commercial endorsements." Truth Social, a platform created by Trump, doesn't address manipulated media in its rules about what's not allowed on its platform. Federal and state regulators are already cracking down on AI-generated content. The Federal Communications Commission in May proposed a $6 million fine against Steve Kramer, a Democratic political consultant behind a robocall that used AI to impersonate President Biden's voice. The fake call discouraged participation in New Hampshire's Democratic presidential primary in January. Kramer, who told NBC News he planned the call to bring attention to the dangers of AI in politics, also faces criminal charges of felony voter suppression and misdemeanor impersonation of a candidate. Szabo said current laws are enough to address concerns about election deepfakes. NetChoice has sued various states to stop some laws aimed at protecting children on social media, alleging they violate free speech protections under the First Amendment. "Just creating a new law doesn't do anything to stop the bad behavior, you actually need to enforce laws," Szabo said. More than two dozen states, including Washington, Arizona and Oregon, have enacted, passed or are working on legislation to regulate deepfakes, according to the consumer advocacy nonprofit Public Citizen. In 2019, California instituted a law aimed at combating manipulated media after a video that made it appear as if House Speaker Nancy Pelosi was drunk went viral on social media. Enforcing that law has been a challenge. "We did have to water it down," said Assemblymember Marc Berman, D-Menlo Park, who authored the bill. "It attracted a lot of attention to the potential risks of this technology, but I was worried that it really, at the end of the day, didn't do a lot." Rather than take legal action, said Danielle Citron, a professor at the University of Virginia School of Law, political candidates might choose to debunk a deepfake or even ignore it to limit its spread. By the time they could go through the court system, the content might already have gone viral. "These laws are important because of the message they send. They teach us something," she said, adding that they inform people who share deepfakes that there are costs. This year, lawmakers worked with the California Initiative for Technology and Democracy, a project of the nonprofit California Common Cause, on several bills to address political deepfakes. Some target online platforms that have been shielded under federal law from being held liable for content posted by users. Berman introduced a bill that requires an online platform with at least 1 million California users to remove or label certain deceptive election-related content within 120 days of an election. 
The platforms would have to take action no later than 72 hours after a user reports the post. Under AB 2655, which passed the Legislature Wednesday, the platforms would also need procedures for identifying, removing and labeling fake content. It also doesn't apply to parody or satire or news outlets that meet certain requirements. Another bill, co-authored by Assemblymember Buffy Wicks, D-Oakland, requires online platforms to label AI-generated content. While NetChoice and TechNet, another industry group, oppose the bill, ChatGPT maker OpenAI is supporting AB 3211, Reuters reported. The two bills, though, wouldn't take effect until after the election, underscoring the challenges with passing new laws as technology advances rapidly. "Part of my hope with introducing the bill is the attention that it creates, and hopefully the pressure that it puts on the social media platforms to behave right now," Berman said.
California's legislature has passed a series of bills aimed at regulating artificial intelligence, including a ban on deceptive election deepfakes and measures to protect workers from being replaced by AI-generated clones. These laws position California as a leader in AI regulation in the United States.
In a landmark move, California lawmakers have approved a comprehensive package of bills aimed at regulating artificial intelligence (AI) and protecting citizens from its potential misuse. The legislation, approved in the final week of the 2024 session, addresses various aspects of AI technology, from election integrity to worker protection [1].
One of the most significant measures is the ban on deceptive AI-generated content, commonly known as deepfakes, in political campaigns. The law prohibits the distribution of materially deceptive, AI-altered audio, images or video related to an election within 120 days of Election Day, and campaigns would have to publicly disclose when their ads use AI-altered material [2]. The move aims to prevent the spread of misinformation and maintain the integrity of the electoral process.
Another crucial aspect of the legislation focuses on safeguarding workers from being displaced by the technology. One measure protects performers, including voice actors and audiobook performers, from being replaced by AI-generated clones of themselves, mirroring language in the contract SAG-AFTRA reached with studios last December, while another would bar state and local agencies from using AI to replace workers at call centers [3].
The package also includes safety requirements for developers of large AI models, the technology behind popular AI chatbots. Companies building these models would have to evaluate their systems for risks, adopt mitigation measures and disclose what data they use to train them [4]. These requirements aim to shed more light on how AI models work and to address concerns about bias, misinformation and potential catastrophic harms.
By passing these bills, California has positioned itself as a pioneer in AI regulation within the United States. The state's approach could serve as a model for federal legislation, which is currently lacking in this rapidly evolving field [5]. Governor Gavin Newsom has until September 30 to sign the bills, veto them or let them become law without his signature, potentially setting a new standard for AI governance nationwide.
The tech industry's reaction to these new regulations has been mixed. While some companies welcome clear guidelines, others express concerns about potential limitations on innovation. As AI continues to advance, the balance between regulation and technological progress remains a critical point of discussion [2].
California's bold move in AI regulation reflects growing global concerns about the technology's impact on society, democracy, and the workforce. As these laws take effect, their implementation and effectiveness will be closely watched by policymakers, tech companies, and citizens alike, potentially influencing future AI policies across the country and beyond.
References
[1]
[2]
[3]
[4]
[5]
U.S. News & World Report | California Lawmakers Approve Legislation to Ban Deepfakes, Protect Workers and Regulate AI

California Governor Gavin Newsom signs new laws to address the growing threat of AI-generated deepfakes in elections. The legislation aims to protect voters from misinformation and maintain election integrity.
39 Sources
California Governor Gavin Newsom has signed multiple AI-related bills into law, addressing concerns about deepfakes, actor impersonation, and AI regulation. These new laws aim to protect individuals and establish guidelines for AI use in various sectors.
5 Sources
California's legislature has approved a groundbreaking bill to regulate large AI models, setting the stage for potential nationwide standards. The bill, if signed into law, would require companies to evaluate AI systems for risks and implement mitigation measures.
7 Sources
Governor Gavin Newsom signs bills closing legal loopholes and criminalizing AI-generated child sexual abuse material, positioning California as a leader in AI regulation.
7 Sources
California's recently enacted law targeting AI-generated deepfakes in elections is being put to the test, as Elon Musk's reposting of Kamala Harris parody videos sparks debate and potential legal challenges.
6 Sources