Curated by THEOUTPOST
On Mon, 30 Sept, 12:01 AM UTC
47 Sources
[1]
Author of Vetoed California AI Bill Says Issue 'Not Going Away'
The architect of California's controversial artificial intelligence safety bill said local lawmakers will continue pushing for guardrails on the technology after Governor Gavin Newsom's decision Sunday to veto the legislation. "It's disappointing that this veto happened, but this issue isn't going away," Democratic California Senator Scott Wiener said in an interview Monday with Bloomberg Television. "We are going to get the job done."
[2]
California governor vetoes controversial AI safety bill
California Governor Gavin Newsom has vetoed a controversial AI bill, though don't assume it was necessarily a final win for the tech industry. On Sunday, Newsom (D) returned California Senate Bill 1047 to the legislature unsigned, explaining in an accompanying statement [PDF] that the bill doesn't take the right approach to ensuring or requiring AI safety. That said, the matter isn't concluded: Newsom wants the US state's lawmakers to hand him a better bill. "Let me be clear - I agree with the [bill's] author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public," Newsom said. "I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities." Newsom's criticism of the bill centers on the sort of AI models it regulates - namely, the largest ones out there. Smaller models are exempt from enforcement, which he said is a serious policy gap. "By focusing only on the most expensive and largest-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology," Newsom said. "Smaller, specialized models may emerge as equally or even more dangerous than models targeted by SB 1047 ... Adaptability is critical as we race to regulate a technology still in its infancy." Newsom is also concerned that the bill failed to account for where an AI system was deployed, whether it was expected to make critical decisions, or how systems used sensitive data. "Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it," he said. "I do not believe this is the best approach to protecting the public from real threats posed by the technology."
Thanks, but go back to the drawing board and try again, in other words - legislators and lobbyists alike. The proposed law, which passed the state senate and assembly, is considered controversial because, while it had its supporters, it was also fought by AI makers and federal-level politicians who basically thought it was just a bad bill. In the end, the wording of the legislation was amended following feedback from Anthropic - a startup built by former OpenAI staff with a focus on the safe use of machine learning - and others, before being handed to the governor to sign, and he refused. Newsom has previously stated that he was worried about how SB 1047 and other potential large-scale AI regulation bills would affect the continued presence of AI companies in California, a concern he raises again in the veto statement. Newsom's letter makes it clear he wants AI innovation to remain in the Golden State, but also that he desires a sweeping AI safety bill along the lines of SB 1047. Dean Ball, a research fellow at free-market think-tank the Mercatus Center, told The Register that Newsom's veto was the right move for the reasons the governor gave. "The size thresholds the bill used are already going out of date," Ball said. "[They're] almost certainly below the bill's threshold yet undoubtedly have 'frontier' capabilities." California state senator Scott Wiener (D-11th district), the author of the bill, described Newsom's veto in a post on X as a "setback for everyone who believes in oversight of massive corporations." "This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers," Wiener said. "This veto is a missed opportunity to once again lead on innovative tech regulation ... and we are all less safe as a result."
Ball, on the other hand, doesn't seem to see things as so final, opining that California legislators will likely take action on a similar bill in the next session - one that could pass. "This is only chapter one in what will be a long story," Ball said. ®
[3]
Gavin Newsom vetoes California's contentious AI safety bill
By Shirin Ghaffary, Bloomberg News (Tribune Content Agency)
California Governor Gavin Newsom has vetoed a contentious artificial intelligence safety bill that would have required companies to make sure their AI models don't cause major harm. The bill, called SB 1047, was poised to be one of the most consequential pieces of AI regulation in the U.S., given California's central position in the tech ecosystem and the lack of federal legislation for artificial intelligence. In recent weeks, the bill had divided the AI industry and drawn significant criticism from some tech leaders and prominent Democrats. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," the governor wrote in a statement on Sunday. "Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." In a letter explaining his decision to veto the bill, Newsom wrote that he agrees with proponents of the bill that "we cannot afford to wait for a major catastrophe to occur before taking action to protect the public," but that any regulation "must be based on empirical evidence and science." The governor pointed to his executive order on AI as well as several other bills he has signed in recent weeks that regulate the technology around "specific known risks" such as deepfakes.
'Reasonable care'
SB 1047 mandated that companies developing powerful AI models take "reasonable care" to ensure that their technologies don't cause "severe harm" such as mass casualties or property damage above $500 million.
The bill, which was introduced by Democratic state Senator Scott Wiener and passed the state Senate in May, would have required companies to take precautions such as implementing a kill switch that could turn off their technology at any time. It also called for AI models to be submitted to third-party testing to ensure they are minimizing grave risk. Additionally, the bill would have created whistleblower protections for employees at AI companies that want to share safety concerns. Companies that weren't in compliance with the bill could have been sued by the California attorney general. Supporters argued that SB 1047 would create common-sense legal standards to hold AI companies accountable. But venture capitalists, startup leaders and companies like OpenAI said the bill could hurt innovation and perhaps even drive AI companies out of the state. Critics of SB 1047, including OpenAI, tech incubator Y Combinator and VC firm Andreessen Horowitz, have registered lobbyists working on the bill. "The AI revolution is only just beginning, and California's unique status as the global leader in AI is fueling the state's economic dynamism," Jason Kwon, chief strategy officer at OpenAI, wrote in a letter last month opposing the legislation. "SB 1047 would threaten that growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere."
Lawmakers opposed
Lawmakers including former House Speaker Nancy Pelosi, Representative Ro Khanna and San Francisco Mayor London Breed also voiced their opposition, echoing concerns from the tech industry that the bill could impede California's leadership in AI innovation. Newsom recently said he was concerned the bill might have a "chilling effect" on AI development. The bill had also earned backing from some notable names in tech late last month in the days leading up to its passage by California's legislature.
Elon Musk unexpectedly voiced his support, even though he said it's a "tough call and will make some people upset." OpenAI rival Anthropic, which has a reputation for being safety-oriented, said the bill's "benefits likely outweigh its costs," though the company said some aspects of SB 1047 remained "concerning or ambiguous to us." Wiener had defended the bill, stressing that its provisions only apply to companies that spend more than $100 million on training large models or $10 million fine-tuning models, which would exempt most smaller startups. The lawmaker has also said that while he would support federal legislation, Congress has been historically slow to regulate tech and that in the absence of national action, he believes the state has a responsibility to lead. Along with his SB 1047 veto announcement, Newsom said he will commission an analysis of leading AI models' risks and capabilities, led by several academics, including AI scholar and entrepreneur Fei-Fei Li. The governor also signed a bill on Sunday, SB 896, that regulates how state agencies use AI and directs several agencies to produce a report on the risks and benefits of the technology.
[4]
California governor vetoes contentious AI safety bill
California Governor Gavin Newsom on Sunday vetoed a hotly contested artificial intelligence safety bill, after the tech industry raised objections, saying it could drive AI companies from the state and hinder innovation. Newsom said he had asked leading experts on Generative AI to help California "develop workable guardrails" that focus "on developing an empirical, science-based trajectory analysis." He also ordered state agencies to expand their assessment of the risks from potential catastrophic events tied to AI use. Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans and have catastrophic effects. The bill's author, Democratic State Senator Scott Wiener, said legislation was necessary to protect the public before advances in AI become either unwieldy or uncontrollable. The AI industry is growing fast in California and some leaders questioned the future of these companies in the state if the bill became law. Wiener said Sunday the veto makes California less safe and means "companies aiming to create an extremely powerful technology face no binding restrictions." He added, "voluntary commitments from industry are not enforceable and rarely work out well for the public." "We cannot afford to wait for a major catastrophe to occur before taking action to protect the public," Newsom said, but added he did not agree "we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities." Newsom said he will work with the legislature on AI legislation during its next session. The veto comes as legislation in U.S. Congress to set safeguards has stalled and the Biden administration is advancing regulatory AI oversight proposals. Newsom said "a California-only approach may well be warranted - especially absent federal action by Congress."
Chamber of Progress, a tech industry coalition, praised Newsom's veto saying "the California tech economy has always thrived on competition and openness." Among other things, the measure would have mandated safety testing for many of the most advanced AI models that cost more than $100 million to develop or those that require a defined amount of computing power. Developers of AI software operating in the state would have also needed to outline methods for turning off the AI models, effectively a kill switch. The bill would have established a state entity to oversee the development of so-called "Frontier Models" that exceed the capabilities present in the most advanced existing models. The bill faced strong opposition from a wide range of groups. Alphabet's Google, Microsoft-backed OpenAI and Meta Platforms, all of which are developing generative AI models, had expressed their concerns about the proposal. Some Democrats in U.S. Congress, including Representative Nancy Pelosi, also opposed it. Proponents included Tesla CEO Elon Musk, who also runs an AI firm called xAI. Amazon-backed Anthropic said the benefits of the bill likely outweigh the costs, though it added there were still some aspects that seem concerning or ambiguous. Newsom separately signed legislation requiring the state to assess potential threats posed by Generative AI to California's critical infrastructure. The state is analyzing energy infrastructure risks and previously convened power sector providers and will undertake the same risk assessment with water infrastructure providers in the coming year and later the communications sector, Newsom said.
[5]
Gavin Newsom Blocks Contentious AI Safety Bill in California
California Governor Gavin Newsom has vetoed what would have become one of the most comprehensive policies governing the safety of artificial intelligence in the U.S. The bill would've been among the first to hold AI developers accountable for any severe harm caused by their technologies. It drew fierce criticism from some prominent Democrats and major tech firms, including ChatGPT creator OpenAI and venture capital firm Andreessen Horowitz, who warned it could stall innovation in the state. Newsom described the legislation as "well-intentioned" but said in a statement that it would've applied "stringent standards to even the most basic functions." Regulation should be based on "empirical evidence and science," he said, pointing to his own executive order on AI and other bills he's signed that regulate the technology around known risks such as deepfakes. The debate around California's SB 1047 bill highlights the challenge that lawmakers around the world are facing in controlling the risks of AI while also supporting the emerging technology. U.S. policymakers have yet to pass any comprehensive legislation around the technology since the release of ChatGPT two years ago touched off a global generative AI boom. Democratic California Senator Scott Wiener, who introduced the bill, called Newsom's veto a "setback for everyone who believes in oversight of massive corporations." In a statement posted on X, Wiener said, "We are all less safe as a result." SB 1047 would've mandated that companies developing powerful AI models take reasonable care to ensure that their technologies wouldn't cause "severe harm" such as mass casualties or property damage above $500 million. Companies would've had to take specific precautions, including maintaining a kill switch that could turn off their technology. AI models would've also been subject to third-party testing to ensure they minimized grave risk.
The bill would've also created whistleblower protections for employees at AI companies that want to share safety concerns. Companies that weren't in compliance with the bill could have been sued by the California attorney general. Supporters of the legislation said it would've created common-sense legal standards. But VC investors, startup leaders and companies like OpenAI warned that it would slow innovation and drive AI companies out of the state. "The AI revolution is only just beginning, and California's unique status as the global leader in AI is fueling the state's economic dynamism," Jason Kwon, chief strategy officer at OpenAI, wrote in a letter last month opposing the legislation. "SB 1047 would threaten that growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere." Lawmakers including former House Speaker Nancy Pelosi, Representative Ro Khanna and San Francisco Mayor London Breed also voiced their opposition, echoing concerns from the tech industry that the bill could impede California's leadership in AI innovation. Newsom recently said he was concerned the bill might have a "chilling effect" on AI development. The bill had earned backing from some notable names in tech late last month in the days leading up to its passage by California's legislature. Elon Musk unexpectedly voiced his support, even though he said it's a "tough call and will make some people upset." OpenAI rival Anthropic, which has a reputation for being safety-oriented, said the bill's "benefits likely outweigh its costs," though the company said some aspects remained "concerning or ambiguous to us." Wiener had defended the bill, stressing that its provisions only apply to companies that spend more than $100 million on training large models or $10 million fine-tuning models, limits that would exempt most smaller startups.
The lawmaker had also noted that Congress has been historically slow to regulate tech itself. In announcing his veto, Newsom said he will consult with outside experts, including AI scholar and entrepreneur Fei-Fei Li, to "develop workable guardrails" on the technology and continue working with the state legislature on the topic. The governor also signed a bill on Sunday, SB 896, that regulates how state agencies use AI.
[6]
California Gov. Gavin Newsom vetoes contentious AI safety bill that...
SACRAMENTO, Calif. -- California Gov. Gavin Newsom vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models Sunday. The decision is a major blow to efforts attempting to rein in the homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said. Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal "can have a chilling effect on the industry." The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom said in a statement. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal. The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state's electric grid or help build chemical weapons. 
Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. It also would have provided whistleblower protections to workers. The bill's author, Democratic state Sen. Scott Wiener, called the veto "a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and the welfare of the public and the future of the planet." "The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public," Wiener said in a statement Sunday afternoon. Wiener said the debate around the bill has dramatically advanced the issue of AI safety, and that he would continue pressing that point. The legislation is among a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take action this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance. Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some levels of transparency and accountability around large-scale AI models, as developers and experts say they still don't have a full understanding of how AI models behave and why. The bill targeted systems that require more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year. "This is because of the massive investment scale-up within the industry," said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company's disregard for AI risks.
"This is a crazy amount of power to have any private company control unaccountably, and it's also incredibly risky." The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn't as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said. A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those commitments, said the measure's supporters. But critics, including former U.S. House Speaker Nancy Pelosi, argued that the bill would "kill California tech" and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said.
[7]
California Gov. Gavin Newsom vetoes first-in-nation AI safety bill
California Gov. Gavin Newsom on Sunday vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models. The decision is a major blow to efforts attempting to rein in the homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said. Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal "can have a chilling effect on the industry." The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom said in a statement. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal. The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state's electric grid or help build chemical weapons. Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. 
It also would have provided whistleblower protections to workers. The legislation is among a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take action this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance. Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some levels of transparency and accountability around large-scale AI models, as developers and experts say they still don't have a full understanding of how AI models behave and why. The bill targeted systems that require more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year. "This is because of the massive investment scale-up within the industry," said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company's disregard for AI risks. "This is a crazy amount of power to have any private company control unaccountably, and it's also incredibly risky." The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn't as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said. A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those commitments, said the measure's supporters. But critics, including former U.S. House Speaker Nancy Pelosi, argued that the bill would "kill California tech" and stifle innovation.
It would have discouraged AI developers from investing in large models or sharing open-source software, they said. Newsom's decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations. Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline last month. The bills would have required AI developers to label AI-generated content and banned discrimination by AI tools used to make employment decisions.
[8]
California governor vetoes bill to create first-in-nation AI safety measures
California Governor Gavin Newsom vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models Sunday. The decision is a major blow to efforts attempting to rein in the homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said. Earlier in September, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal "can have a chilling effect on the industry." The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom said in a statement. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal. The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state's electric grid or help build chemical weapons. 
Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. It also would have provided whistleblower protections to workers. The legislation is among a host of bills passed by the legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take action this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance. Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some levels of transparency and accountability around large-scale AI models, as developers and experts say they still don't have a full understanding of how AI models behave and why. The bill targeted systems that require more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year. "This is because of the massive investment scale-up within the industry," said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company's disregard for AI risks. "This is a crazy amount of power to have any private company control unaccountably, and it's also incredibly risky." The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn't as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said. A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those commitments, said the measure's supporters. But critics, including former U.S. 
House Speaker Nancy Pelosi, argued that the bill would "kill California tech" and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said. Newsom's decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations. Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline in August. The bills would have required AI developers to label AI-generated content and banned discrimination by AI tools used to make employment decisions. The governor said earlier this summer he wanted to protect California's status as a global leader in AI, noting that 32 of the world's top 50 AI companies are located in the state. He has promoted California as an early adopter; the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices. Earlier in September, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use. But even with Newsom's veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals. "They are going to potentially either copy it or do something similar next legislative session," Rice said. "So it's not going away."
[9]
California governor vetoes bill to create first-in-nation AI safety measures
SACRAMENTO, Calif. -- California Gov. Gavin Newsom vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models Sunday. The decision is a major blow to efforts attempting to rein in the homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said. Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal "can have a chilling effect on the industry." The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom said in a statement. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal. The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state's electric grid or help build chemical weapons. 
Experts say those scenarios could become possible as the industry continues to advance rapidly. It also would have provided whistleblower protections to workers. The legislation is among a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take action this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance. Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some level of transparency and accountability around large-scale AI models, as developers and experts say they still don't have a full understanding of how AI models behave and why. The bill targeted systems that require more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year. "This is because of the massive investment scale-up within the industry," said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company's disregard for AI risks. "This is a crazy amount of power to have any private company control unaccountably, and it's also incredibly risky." The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn't as comprehensive as regulations in Europe, but supporters said it would have been a good first step toward setting guardrails around a rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias. A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those commitments, the measure's supporters said. But critics, including former U.S. 
House Speaker Nancy Pelosi, argued that the bill would "kill California tech" and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said. Newsom's decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to dissuade the governor and lawmakers from advancing AI regulations. Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline last month. The bills would have required AI developers to label AI-generated content and banned discrimination from AI tools used to make employment decisions. The governor said earlier this summer he wanted to protect California's status as a global leader in AI, noting that 32 of the world's top 50 AI companies are located in the state. He has promoted California as an early adopter, noting the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices. Earlier this month, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use. But even with Newsom's veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals. "They are going to potentially either copy it or do something similar next legislative session," Rice said. "So it's not going away." 
-- - The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP's text archives.
[10]
Gov. Newsom vetoes California's controversial AI bill, SB 1047 | TechCrunch
California Governor Gavin Newsom has vetoed SB 1047, a high-profile bill that would have regulated the development of AI. The bill was authored by State Senator Scott Wiener and would have made companies that develop the largest AI models liable for implementing safety protocols to prevent "critical harms." It was opposed by many in Silicon Valley, including companies like OpenAI, high-profile technologists like Meta's chief AI scientist Yann LeCun, and even Democratic politicians such as U.S. Congressman Ro Khanna. While California's state legislature passed SB 1047, opponents were holding out hope that Newsom might veto it -- and indeed, he'd already indicated that he had reservations about the bill. In a statement about today's veto, Newsom said, "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." In the same announcement, Newsom's office noted that he's signed 17 bills around the regulation and deployment of AI technology in the last 30 days, and it said he's asked experts such as Fei-Fei Li, Tino Cuéllar, and Jennifer Tour Chayes to "help California develop workable guardrails for deploying GenAI." (Known as the "godmother of AI," Li had previously said SB 1047 would "harm our budding AI ecosystem.") Wiener, meanwhile, published a statement describing the veto as "a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet." He also claimed that the debate around the bill "has dramatically advanced the issue of AI safety on the international stage."
[11]
California governor vetoes controversial AI safety bill
Newsom says SB-1047 ignored "smaller, specialized models" and curtailed innovation. California Governor Gavin Newsom has vetoed SB-1047, a controversial artificial intelligence regulation that would have required the makers of large AI models to implement safety tests and kill switches to prevent potential "critical harms." In a statement announcing the veto on Sunday evening, Newsom suggested the bill's specific interest in model size was misplaced. "By focusing only on the most expensive and large-scale models, SB-1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology," Newsom wrote. "Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB-1047 -- at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good." Newsom mentioned specific "rapidly evolving risks" from AI models that could be regulated in a more targeted way, such as "threats to our democratic process, the spread of misinformation and deepfakes, risks to online privacy, threats to critical infrastructure, and disruptions in the workforce." California already has a number of AI laws on the books targeting some of these potential harms, and many other states have enacted similar laws. "While well-intentioned, SB-1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom continued in explaining the veto. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." 
State Senator Scott Wiener, who co-authored the bill, called Newsom's veto "a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet" in a social media post. Voluntary commitments to safety from AI companies are not enough, Wiener argued, adding that the lack of effective government regulation means "we are all less safe as a result" of the veto. A hard-fought lobbying battle: SB-1047, which passed the state Assembly in August, had the support of many luminaries in the AI field, including Geoffrey Hinton and Yoshua Bengio. But others in and around the industry criticized the bill's heavy-handed approach and worried about the legal liability it could have imposed on open-weight models that were used by others for harmful purposes. Shortly after the bill was passed, a group of California business leaders sent an open letter to Governor Newsom urging him to veto what they called a "fundamentally flawed" bill that "regulates model development instead of misuse" and which they said would "introduce burdensome compliance costs." Lobbyists for major tech companies including Google and Meta also publicly opposed the bill, though a group of employees from those and other large tech companies came out in favor of its passage. OpenAI Chief Strategy Officer Jason Kwon publicly urged an SB-1047 veto, saying in an open letter that federal regulation would be more appropriate and effective than "a patchwork of state laws." Early attempts to craft such federal legislation have stalled in Congress amid the release of some anodyne policy road maps and working group reports. xAI leader Elon Musk advocated for the bill's passage, saying it was "a tough call" but that in the end, AI needs to be regulated "just as we regulate any product/technology that is a potential risk to the public." 
California's powerful actors' union SAG-AFTRA also came out in support of the bill, calling it a "first step" to protecting against known dangers like deepfakes and nonconsensual voice and likeness use for its members. At the 2024 Dreamforce conference earlier this month, Newsom publicly addressed "the sort of outsized impact that legislation [like SB-1047] could have, and the chilling effect, particularly in the open source community... I can't solve for everything. What can we solve for?"
[12]
California's controversial AI safety bill was just vetoed by Gov. Newsom
California legislation aiming to limit the most existential threats of artificial intelligence was vetoed by California Gov. Gavin Newsom today, following fierce debate within the tech community about whether its requirements would save human lives or crush AI advancement. The bill would have set a de facto national standard for AI safety in the absence of federal law, and the potential for its passage set off an intense lobbying campaign by the tech industry to defeat it. The bill, known as SB 1047, would have required companies building large-scale AI models -- meaning those that cost more than $100 million to train -- to run safety tests on those systems and take steps to limit any risks that they identify in the process. Newsom said the bill was too broad and instead proposed a task force of researchers, led by the "godmother of AI" Fei-Fei Li, to come up with new guardrails for future legislation. Li, whose AI startup World Labs has raised $230 million, was a vocal opponent of SB 1047. She wrote in a Fortune op-ed that, while well-intended, the legislation would harm U.S. innovation. The governor called for new language in a future bill that focuses only on AI systems deployed in high-risk environments that are used to make critical decisions or use sensitive data. In his veto of the recent legislation, Newsom wrote: "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it." He also noted that 32 of the world's 50 largest AI companies are based in his state while calling attention to 17 bills addressing AI that he did sign in recent weeks, from measures curbing election deepfakes to requiring AI watermarking. The safety guardrails proposed by SB 1047 sparked months of debate within the tech community about whether the bill would push AI innovation out of California or curb major threats posed by rapid unchecked advancement, like the escalation of nuclear war or development of bioweapons. 
Earlier this month a YouGov poll found nearly 80% of voters nationally supported the California AI safety bill. Just last week, more than a hundred Hollywood stars including Shonda Rhimes and Mark Hamill came out in support of the AI safety bill, building on the actors' guild SAG-AFTRA's successful lobbying for protection from AI-generated copycats of their work. ChatGPT developer OpenAI urged Newsom not to sign the legislation, arguing that AI regulation should be left to the federal government to avoid a confusing patchwork of laws that vary by state. And while Silicon Valley remains a breeding ground for AI innovation, OpenAI said California risked driving businesses from the state to avoid burdensome regulation if SB 1047 became law. The legislation also faced pushback from some Californian members of Congress, including political power broker Rep. Nancy Pelosi, who urged Newsom not to sign the bill in order to protect AI innovation. State Sen. Scott Wiener, the sponsor of the legislation, argued that his proposal was a common-sense and "light-touch" measure that aligns with the voluntary safety commitments many tech companies, including OpenAI, have already made. And in the absence of comprehensive federal AI rules, the California lawmaker saw his proposal as an opportunity for California to lead in U.S. tech policy, just as it has previously on data privacy, net neutrality, and social media regulation. "This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way," Wiener said in a statement today. Tesla CEO Elon Musk was also among the legislation's supporters, posting on X that "all things considered," California should enact an AI safety law. 
Musk, whose xAI develops the Grok chatbot, said the endorsement was a "tough call" but that AI should ultimately be regulated just like any other technology that poses a public risk. Anthropic, the buzzy AI startup that pitches itself as safety-focused, was heavily involved in making sure that the final version of SB 1047 didn't add overly burdensome legal obligations for developers, leading to amendments clarifying that an AI company wouldn't be punished unless its model harms the public. Anthropic declined to explicitly support or oppose the final proposal in its letter to Newsom but did say the state should create some AI regulatory framework, especially in the absence of any clear action from federal lawmakers.
[13]
California Governor Newsom Vetoes AI Safety Bill that Divided Silicon Valley - Decrypt
On Sunday, California Governor Gavin Newsom vetoed Senate Bill 1047 (SB 1047), a proposal that aimed to establish new safety standards for artificial intelligence systems. Although the bill was touted as a potential model for future AI regulation, Newsom argued it could stifle innovation in California's tech sector. "Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance," Newsom wrote. Newsom noted that SB 1047's focus on large-scale AI models -- those costing over $100 million -- might leave smaller yet equally risky models outside its purview. The bill, authored by State Senator Scott Wiener, sought to impose safety protocols for developers of large AI models and establish a Board of Frontier Models to oversee compliance. "By focusing only on the most expensive and large-scale models, SB 1047 creates a framework that could give the public a false sense of security about controlling this fast-moving technology," Newsom added. SB 1047 garnered support from tech safety advocates, including Elon Musk, who called for its passage last month. "For over 20 years, I have been an advocate for AI regulation, just as we regulate any product or technology that poses a risk to the public," Musk said. Musk's call was backed by AI luminaries Geoffrey Hinton and Yoshua Bengio, along with more than 125 Hollywood figures, who signed an open letter urging Newsom to approve the bill. However, the bill faced resistance from major tech players and venture capitalists who argued the regulations could curb innovation and drive talent away from California. OpenAI, Meta, and Google were among the opponents, preferring a federal approach to regulation. Newsom echoed these concerns, suggesting a more nuanced, evidence-based approach. "A California-only approach may well be warranted -- especially absent federal action by Congress -- but it must be grounded in empirical evidence and science," he wrote. 
Senator Wiener voiced disappointment with the veto, warning that without regulation, AI companies would continue to self-police without enforceable safety standards. While Newsom has signed other AI-related bills, including measures to combat deepfakes in elections and protect actors' likenesses from being replicated by AI without consent, his rejection of SB 1047 underscores the challenge of balancing innovation with oversight. Newsom pledged to work with experts, lawmakers, and federal partners to develop future AI regulations, promising to "find the appropriate path forward."
[14]
California governor vetoes contentious AI safety bill
California Governor Gavin Newsom vetoed an AI safety bill after tech industry objections, citing concerns it could stifle innovation. He emphasized the need for empirical analysis and workable guardrails. The bill's author argued that the veto leaves California less safe, as voluntary commitments from the industry are not enforceable. California Governor Gavin Newsom on Sunday vetoed a hotly contested artificial intelligence safety bill after the tech industry raised objections, saying it could drive AI companies from the state and hinder innovation. Newsom said the bill "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data" and would apply "stringent standards to even the most basic functions - so long as a large system deploys it." Newsom said he had asked leading experts on generative AI to help California "develop workable guardrails" that focus "on developing an empirical, science-based trajectory analysis." He also ordered state agencies to expand their assessment of the risks from potential catastrophic events tied to AI use. Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans with catastrophic effects. The bill's author, Democratic State Senator Scott Wiener, said legislation was necessary to protect the public before advances in AI become either unwieldy or uncontrollable. The AI industry is growing fast in California and some leaders questioned the future of these companies in the state if the bill became law. Wiener said Sunday the veto makes California less safe and means "companies aiming to create an extremely powerful technology face no binding restrictions." He added that "voluntary commitments from industry are not enforceable and rarely work out well for the public." 
"We cannot afford to wait for a major catastrophe to occur before taking action to protect the public," Newsom said, but added he did not agree "we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities." Newsom said he will work with the legislature on AI legislation during its next session. The veto comes as legislation in the U.S. Congress to set safeguards has stalled and the Biden administration advances regulatory AI oversight proposals. Newsom said "a California-only approach may well be warranted - especially absent federal action by Congress." Chamber of Progress, a tech industry coalition, praised Newsom's veto, saying "the California tech economy has always thrived on competition and openness." Among other things, the measure would have mandated safety testing for many of the most advanced AI models that cost more than $100 million to develop or those that require a defined amount of computing power. Developers of AI software operating in the state would have also needed to outline methods for turning off the AI models, effectively a kill switch. The bill would have established a state entity to oversee the development of so-called "Frontier Models" that exceed the capabilities present in the most advanced existing models. The bill faced strong opposition from a wide range of groups. Alphabet's Google, Microsoft-backed OpenAI and Meta Platforms, all of which are developing generative AI models, had expressed their concerns about the proposal. Some Democrats in U.S. Congress, including Representative Nancy Pelosi, also opposed it. Proponents included Tesla CEO Elon Musk, who also runs an AI firm called xAI. Amazon-backed Anthropic said the benefits of the bill likely outweigh the costs, though it added there were still some aspects that seem concerning or ambiguous. 
Newsom separately signed legislation requiring the state to assess potential threats posed by Generative AI to California's critical infrastructure. The state is analyzing energy infrastructure risks and previously convened power sector providers and will undertake the same risk assessment with water infrastructure providers in the coming year and later the communications sector, Newsom said.
[15]
California Governor Vetoes AI Safety Bill for Only Targeting Large Models
California Governor Gavin Newsom on Sunday vetoed an AI safety bill that targeted OpenAI, Google, and the many large AI companies based in the region. The Golden State is home to 32 of the top 50 AI companies, Newsom says in his veto message, citing a Forbes list. Focusing on companies that have reached a certain size could stifle innovation while leaving the threats presented by smaller models unaddressed, he says. "By focusing only on the most expensive and large-scale models, [bill] SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology," Newsom says. "Smaller, specialized models may emerge as equally or even more dangerous." The bill would have required large companies to "perform basic safety testing on massively powerful AI models," says California State Sen. Scott Wiener, the bill's sponsor. Startups were not covered by the bill. Newsom argues the bill does not "keep pace with the technology [and is] not informed by an empirical trajectory analysis of AI systems and capabilities." Wiener says it was "crafted by some of the leading AI minds on the planet" and notes that it's "supported by both of the top two most cited AI researchers of all time: the 'Godfathers of AI,' Geoffrey Hinton and Yoshua Bengio." Plus, "tech workers are even more likely than members of the general public to support the bill." "This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way," Wiener adds. Newsom suggested he may sign a revised version of the bill and called for safety protocols before a "major catastrophe" occurs with an AI system getting out of control. 
He convened a new council of experts to shape the state's approach to AI guardrails. However, advocates for the bill expressed disappointment and frustration about going back to the drawing board. "The race to establish comprehensive AI standards is now wide open, with California unexpectedly taking a back seat," Dr. Jeanne Eicks, associate dean at The Colleges of Law, tells PCMag. Newsom signed a slew of other AI bills this month, but SB 1047 was the most high-profile and contentious. It had the support of Elon Musk, Anthropic CEO Dario Amodei, and some Hollywood stars. But Google and Meta lobbied against it, NPR reports. OpenAI also opposed it, along with several venture capital firms, including Andreessen Horowitz. Rep. Nancy Pelosi, who represents San Francisco, also did not support the bill, citing concerns about stifling innovation.
[16]
California governor vetoes controversial AI safety bill
Governor Newsom has enlisted expert assistance to help the state 'develop workable guardrails for deploying GenAI'. California governor Gavin Newsom vetoed the state's controversial AI safety law known as 'Senate Bill 1047' yesterday (29 September). Newsom said that he did not think the legislation would be the best approach to protect the public from threats posed by AI. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it." Mired in controversy since its introduction earlier this year, the bill was opposed by many - including politician Nancy Pelosi, who called the bill "well-intentioned but ill-informed"; Silicon Valley heavyweights such as OpenAI, which argued for a federal bill rather than a state one; accelerator Y Combinator, which signed a letter along with around 140 start-ups stating that the bill could "threaten the vibrancy of California's technology economy"; and AI start-up Anthropic, which made suggestions that led to amendments to the bill. Introduced earlier this year by state senator Scott Wiener, the bill aimed to ensure the safe development of AI systems by putting more responsibilities on developers. With the intention of safeguarding public safety and security, the bill would have forced developers of large "frontier" AI models to take precautions such as safety testing, implementing safeguards to prevent misuse and post-deployment monitoring. One safeguarding measure was an "emergency stop" button that shuts down the model. The bill was intended to apply only to large AI models that cost at least $100m to develop. Instead of signing SB 1047, governor Newsom announced that he has enlisted experts who will "help California develop workable guardrails for deploying GenAI". 
The team of experts includes the 'godmother of AI' Dr Fei-Fei Li; Tino Cuéllar, a member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research; and Jennifer Tour Chayes, dean of the College of Computing, Data Science and Society at UC Berkeley. The California assembly has been active in introducing protective legislation regarding AI. Just this month, governor Newsom considered 38 AI-related bills and signed 18 of them.
[17]
California governor vetoes bill to create first-in-nation AI safety measures
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models Sunday. The decision is a major blow to efforts attempting to rein in the homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said. Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal "can have a chilling effect on the industry." The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom said in a statement. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal. The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state's electric grid or help build chemical weapons. 
Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. It also would have provided whistleblower protections to workers. The legislation is among a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take actions this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance. Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some levels of transparency and accountability around large-scale AI models, as developers and experts say they still don't have a full understanding of how AI models behave and why. The bill targeted systems that require more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year. "This is because of the massive investment scale-up within the industry," said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company's disregard for AI risks. "This is a crazy amount of power to have any private company control unaccountably, and it's also incredibly risky." The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn't as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said. A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated AI developers to follow requirements similar to those commitments, said the measure's supporters. But critics, including former U.S. 
House Speaker Nancy Pelosi, argued that the bill would "kill California tech" and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said. Newsom's decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to dissuade the governor and lawmakers from advancing AI regulations. Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline last month. The bills would have required AI developers to label AI-generated content and banned discrimination by AI tools used to make employment decisions. The governor said earlier this summer he wanted to protect California's status as a global leader in AI, noting that 32 of the world's top 50 AI companies are located in the state. He has promoted California as an early adopter, noting the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices. Earlier this month, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use. But even with Newsom's veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals. "They are going to potentially either copy it or do something similar next legislative session," Rice said. "So it's not going away." 
The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP's text archives.
[18]
California Governor Vetoes Sweeping A.I. Legislation
Gov. Gavin Newsom on Sunday vetoed a California artificial intelligence safety bill, blocking the most ambitious proposal in the nation aimed at curtailing the growth of the new technology. The first-of-its-kind bill, S.B. 1047, required safety testing of large A.I. systems, or models, before their release to the public. It also gave the state's attorney general the right to sue companies over serious harm caused by their technologies, like death or property damage. And it mandated a kill switch to turn off A.I. systems in case of potential biowarfare, mass casualties or property damage. Mr. Newsom said that the bill was flawed because it focused too much on regulating the biggest A.I. systems, known as frontier models, without considering potential risks and harms from the technology. He said that legislators should go back to rewrite it for the next session. "I do not believe this is the best approach to protecting the public from real threats posed by the technology," Mr. Newsom said in a statement. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it." The decision to kill the bill is expected to set off fierce criticism from some tech experts and academics who have pushed for the legislation. Governor Newsom, a Democrat, had faced strong pressure to veto the bill, which became embroiled in a fierce national debate over how to regulate A.I. A flurry of lobbyists descended on his office in recent weeks, some promoting the technology's potential for great benefits. Others warned of its potential to cause irreparable harm to humanity. California was poised to become a standard-bearer for regulating a technology that has exploded into public consciousness with the release of chatbots and realistic image and video generators in recent years. 
In the absence of federal legislation, California's Legislature took an aggressive approach to reining in the technology with its proposal, which both houses passed nearly unanimously. While lawmakers and regulators globally have sounded the alarm over the technology, few have taken action. Congress has held hearings, but no legislation has made meaningful progress. The European Union passed the A.I. Act, which restricts the use of riskier technology like facial recognition software.
[19]
California Governor Gavin Newsom vetoes controversial bill on AI safety
Driving the news: Newsom said in returning Senate Bill 1047 without his signature that while SB 1047 was "well-intentioned," it didn't take into account "whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data." What they're saying: Google in an emailed statement Sunday thanked Newsom "for helping California continue to lead in building responsible AI tools" and said it looked forward to "working with the Governor's responsible AI initiative and the federal government on creating appropriate safeguards and developing tools that help everyone." The other side: Scott Wiener, a state senator from San Francisco who authored the bill in California's Senate, said in a statement Sunday the veto represented a "missed opportunity for California to once again lead on innovative tech regulation -- just as we did around data privacy and net neutrality -- and we are all less safe as a result."
[20]
California Gov. Newsom vetoes bill SB 1047 that aims to prevent AI disasters
Newsom called the bill 'well-intentioned,' but said, 'I do not believe this is the best approach.' California Gov. Gavin Newsom has vetoed bill SB 1047, which aims to prevent bad actors from using AI to cause "critical harm" to humans. The California state assembly passed the legislation by a margin of 41-9 on August 28, but several organizations including the Chamber of Commerce had urged Newsom to veto the bill. In his veto message on Sept. 29, Newsom said the bill is "well-intentioned" but "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it." SB 1047 would have made the developers of AI models liable for adopting safety protocols that would stop catastrophic uses of their technology. That includes preventive measures such as testing and outside risk assessment, as well as an "emergency stop" that would completely shut down the AI model. A first violation would cost a minimum of $10 million, with a minimum of $30 million for subsequent infractions. However, the bill was revised to eliminate the state attorney general's ability to sue AI companies for negligent practices if a catastrophic event does not occur. Companies would only be subject to injunctive relief and could be sued only if their model caused critical harm. The law would have applied to AI models that cost at least $100 million and use 10^26 FLOPS during training. It also would have covered derivative projects in instances where a third party has invested $10 million or more in developing or modifying the original model. Any company doing business in California would have been subject to the rules if it met the other requirements. Addressing the bill's focus on large-scale systems, Newsom said, "I do not believe this is the best approach to protecting the public from real threats posed by the technology." 
The earlier version of SB 1047 would have created a new department called the Frontier Model Division to oversee and enforce the rules. Instead, the bill was altered ahead of a committee vote to place governance in the hands of a Board of Frontier Models within the Government Operations Agency. The board's nine members would have been appointed by the state's governor and legislature. The bill faced a complicated path to the final vote. SB 1047 was authored by California State Sen. Scott Wiener, who told TechCrunch: "We have a history with technology of waiting for harms to happen, and then wringing our hands. Let's not wait for something bad to happen. Let's just get out ahead of it." Notable AI researchers Geoffrey Hinton and Yoshua Bengio backed the legislation, as did the Center for AI Safety, which has been raising the alarm about AI's risks over the past year. But SB 1047 also drew heavy-hitting opposition from across the tech space. Researcher Fei-Fei Li and Meta Chief AI Scientist Yann LeCun both critiqued the bill for limiting the potential to explore new uses of AI. The trade group representing tech giants such as Amazon, Apple and Google said SB 1047 would limit new developments in the state's tech sector. Venture capital firm Andreessen Horowitz and several startups also questioned whether the bill placed unnecessary financial burdens on AI innovators. Anthropic and other opponents of the original bill pushed for amendments that were adopted in the version of SB 1047 that passed California's Appropriations Committee on August 15.
[21]
California Gov. Newsom vetoes AI bill, considered strictest in nation
Gov. Gavin Newsom of California on Sunday vetoed a bill that would have enacted the nation's most far-reaching regulations on the booming artificial intelligence industry. California legislators overwhelmingly passed the bill, called SB 1047, which was seen as a potential blueprint for national AI legislation. The measure would have made tech companies legally liable for harms caused by AI models. In addition, the bill would have required tech companies to enable a "kill switch" for AI technology in the event the systems were misused or went rogue. It also would have forced the industry to conduct safety tests on "massively powerful AI models," according to California Senator Scott Wiener, the bill's co-author. "Each and every one of the large AI labs has promised to perform tests that SB 1047 requires them to do - the same safety tests that some are now claiming would somehow harm innovation," Wiener said. Indeed, many powerful players in Silicon Valley, including venture capital firm Andreessen Horowitz, OpenAI and trade groups representing Google and Meta, lobbied against the bill, arguing it would slow the development of AI and stifle growth for early-stage companies. "SB 1047 would threaten that growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere," OpenAI's Chief Strategy Officer Jason Kwon wrote in a letter sent last month to Wiener. Other tech leaders, however, backed the bill, including Elon Musk and pioneering AI scientists like Geoffrey Hinton and Yoshua Bengio, who signed a letter urging Newsom to sign it. "We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure. 
It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks," wrote Hinton and dozens of former and current employees of leading AI companies. Other states, like Colorado and Utah, have enacted laws more narrowly tailored to address how AI could perpetuate bias in employment and health-care decisions, as well as other AI-related consumer protection concerns. Newsom has recently signed other AI bills into law, including one to crack down on the spread of deepfakes during elections. Another protects actors against their likenesses being replicated by AI without their consent. As billions of dollars pour into the development of AI, and as it permeates more corners of everyday life, lawmakers in Washington still have not proposed a single piece of federal legislation to protect people from its potential harms, nor to provide oversight of its rapid development.
[22]
California Governor Vetoes AI Bill Aimed at Preventing Catastrophic Harms
California Governor Gavin Newsom said that by focusing only on the largest harms caused by the largest AI models, the bill would harm innovation and fail to keep up with the pace of technology. California Governor Gavin Newsom on Sunday vetoed a bill aimed at preventing large AI systems from causing catastrophic harms, saying the legislation would have created a "false sense of security." His decision came after weeks of deliberation and competing lobbying efforts from big tech firms, celebrities, billionaires, and the workers who build AI. The law, Senate Bill 1047, introduced by state senator Scott Wiener (D-San Francisco) back in May, would have required companies that spend more than $100 million on computing resources to create a foundation AI model, or $10 million on computing resources to fine-tune a foundation model, to perform safety tests, hire independent auditors to review the model annually and take "reasonable care" to ensure the model doesn't cause mass casualty incidents, more than $500 million in damage to physical or cyberinfrastructure, or act without human oversight to commit comparably serious crimes. It also instructed developers to build a kill switch into qualifying models that would allow them to be immediately shut off, and empowered the state's attorney general to sue a developer for violating the act and, in the most serious cases of harm, seek damages up to 10 percent of the cost of training the model. Newsom in recent weeks has signed a series of bills into law that address immediate, ongoing harms caused by AI systems, including bills that criminalize the creation of non-consensual deepfaked sexual imagery and require generative AI models to watermark their content so it's easier to identify. But SB 1047, which would have applied only to the wealthiest and most influential AI companies, has become the focal point of debate over AI regulation in recent months. 
Companies including Meta, Google, OpenAI, and Anthropic vehemently opposed the bill, saying it would undermine innovation and hurt small businesses, despite the rules applying only to corporations with hundreds of millions to spend on training AI systems. Meta, for example, said the bill would unfairly punish the developers of foundation models for disasters caused by downstream users and disincentivize the creation of open-source models because developers fear being held responsible for how others use their products. In his veto statement, Newsom pointed out that 32 of the world's 50 largest AI companies are based in California, and he echoed the industry's complaints that the legislation would harm innovation. "Adaptability is critical as we race to regulate a technology still in its infancy," Newsom wrote. "This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it." Despite the corporate lobbying, many in the AI industry believed the legislation was necessary. Dozens of AI researchers at leading AI companies called on Newsom in an open letter to sign the bill into law. "We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure," they wrote. "It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks." The tech workers were joined in their advocacy by some of the biggest names in Hollywood -- from J.J. Abrams and Ava DuVernay to Mark Hamill and Whoopi Goldberg -- who penned their own open letter in support of SB 1047. 
Meanwhile, Elon Musk made strange bedfellows with civil society groups like the Electronic Frontier Foundation in calling for its enactment, while Rep. Nancy Pelosi and other influential members of California's congressional delegation told Newsom to veto the law, calling it "well intentioned but ill-informed." Following Newsom's veto, Wiener, the bill's sponsor, said the decision was a "setback for everyone who believes in oversight of massive corporations." "The governor's veto message lists a range of criticisms of SB 1047: that the bill doesn't go far enough, yet goes too far; that the risks are urgent but we must move with caution," Wiener said in a statement. "SB 1047 was crafted by some of the leading AI minds on the planet, and any implication that it is not based in empirical evidence is patently absurd."
[23]
California governor vetoes controversial AI bill, setting back regulation push
Tech executives opposed the measure, which would have required companies to test the most powerful AI systems before release. California Gov. Gavin Newsom (D) vetoed a bill on Sunday that would have instituted the nation's strictest artificial intelligence regulations -- a major win for tech companies and venture capitalists who had lobbied fiercely against the law, and a setback for proponents of tougher AI regulation. The legislation, known as S.B. 1047, would have required companies to test the most powerful AI systems before release and held them liable if their technology was used to harm people, for example by helping plan a terrorist attack. Tech executives, investors and prominent California politicians, including Rep. Nancy Pelosi (D), had argued the bill would quash innovation by making it legally risky to develop new AI systems, since it could be difficult or impossible to test for all the potential harms of the multipurpose technology. Opponents also argued that those who used AI for harm -- not the developers -- should be penalized. The bill's proponents, including pioneering AI researchers Geoffrey Hinton and Yoshua Bengio, argued that the law only formalized commitments that tech companies had already made voluntarily. California state Sen. Scott Wiener, the Democrat who authored the law, said the state must act to fill the vacuum left by lawmakers in Washington, where no new AI regulations have passed despite vocal support for the idea. Hollywood also weighed in, with Star Wars director J.J. Abrams and "Moonlight" actor Mahershala Ali among more than 120 actors and producers who signed a letter this past week asking Newsom to sign the bill. California's AI bill had already been weakened several times by the state's legislature, and the law gained support from AI company Anthropic and X owner Elon Musk. But lobbyists from Meta, Google and major venture capital firms, as well as founders of many tech start-ups, still opposed it. 
At a tech conference in San Francisco earlier this month, Newsom said the measure had "created its own weather system," triggering an outpouring of emotional comments. He also noted that the California legislature had passed other AI bills that were more "surgical" than 1047. Newsom's veto came after he signed 17 other AI-related laws, which impose new restrictions on some of the same tech companies that opposed the law he blocked. The regulations include a ban on AI-generated images that seek to deceive voters in the months ahead of elections; a requirement that movie studios negotiate with actors for the right to use their likeness in AI-generated videos; and rules forcing AI companies to create digital "watermark" technology to make it easier to detect AI videos, images and audio. Newsom's veto is a major setback for the AI safety movement, a collection of researchers and advocates who believe smarter-than-human AI could be invented soon and argue that humanity must urgently prepare for that scenario. The group is closely connected to the effective altruism community, which has funded think tanks and fellowships on Capitol Hill to influence AI policy and been derided as a "cult" by some tech leaders, such as Meta's chief AI scientist Yann LeCun. Despite those concerns, the majority of AI executives, engineers and researchers are focused on the challenges of building and selling workable AI products, not the risk of the technology one day developing into an able assistant for terrorists or swinging elections with disinformation. This is a developing story. Andrea Jimenez contributed to this report.
[24]
California governor Gavin Newsom shoots down divisive AI safety bill SB 1047 - SiliconANGLE
California Governor Gavin Newsom shot down a sweeping bill that was proposed to impose safety vetting requirements on developers of the most powerful artificial intelligence models, taking the side of most of Silicon Valley and a number of leading Democrats. In a message explaining his decision, Newsom argued that the SB 1047 bill doesn't take into account whether AI systems are deployed in high-risk environments, use sensitive data or are involved in critical decision-making. "Instead, the bill applies stringent standards to even the most basic functions, so long as a large system deploys it," Newsom said in a statement explaining his veto decision. "I do not believe this is the best approach to protecting the public from real threats posed by the technology." The bill was seen as one of the most crucial pieces of legislation ever proposed for the AI industry, as it would have become a de facto standard for the vast majority of developers, given that most technology giants are headquartered in California. As such, Newsom faced intense lobbying from some of the biggest economic and political actors in the state, including prominent tech companies, Hollywood actors, venture capital firms and politicians such as former House Speaker Nancy Pelosi. The governor, who is known for his tech-friendly stance on most issues, had for months warned against imposing legislation that would restrict innovation in California's burgeoning AI industry, saying it could undermine the state's economic competitiveness. However, he has also acknowledged there must be some balance, with California's unique position as the home of many AI developers meaning it must take the lead on responsible regulation. Earlier this year, Newsom signed a less-sweeping bill that required the state's emergency response agencies to study the risks of AI. 
Alongside the veto, he announced a commitment to formulate less stringent legislation that would still implement effective guardrails for AI, saying he is working alongside AI industry luminaries such as Dr. Fei-Fei Li, a professor at Stanford University, to come up with this. The governor also promised he will work closely with organized labor and the private sector to expand on the possible workplace applications for AI. He said these initiatives would build on earlier projects with state agencies that have experimented with AI systems for managing road traffic and streamlining customer service for public benefits. The SB 1047 bill was originally proposed by Sen. Scott Wiener, a San Francisco Democrat, and would have required AI developers to certify their largest models had undergone extensive safety testing before being deployed, in order to protect people from potential risks, such as creating bioweapons. Proponents of the bill argued that it represented the best opportunity yet to regulate the fast-moving AI industry, where the biggest advances are being made by companies either based in, or with operations in California. Wiener and others had hoped that the state might lead the way on AI regulation in the face of what they perceive to be inaction from Congress. In a statement, Wiener said the veto means that AI developers building the most powerful systems can do so with no restrictions imposed on them by policymakers. "This veto is a missed opportunity for California to once again lead on innovative tech regulation," he said. "We are all less safe as a result." However, the bill was extremely divisive and attracted just as many opponents as supporters. The likes of Google LLC and OpenAI said the requirements would burden AI developers, especially those working at smaller startups, while venture capital firms hired lobbyists to fight the bill. 
On the other hand, a number of leading researchers and the tech industry's most influential and vocal entrepreneur, Elon Musk, supported the bill. Wiener's opponents also included a number of Democrats in Congress who represent areas in and around Silicon Valley, including Pelosi and Rep. Ro Khanna. He also faced opposition from San Francisco Mayor London Breed, who warned that SB 1047 would undermine the city's economy. Rep. Zoe Lofgren, the top Democrat on the House committee overseeing technology, also opposed the bill and personally lobbied lawmakers. Newsom's decision to veto the bill was not a surprise, as he has long been seen as an advocate of AI technology and maintains close ties with Silicon Valley. In the last year, he has embraced the use of AI in state government, notably partnering with Nvidia Corp. on the creation of AI training programs. However, Newsom hasn't always bowed to the AI industry. Earlier this year, he signed a bill backed by actors' unions to limit the use of digital likenesses that could replace entertainers and enacted a law that aims to crack down on so-called deepfakes that impersonate political candidates.
[25]
California governor vetoes expansive AI safety bill | Digital Trends
California Gov. Gavin Newsom has vetoed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, arguing in a letter to lawmakers that it "establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology." "I do not believe this is the best approach to protecting the public from real threats posed by the technology," he wrote. SB 1047 would have required "that a developer, before beginning to initially train a covered model ... comply with various requirements, including implementing the capability to promptly enact a full shutdown ... and implement a written and separate safety and security protocol." However, Newsom noted that 32 of the top 50 AI companies are based in California, and that the bill would focus on only the largest firms. "Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047," he stated. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom wrote. "Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it." SB 1047 sparked heated debate within the AI industry as it made its way through the legislature. OpenAI stridently opposed the measure, resulting in researchers William Saunders and Daniel Kokotajlo publicly resigning in protest, while xAI CEO Elon Musk came out in favor of the bill. Many in Hollywood also expressed support for SB 1047, including J.J. Abrams, Jane Fonda, Pedro Pascal, Shonda Rhimes, and Mark Hamill. "We cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted. 
Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable," Newsom wrote. However, "ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself." Monday's announcement comes less than a month after the governor signed AB 2602 and AB 1836, both backed by the SAG-AFTRA union. AB 2602 requires that performers grant informed consent before their "digital replicas" are used, while AB 1836 strengthened protections against unauthorized use of the voice and likeness of deceased performers.
[26]
California governor vetoes bill to regulate artificial intelligence
California governor Gavin Newsom has vetoed a controversial attempt to regulate artificial intelligence, citing concerns that the bill could stifle innovation after intense pressure from tech firms. Newsom, a Democrat, waited until the eleventh hour to announce his decision after the bill passed through the state legislature at the end of August. The bill would have forced those developing the most powerful AI models to adhere to strict rules, including implementing a kill switch, to prevent catastrophic harm. Leading AI companies, including Google, OpenAI and Meta, all opposed the bill and lobbied heavily against it, complaining that premature legislation could stifle the development of AI and threaten California's leading role in the development of the technology. Amazon-backed Anthropic and Elon Musk, who owns start-up xAI, supported the legislation. In a letter to the state senate, Newsom defended his veto of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, known as SB 1047, on Sunday. He said the framework could "curtail the very innovation that fuels advancement in favour of the public good", noting that California was home to 32 of the world's leading AI companies. In particular, he said targeting models by size -- the bill would require safety testing and other guardrails for models that cost more than $100mn to develop -- was the wrong metric. It could give "the public a false sense of security about controlling this fast-moving technology" when "smaller, specialised models may emerge as equally or even more dangerous". Senator Scott Wiener, who put forward the bill, said it "requires only the largest AI developers to do what each and every one of them has repeatedly committed to do: perform basic safety testing on massively powerful AI models". 
But Newsom insisted that: "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data." "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." In the past 30 days, Newsom has signed bills covering the deployment and regulation of generative AI technology -- the type that creates text or imagery -- including on deepfakes, AI watermarking and misinformation. Experts on the technology have also partnered with the state to help to develop "workable guardrails" for deploying generative AI backed up by empirical and scientific evidence, he said. The Artificial Intelligence Policy Institute, a think-tank, called the governor's veto "misguided, reckless and out of step with the people he's tasked with governing." "Newsom had the opportunity to serve as a leader on regulation of democratic governance of AI development -- a path he has taken on other industries -- but has chosen to take our hands off the wheel, potentially allowing AI development to veer uncontrollably off the road," said executive director Daniel Colson. "Newsom and lawmakers must return to Sacramento next session to come to an agreement on a set of measures that will install sensible guardrails on AI development."
[27]
Gavin Newsom Vetoes California's Contentious AI Safety Bill
California Governor Gavin Newsom has vetoed a contentious artificial intelligence safety bill that would have required companies to make sure their AI models don't cause major harm. The bill, called SB 1047, was poised to be one of the most consequential pieces of AI regulation in the US, given California's central position in the tech ecosystem and the lack of federal legislation for artificial intelligence. In recent weeks, the bill had divided the AI industry and drawn significant criticism from some tech leaders and prominent Democrats.
[28]
California governor vetoes contentious AI safety bill
California Gov. Gavin Newsom on Sunday vetoed a hotly contested artificial intelligence safety bill, after the tech industry raised objections, saying it could drive AI companies from the state and hinder innovation. Newsom said he had asked leading experts on Generative AI to help California "develop workable guardrails" that focus "on developing an empirical, science-based trajectory analysis." He also ordered state agencies to expand their assessment of the risks from potential catastrophic events tied to AI use.
[29]
California governor vetoes the US' first AI safety bill
The bill, which would have been the first AI safety law in the US, would have required tech companies to test models and provided whistleblower protections to workers. California Governor Gavin Newsom vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence (AI) models, which several technology companies had opposed. The decision on Sunday is a major blow to efforts attempting to rein in a homegrown industry that is rapidly evolving with little oversight.

The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said. Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction, but that the proposal "can have a chilling effect on the industry". The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said.

Newsom announced on Sunday that the state would partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal. The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state's electric grid or help build chemical weapons. Experts say those scenarios could be possible in the future as the industry continues to advance rapidly. It also would have provided whistleblower protections to workers.
The bill's author, Democratic state Senator Scott Wiener, called the veto "a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and the welfare of the public and the future of the planet". "The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public," Wiener said in a statement on Sunday. Wiener said the debate around the bill has dramatically advanced the issue of AI safety, and that he would continue pressing that point.

The legislation is among a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take action this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance.

Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some level of transparency and accountability around large-scale AI models, as developers and experts say they still don't have a full understanding of how AI models behave and why. The bill targeted systems that require a high level of computing power and more than $100 million (€90 million) to build. No current AI models have hit that threshold, but some experts said that could change within the next year. "This is because of the massive investment scale-up within the industry," said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company's disregard for AI risks. "This is a crazy amount of power to have any private company control unaccountably, and it's also incredibly risky".
The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn't as comprehensive as regulations in Europe, but supporters said it would have been a good first step to set guardrails around a rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias. A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those commitments, the measure's supporters said.

But critics, including former US House Speaker Nancy Pelosi, argued that the bill would "kill California tech" and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said. Newsom's decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations. Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline last month. Those bills would have required AI developers to label AI-generated content and banned discrimination from AI tools used to make employment decisions.

The governor said earlier this summer he wanted to protect California's status as a global leader in AI, noting that 32 of the world's top 50 AI companies are located in the state. He has promoted California as an early adopter, as the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs.
The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices. But even with Newsom's veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a non-profit that works with lawmakers on technology and privacy proposals. "They are going to potentially either copy it or do something similar next legislative session," Rice said. "So it's not going away".
[30]
California Governor Blocks AI Safety Bill
California remains 'the wild wild west' for AI technologies as its AI sector stays unregulated. California Governor Gavin Newsom vetoed a contentious AI safety bill, firing up a hotly contested debate in the tech industry. Many tech leaders said that the bill would stifle innovation and push AI companies out of the state. Newsom justified his decision by saying that the bill applied strict regulations even to the most mundane AI applications. He further noted that the bill makes no distinction between high-risk and simpler AI applications.
[31]
California Gov. Newsom Vetoes Landmark AI Safety Bill
SACRAMENTO, Calif. (AP) -- California Gov. Gavin Newsom vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models Sunday. The decision is a major blow to efforts attempting to rein in the homegrown industry that is rapidly evolving with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety regulations across the country, supporters said. Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must lead in regulating AI in the face of federal inaction but that the proposal "can have a chilling effect on the industry." The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom said in a statement. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal. The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state's electric grid or help build chemical weapons. 
Experts say those scenarios could be possible in the future as the industry continues to rapidly advance. It also would have provided whistleblower protections to workers. The legislation is among a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must take action this year, citing hard lessons they learned from failing to rein in social media companies when they might have had a chance. Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some levels of transparency and accountability around large-scale AI models, as developers and experts say they still don't have a full understanding of how AI models behave and why. The bill targeted systems that require more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year. "This is because of the massive investment scale-up within the industry," said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company's disregard for AI risks. "This is a crazy amount of power to have any private company control unaccountably, and it's also incredibly risky." The United States is already behind Europe in regulating AI to limit risks. The California proposal wasn't as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said. A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those commitments, said the measure's supporters. But critics, including former U.S.
House Speaker Nancy Pelosi, argued that the bill would "kill California tech" and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open-source software, they said. Newsom's decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations. Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline last month. The bills would have required AI developers to label AI-generated content and ban discrimination from AI tools used to make employment decisions. The governor said earlier this summer he wanted to protect California's status as a global leader in AI, noting that 32 of the world's top 50 AI companies are located in the state. He has promoted California as an early adopter as the state could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices. Earlier this month, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use. But even with Newsom's veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals. "They are going to potentially either copy it or do something similar next legislative session," Rice said. "So it's not going away." 
-- The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP's text archives.
[32]
California governor vetoes contentious AI safety bill
Sept 29 (Reuters) - California Governor Gavin Newsom on Sunday vetoed a hotly contested artificial intelligence safety bill, after the tech industry raised objections, saying it could drive AI companies from the state and hinder innovation. Newsom said he had asked leading experts on Generative AI to help California "develop workable guardrails" that focus "on developing an empirical, science-based trajectory analysis." He also ordered state agencies to expand their assessment of the risks from potential catastrophic events tied to AI use. Reporting by David Shepardson; Editing by Leslie Adler
[33]
California Governor Vetoes Bill to Regulate AI
California Gov. Gavin Newsom vetoed legislation Sunday that would have imposed new regulations on artificial intelligence and had exposed deep divisions in the tech industry. SB 1047, authored by state Sen. Scott Wiener, would have created a new oversight body to issue regulations and approve new AI models before they are deployed. The bill also would have imposed penalties on AI developers whose models caused severe harm.
[34]
California governor Gavin Newsom vetoes controversial AI safety bill
Newsom rejects bill targeting firms developing generative AI after tech industry says it could drive firms from state California governor Gavin Newsom on Sunday vetoed a hotly contested artificial intelligence safety bill after the tech industry raised objections, saying it could drive AI companies from the state and hinder innovation. The bill, officially known as SB 1047, targets companies developing generative AI - which can respond to prompts with fully formed text, images or audio, as well as run repetitive tasks with minimal intervention. Newsom said he had asked leading experts on generative AI to help California "develop workable guardrails" that focus "on developing an empirical, science-based trajectory analysis". He also ordered state agencies to expand their assessment of the risks from potential catastrophic events tied to AI use.
[36]
Newsom vetoes bill for stricter AI regulations
California Gov. Gavin Newsom (D) on Sunday vetoed a landmark artificial intelligence (AI) bill that would have created new safety rules for the emerging tech, handing much of Silicon Valley a major win. Newsom's veto caps off weeks of skepticism over how he would act on the controversial legislation, known as California Senate Bill 1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

In a veto message published Sunday, the governor said the bill's focus on the "most expensive and large-scale models" "could give the public a false sense of security about controlling" AI. "Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good," he wrote. The legislation was sent to his desk late last month after it passed the state Legislature, and his veto came just a day before the Monday deadline.

The bill would have required powerful AI models to undergo safety testing before being released to the public. This might include, for example, testing whether the models can be manipulated to hack into the state's electric grid. It also intended to hold developers liable for severe harm caused by their models, but would have applied only to AI systems that cost more than $100 million to train. No current models have hit that number yet.

"While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom wrote. "Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it." Newsom has often indicated skepticism about reining in AI technology, which stands to bring large amounts of money to the Golden State.
California is home to 32 of the world's "50 leading AI companies," according to Newsom's office, and has become a major hub for AI-related legislation as a result. The governor stressed that his veto does not mean he disagrees with the author's argument that there is an urgent need to act on the advancing tech to prevent major catastrophe. "California will not abandon its responsibility," he said, adding, "Proactive guardrails should be implemented and severe consequences for bad actors must be clear and enforceable." Any solution, Newsom argued, should be informed by an "empirical trajectory analysis" of AI systems and their capabilities.

The bill received mixed opinions from AI startups, major technology firms, researchers and even some lawmakers, who were divided over whether it would throttle the development of the technology or establish much-needed guardrails. Those on both sides of the argument have piled pressure on Newsom over the past few months. Several of the country's leading tech firms, including OpenAI, Google and Meta - the parent company of Facebook and Instagram - expressed concerns the legislation would have targeted developers rather than the abusers of AI and argued safety regulations for the technology should be decided on a federal level. Meanwhile, Anthropic, a leading AI startup, said last month the benefits of the bill likely would have outweighed the risks.

Last week, more than 120 Hollywood figures wrote an open letter pushing him to sign the legislation, writing that the "most powerful AI models may soon pose severe risks." Earlier this month, over 100 current or former employees of leading AI companies - including OpenAI, Anthropic, Google's DeepMind and Meta - also wrote to Newsom, warning of these same risks. Congressional lawmakers joined the debate too, with former Speaker Nancy Pelosi (D-Calif.) and some other California politicians coming out against the bill.
Pelosi last month said "many" in Congress viewed the legislation as "well-intentioned but ill informed." Newsom pushed back on the argument that California should not have a role in a bill with nationwide implications. "A California-only approach may well be warranted - especially absent federal action by Congress - but it must be based on empirical evidence and science," he said, pointing to the federal and state-based risk analyses currently being done on AI. The governor signed a series of other bills earlier this month aimed at preventing abuses of AI and placing guardrails on the emerging tech. Three of these bills are aimed at preventing the misuse of sexually explicit deepfakes, which can generate images, audio, and video and digitally alter likenesses and voices. He signed two other bills aimed at protecting actors and performers from having their names, images and likenesses copied by artificial intelligence without authorization.
[37]
California governor Gavin Newsom vetoes landmark AI bill
The governor of California Gavin Newsom has blocked a landmark artificial intelligence (AI) safety bill, which had faced strong opposition from major technology companies. The legislation would have imposed some of the first regulations on AI in the US. Mr Newsom said the bill could stifle innovation and prompt AI developers to move out of the state. Senator Scott Wiener, who authored the bill, said the veto allows companies to continue developing an "extremely powerful technology" without any government oversight.
[38]
California Gov. Gavin Newsom Vetoes Bill To Create First-In-Nation AI Safety Measures
The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the homegrown industry by establishing rigid requirements, Newsom said. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom said in a statement. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology."
[39]
California's Gov. Newsom Vetoes Controversial AI Safety Bill, WSJ Reports
(Reuters) - California Governor Gavin Newsom has vetoed an artificial intelligence safety bill as it applies only to the biggest and most expensive AI models and leaves others unregulated, the Wall Street Journal reported on Sunday, citing a person with knowledge of his thinking. The bill, officially known as SB 1047, targets companies developing generative AI - which can respond to prompts with fully formed text, images or audio, as well as run repetitive tasks with minimal intervention. (Reporting by Urvi Dugar; Editing by Marguerita Choy)
[40]
Major AI bill vetoed by California governor, despite calls for regulation - 9to5Mac
California is home to many of the country's largest AI innovators, including Apple. Today, they're all breathing a sigh of relief. Governor Gavin Newsom has vetoed a major AI regulatory bill that came to his desk, while nonetheless highlighting the 'real threats' posed by AI.

SB 1047 is the AI bill that California's senate approved and sent to the governor's desk for a signature. Newsom has decided to exercise his veto right, however, by leaving the bill unsigned. Here is his reasoning, straight from the veto message itself:

Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.

The bill sparked a lot of debate over the role of government in the latest tech innovations. AI in particular has been highly controversial at all levels of society, and Newsom has recently signed other AI bills into law. However, he stresses his belief that SB 1047 didn't take the right approach:

By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.

Many companies have been building and shipping AI tech for years. Apple, meanwhile, is just getting started with its Apple Intelligence features coming in October.
Newsom signing SB 1047 could have brought further delays to Apple's implementation of AI in its various devices, as well as impacting partners (and competitors) like OpenAI, Google, and more. There will undoubtedly be bills that succeed this one and attempt to accomplish much of the same goal, but in a way Newsom will sign off on. For his part, the governor outlines much of what he's looking for in his veto statement. Unfortunately, some of what he has to say appears vague and unhelpful. For example, he remarks, "Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself." Have you followed the AI regulation debate? What do you think of SB 1047 getting vetoed? Let us know in the comments.
[42]
Gov. Gavin Newsom Vetoes Sweeping AI Safety Bill
California Governor Gavin Newsom has vetoed SB 1047, a sweeping artificial intelligence safety bill, arguing that it is not the best way to deal with the looming threats and opportunities presented by AI. In a statement explaining the decision to veto the bill, Newsom noted that 32 of the top 50 AI companies are based in California, and that the bill would focus only on the largest companies, which would potentially undercut any safety benefits. "While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom wrote in his letter explaining the veto. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology." He also noted that he had signed many bills that focus or touch on risks associated with AI (earlier this month, for example, he signed bills backed by SAG-AFTRA regulating AI performance replicas). "This year, the legislature sent me several thoughtful proposals to regulate AI companies in response to current, rapidly evolving risks -- including threats to our democratic process, the spread of misinformation and deepfakes, risks to online privacy, threats to critical infrastructure and disruptions in the workforce," Newsom said Sunday. "These bills, and actions by my administration, are guided by principles of accountability, fairness and transparency of AI systems and deployment of AI technology in California." SB 1047 was a hot-button bill, opposed by Silicon Valley but backed by many in Hollywood, including J.J. Abrams, Jane Fonda, Pedro Pascal, Shonda Rhimes and Mark Hamill.
However, it also garnered opposition from connected power players like former House Speaker Nancy Pelosi, who argued that federal legislation, rather than a state bill, should fill that safety gap.
[43]
California governor vetoes major AI safety bill
In late August, SB 1047 arrived on Gov. Newsom's desk, poised to become the strictest legal framework around AI in the US, with a deadline to either sign or veto it by September 30th. It would have applied to covered AI companies doing business in California with a model that costs over $100 million to train or over $10 million to fine-tune, adding requirements that developers implement safeguards like a "kill switch" and lay out protocols for testing to reduce the chance of disastrous events like a cyberattack or a pandemic. The text also established protections for whistleblowers to report violations and would have enabled the state attorney general to sue for damages caused by safety incidents.
[44]
California Governor Blocks AI Safety Bill, Calls for More Targeted Approach | PYMNTS.com
California Gov. Gavin Newsom killed the "kill switch" on artificial intelligence (AI) Sunday (Sept. 29), vetoing a bill that would have introduced safety testing requirements for AI companies developing models that cost more than $100 million or those using substantial computing power. The bill also would have mandated that AI developers in California establish fail-safe mechanisms -- or a "kill switch" -- to shut down their models in case of emergencies or unforeseen consequences. "By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology," Newsom wrote in the correspondence to legislators that accompanies the decision. "Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 -- at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good." Newsom argued that while he agrees with the need to protect the public from AI risks, the bill's approach is too broad and inflexible. Newsom believes effective AI regulation should be based on empirical evidence, consider the specific risks of different AI applications, and be adaptable to rapidly evolving technology. He emphasized California's commitment to addressing AI risks through other initiatives, including executive orders and recently signed legislation, and expressed his willingness to work with various stakeholders to develop more targeted and scientifically informed AI regulations in the future. "California is home to 32 of the world's 50 leading AI companies, pioneers in one of the most significant technological advances in modern history," Newsom wrote. "We lead in this space because of our research and education institutions, our diverse and motivated workforce, and our free-spirited cultivation of intellectual freedom.
As stewards and innovators of the future, I take seriously the responsibility to regulate this industry." The move is a win for those AI companies, though over 100 employees from AI companies urged Newsom to sign the bill, citing concerns about potential risks posed by AI models. Signatories include employees from OpenAI, Google DeepMind, Anthropic, Meta and xAI. Supporters include Turing Award winner Geoffrey Hinton and University of Texas professor Scott Aaronson. "We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure," a Sept. 9 statement from the employees said. "It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks."
[45]
Gavin Newsom Vetoes Controversial AI Regulation Bill-Backed By Elon Musk: Framework Must 'Keep Pace With The Technology' - Alphabet (NASDAQ:GOOG), Alphabet (NASDAQ:GOOGL)
Following significant lobbying from major tech companies, California Governor Gavin Newsom (D-Calif.) has vetoed a bill designed to regulate AI, citing concerns that it could stifle innovation. What Happened: Newsom made his decision at the eleventh hour, after the bill had been passed by the state legislature in late August. The proposed legislation, known as SB 1047, would have imposed stringent rules on the development of powerful AI models, including the implementation of a kill switch to prevent potential harm. On Sunday, in a letter to the state senate, Newsom defended his veto of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. He expressed concern that the proposed framework could limit innovation beneficial to the public good, noting that California is home to 32 of the world's leading AI companies. "Let me be clear - I agree with the author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public," he said in the letter, adding, "I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities." "Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself," Newsom stated. Senator Scott Wiener (D-Calif.), the bill's author, called the veto a setback for those who support oversight of large corporations making key decisions that impact public safety and the planet's future. Why It Matters: Major AI companies, including Alphabet Inc.'s GOOG GOOGL Google, ChatGPT-parent OpenAI, and Meta Platforms Inc. META, were against the bill.
They argued that premature regulation could hinder AI development and jeopardize California's leading role in the technology's advancement. However, Tesla and SpaceX CEO Elon Musk publicly endorsed the SB 1047 AI safety bill, stating that it was a tough call but necessary for California. AI startup Anthropic also supported the bill. Interestingly, earlier in September, Newsom had signed three bills aimed at curbing the use of AI in creating misleading images or videos in political advertisements, indicating his willingness to regulate AI in certain contexts.
[46]
California Forces a Rethink of A.I. Regulation
How should A.I. be regulated? The most sweeping effort yet to regulate artificial intelligence, a California bill that could have informed laws around the world, is going back to the drawing board. Gov. Gavin Newsom vetoed the legislation, known as S.B. 1047 -- under strong pressure from Silicon Valley giants. Now, governments must again try to figure out the best way to rein in the fast-growing technology's excesses, while letting innovation flourish. "I do not believe this is the best approach to protecting the public," Newsom said of his veto. It was a rebuke that underscored the divide over S.B. 1047, which mandated safety testing of A.I. models that required a certain level of computing power and cost at least $100 million to train. Proponents, including Geoffrey Hinton, an A.I. pioneer, and Elon Musk, said that S.B. 1047 provided necessary guardrails, and they urged California policymakers to reject intense pressure from software giants against the bill. Hollywood actors and writers also supported the legislation. Opponents, including prominent venture capitalists and tech executives, called S.B. 1047 a blunt instrument that threatened to choke off innovation. Smaller tech companies also pushed back, worried that A.I. giants might not make their models publicly available if the legislation passed. (Representative Nancy Pelosi also urged state legislators to reject the bill and applauded Newsom's decision.) The Wall Street Journal notes that there are nuances in how A.I. models work: Some smaller models handle decision-making for critical situations such as power grids, while larger models are sometimes deployed for relatively safe matters including customer service. Regulating A.I. has proved tricky to do. While governments around the world (and A.I. leaders including Sam Altman of OpenAI and Demis Hassabis at Google) broadly agree that guardrails are needed, none has passed anything as sweeping as S.B. 1047. 
The broadest law so far is the European Union's A.I. Act, which focuses on the riskiest uses of the technology but also includes transparency requirements for the largest models. California would be among the most influential potential regulators of A.I. The bill would have affected any company doing business in the state; required a kill switch for rogue A.I. systems; and given the state the right to sue companies for harm caused by their technologies. (Newsom has already approved some A.I. legislation, including crackdowns on deepfakes.) Newsom said he would convene a board of experts to help create a more acceptable set of limits. They include Fei-Fei Li, the Stanford computer scientist whom he called the "godmother of A.I.," who has founded an A.I. start-up and argued last month against S.B. 1047.
[47]
Here is what's illegal under California's 18 (and counting) new AI laws
In September, California Governor Gavin Newsom considered 38 AI-related bills, including the highly contentious SB 1047, which the state's legislature sent to his desk for final approval. He vetoed SB 1047 on Sunday, but signed more than a dozen other AI bills into law over the course of the month. These bills try to address the most pressing issues in artificial intelligence: everything from the existential risk posed by futuristic AI systems and deepfake nudes made with AI image generators to Hollywood studios creating AI clones of dead performers. "Home to the majority of the world's leading AI companies, California is working to harness these transformative technologies to help address pressing challenges while studying the risks they present," said Governor Newsom's office in a press release. So far, Governor Newsom has signed 18 AI bills into law, some of which are America's most far-reaching laws on generative AI yet. Here's what they do.

AI risk
On Sunday, Governor Newsom signed SB 896 into law, which requires California's Office of Emergency Services to perform risk analyses on potential threats posed by generative AI. CalOES will work with frontier model companies, such as OpenAI and Anthropic, to analyze AI's potential threats to critical state infrastructure, as well as threats that could lead to mass casualty events.

Training data
Another law Newsom signed this month requires generative AI providers to reveal the data used to train their AI systems in documentation published on their website. AB 2013 goes into effect in 2026, and requires AI providers to publish: the sources of their datasets, a description of how the data is used, the number of data points in the set, whether copyrighted or licensed data is included, and the time period the data was collected, among other standards.

Privacy and AI systems
Newsom also signed AB 1008 on Sunday, which clarifies that California's existing privacy laws are extended to generative AI systems as well.
That means that if an AI system, like ChatGPT, exposes someone's personal information (name, address, biometric data), California's existing privacy laws will limit how businesses can use and profit off of that data.

Education
Newsom signed AB 2876 this month, which requires California's State Board of Education to consider "AI literacy" in its math, science, and history curriculum frameworks and instructional materials. This means California's schools may begin teaching students the basics of how artificial intelligence works, as well as the limitations, impacts, and ethical considerations of using the technology. Another new law, SB 1288, requires California superintendents to create working groups to explore how AI is being used in public school education.

Defining AI
This month, Newsom signed a bill that establishes a uniform definition for artificial intelligence in California law. AB 2885 states that artificial intelligence is defined as "an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments."

Healthcare
Another bill signed in September is AB 3030, which requires healthcare providers to disclose when they use generative AI to communicate with a patient, specifically when those messages contain a patient's clinical information. Meanwhile, Newsom recently signed SB 1120, which puts limitations on how health care service providers and health insurers can automate their services. The law ensures licensed physicians supervise the use of AI tools in these settings.

AI robocalls
Last Friday, Governor Newsom signed a bill into law requiring robocalls to disclose whether they use AI-generated voices. AB 2905 aims to prevent another instance of the deepfake robocall resembling Joe Biden's voice that confused many New Hampshire voters earlier this year.
Deepfake pornography
On Sunday, Newsom signed AB 1831 into law, which expands the scope of existing child pornography laws to include matter that is generated by AI systems. Newsom signed two laws that address the creation and spread of deepfake nudes last week. SB 926 criminalizes the act, making it illegal to blackmail someone with AI-generated nude images that resemble them. SB 981, which also became law on Thursday, requires social media platforms to establish channels for users to report deepfake nudes that resemble them. The content must then be temporarily blocked while the platform investigates it, and permanently removed if confirmed.

Watermarks
Also on Thursday, Newsom signed a bill into law to help the public identify AI-generated content. SB 942 requires widely used generative AI systems to disclose in their content's provenance data that the content is AI-generated. For example, all images created by OpenAI's Dall-E now need a little tag in their metadata saying they're AI-generated. Many AI companies already do this, and there are several free tools out there that can help people read this provenance data and detect AI-generated content.

Election deepfakes
Earlier this week, California's governor signed three laws cracking down on AI deepfakes that could influence elections. One of California's new laws, AB 2655, requires large online platforms, like Facebook and X, to remove or label AI deepfakes related to elections, as well as create channels to report such content. Candidates and elected officials can seek injunctive relief if a large online platform is not complying with the act. Another law, AB 2839, takes aim at social media users who post, or repost, AI deepfakes that could deceive voters about upcoming elections. The law went into effect immediately on Tuesday, and Newsom suggested Elon Musk may be at risk of violating it. AI-generated political advertisements now require outright disclosures under California's new law, AB 2355.
That means moving forward, Trump may not be able to get away with posting AI deepfakes of Taylor Swift endorsing him on Truth Social (she endorsed Kamala Harris). The FCC has proposed a similar disclosure requirement at a national level and has already made robocalls using AI-generated voices illegal.

Actors and AI
Two laws that Newsom signed earlier this month -- which SAG-AFTRA, the nation's largest film and broadcast actors union, was pushing for -- create new standards for California's media industry. AB 2602 requires studios to obtain permission from an actor before creating an AI-generated replica of their voice or likeness. Meanwhile, AB 1836 prohibits studios from creating digital replicas of deceased performers without consent from their estates (e.g., legally cleared replicas were used in the recent "Alien" and "Star Wars" movies, as well as in other films).

SB 1047 gets vetoed
Governor Newsom still has a few AI-related bills to decide on before the end of September. However, SB 1047 is not one of them - the bill was vetoed on Sunday. During a chat with Salesforce CEO Marc Benioff at the 2024 Dreamforce conference earlier this month, Newsom may have tipped his hand about SB 1047, and how he's thinking about regulating the AI industry more broadly. "There's one bill that is sort of outsized in terms of public discourse and consciousness; it's this SB 1047," said Newsom onstage this month. "What are the demonstrable risks in AI and what are the hypothetical risks? I can't solve for everything. What can we solve for? And so that's the approach we're taking across the spectrum on this." Check back on this article for updates on what AI laws California's governor signs, and what he doesn't.
California Governor Gavin Newsom vetoes an AI safety bill, citing concerns about its approach. The decision ignites discussions on the future of AI regulation and its impact on innovation and public safety.
In a move that has sparked intense debate in the tech world, California Governor Gavin Newsom has vetoed Senate Bill 1047, proposed legislation aimed at regulating artificial intelligence (AI) safety in the state [1]. The bill, authored by State Senator Scott Wiener, sought to establish safety and transparency requirements for AI companies operating in California [2].
Newsom's decision to veto the bill was based on concerns that it could potentially stifle innovation and create regulatory confusion. In his veto message, the governor emphasized the need for a more comprehensive and thoughtful approach to AI regulation [3]. He argued that the rapid pace of AI development necessitates a flexible regulatory framework that can adapt to emerging technologies and challenges.
The veto has significant implications for the AI industry, particularly in Silicon Valley. Proponents of the bill argue that it would have provided necessary safeguards for consumers and established California as a leader in AI regulation [4]. Critics, however, align with Newsom's view that overly restrictive regulations could hamper innovation and drive AI companies out of the state.
Despite the veto, the debate surrounding AI regulation is far from over. Senator Wiener has vowed to continue pushing for AI safety measures, stating that the issue is too important to ignore [5]. The incident has highlighted the complex balance between fostering technological advancement and ensuring public safety in the rapidly evolving field of AI.
The California bill's fate also raises questions about the role of state-level regulations in the broader context of national and international AI governance. As other states and countries grapple with similar issues, the need for coordinated efforts to address AI safety becomes increasingly apparent [1].
Governor Newsom has indicated his commitment to addressing AI safety through alternative means. He announced plans to work with legislators, industry leaders, and experts to develop a more comprehensive approach to AI regulation that balances innovation with public interest [4]. This collaborative effort aims to position California at the forefront of responsible AI development while maintaining its status as a tech innovation hub.
© 2024 TheOutpost.AI All rights reserved