Curated by THEOUTPOST
On Sat, 24 Aug, 8:01 AM UTC
4 Sources
[1]
California AI Bill Divides Silicon Valley
A bill aimed at regulating powerful artificial intelligence models is under consideration in California's legislature, despite outcry that it could kill the technology it seeks to control. "With Congress gridlocked over AI regulation... California must act to get ahead of the foreseeable risks presented by rapidly advancing AI while also fostering innovation," said Democratic state senator Scott Wiener of San Francisco, the bill's sponsor. But critics, including Democratic members of US Congress, argue that threats of punitive measures against developers in a nascent field can throttle innovation. "The view of many of us in Congress is that SB 1047 is well-intentioned but ill-informed," influential Democratic congresswoman Nancy Pelosi of California said in a release, noting that top party members have shared their concerns with Wiener. "While we want California to lead in AI in a way that protects consumers, data, intellectual property and more, SB 1047 is more harmful than helpful in that pursuit," Pelosi said. Pelosi pointed out that Stanford University computer science professor Fei-Fei Li, whom she referred to as the "Godmother of AI" for her status in the field, is among those opposing the bill. The bill, called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, will not solve what it is meant to fix and will "deeply harm AI academia, little tech and the open-source community," Li wrote earlier this month on X. Little tech refers to startups and small companies, as well as researchers and entrepreneurs. Wiener said the legislation is intended to ensure safe development of large-scale AI models by establishing safety standards for developers of systems costing more than $100 million to train. The bill requires developers of large "frontier" AI models to take precautions such as pre-deployment testing, simulating hacker attacks, installing cybersecurity safeguards, as well as providing protection for whistleblowers.
Recent changes to the bill include replacing criminal penalties for violations with civil penalties such as fines. Wiener argues that AI safety and innovation are not mutually exclusive, and that tweaks to the bill have addressed some concerns of critics. OpenAI, the creator of ChatGPT, has also come out against the bill, saying it would prefer national rules, fearing a chaotic patchwork of AI regulations across the US states. At least 40 states have introduced bills this year to regulate AI, and a half dozen have adopted resolutions or enacted legislation aimed at the technology, according to the National Conference of State Legislatures. OpenAI said the California bill could also chase innovators out of the state, home to Silicon Valley. But Anthropic, another generative AI player that would be potentially affected by the measure, has said that after some welcome modifications, the bill has more benefits than flaws. The bill also has high-profile backers from the AI community. "Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously," computer scientist Geoffrey Hinton, the "Godfather of AI," said in a Fortune op-ed piece cited by Wiener. "SB 1047 takes a very sensible approach to balance those concerns." AI regulation with "real teeth" is critical, and California is a natural place to start since it has been a launch pad for the technology, according to Hinton. Meanwhile, professors and students at the California Institute of Technology are urging people to sign a letter against the bill. "We believe that this proposed legislation poses a significant threat to our ability to advance research by imposing burdensome and unrealistic regulations on AI development," Caltech professor Anima Anandkumar said on X.
[2]
California artificial intelligence bill divides Silicon Valley
A bill aimed at regulating powerful artificial intelligence models is under consideration in California's legislature, despite outcry that it could kill the technology it seeks to control. "With Congress gridlocked over AI regulation... California must act to get ahead of the foreseeable risks presented by rapidly advancing AI while also fostering innovation," said Democratic state senator Scott Wiener of San Francisco, the bill's sponsor. But critics, including Democratic members of US Congress, argue that threats of punitive measures against developers in a nascent field can throttle innovation. "The view of many of us in Congress is that SB 1047 is well-intentioned but ill-informed," influential Democratic congresswoman Nancy Pelosi of California said in a release, noting that top party members have shared their concerns with Wiener. "While we want California to lead in AI in a way that protects consumers, data, intellectual property and more, SB 1047 is more harmful than helpful in that pursuit," Pelosi said. Pelosi pointed out that Stanford University computer science professor Fei-Fei Li, whom she referred to as the "Godmother of AI" for her status in the field, is among those opposing the bill. The bill, called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, will not solve what it is meant to fix and will "deeply harm AI academia, little tech and the open-source community," Li wrote earlier this month on X. Little tech refers to startups and small companies, as well as researchers and entrepreneurs.
Wiener said the legislation is intended to ensure safe development of large-scale AI models by establishing safety standards for developers of systems costing more than $100 million to train. The bill requires developers of large "frontier" AI models to take precautions such as pre-deployment testing, simulating hacker attacks, installing cybersecurity safeguards, as well as providing protection for whistleblowers. Recent changes to the bill include replacing criminal penalties for violations with civil penalties such as fines. Wiener argues that AI safety and innovation are not mutually exclusive, and that tweaks to the bill have addressed some concerns of critics. OpenAI, the creator of ChatGPT, has also come out against the bill, saying it would prefer national rules, fearing a chaotic patchwork of AI regulations across the US states. At least 40 states have introduced bills this year to regulate AI, and a half dozen have adopted resolutions or enacted legislation aimed at the technology, according to the National Conference of State Legislatures. OpenAI said the California bill could also chase innovators out of the state, home to Silicon Valley. But Anthropic, another generative AI player that would be potentially affected by the measure, has said that after some welcome modifications, the bill has more benefits than flaws. The bill also has high-profile backers from the AI community. "Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously," computer scientist Geoffrey Hinton, the "Godfather of AI," said in a Fortune op-ed piece cited by Wiener. "SB 1047 takes a very sensible approach to balance those concerns." AI regulation with "real teeth" is critical, and California is a natural place to start since it has been a launch pad for the technology, according to Hinton. Meanwhile, professors and students at the California Institute of Technology are urging people to sign a letter against the bill.
"We believe that this proposed legislation poses a significant threat to our ability to advance research by imposing burdensome and unrealistic regulations on AI development," Caltech professor Anima Anandkumar said on X.
[3]
California AI bill divides Silicon Valley
San Francisco (AFP) - A bill aimed at regulating powerful artificial intelligence models is under consideration in California's legislature, despite outcry that it could kill the technology it seeks to control. "With Congress gridlocked over AI regulation... California must act to get ahead of the foreseeable risks presented by rapidly advancing AI while also fostering innovation," said Democratic state senator Scott Wiener of San Francisco, the bill's sponsor. But critics, including Democratic members of US Congress, argue that threats of punitive measures against developers in a nascent field can throttle innovation. "The view of many of us in Congress is that SB 1047 is well-intentioned but ill-informed," influential Democratic congresswoman Nancy Pelosi of California said in a release, noting that top party members have shared their concerns with Wiener. "While we want California to lead in AI in a way that protects consumers, data, intellectual property and more, SB 1047 is more harmful than helpful in that pursuit," Pelosi said. Pelosi pointed out that Stanford University computer science professor Fei-Fei Li, whom she referred to as the "Godmother of AI" for her status in the field, is among those opposing the bill.

Harm or help?

The bill, called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, will not solve what it is meant to fix and will "deeply harm AI academia, little tech and the open-source community," Li wrote earlier this month on X. Little tech refers to startups and small companies, as well as researchers and entrepreneurs. Wiener said the legislation is intended to ensure safe development of large-scale AI models by establishing safety standards for developers of systems costing more than $100 million to train.
The bill requires developers of large "frontier" AI models to take precautions such as pre-deployment testing, simulating hacker attacks, installing cybersecurity safeguards, as well as providing protection for whistleblowers. Recent changes to the bill include replacing criminal penalties for violations with civil penalties such as fines. Wiener argues that AI safety and innovation are not mutually exclusive, and that tweaks to the bill have addressed some concerns of critics. OpenAI, the creator of ChatGPT, has also come out against the bill, saying it would prefer national rules, fearing a chaotic patchwork of AI regulations across the US states. At least 40 states have introduced bills this year to regulate AI, and a half dozen have adopted resolutions or enacted legislation aimed at the technology, according to the National Conference of State Legislatures. OpenAI said the California bill could also chase innovators out of the state, home to Silicon Valley. But Anthropic, another generative AI player that would be potentially affected by the measure, has said that after some welcome modifications, the bill has more benefits than flaws. The bill also has high-profile backers from the AI community. "Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously," computer scientist Geoffrey Hinton, the "Godfather of AI," said in a Fortune op-ed piece cited by Wiener. "SB 1047 takes a very sensible approach to balance those concerns." AI regulation with "real teeth" is critical, and California is a natural place to start since it has been a launch pad for the technology, according to Hinton. Meanwhile, professors and students at the California Institute of Technology are urging people to sign a letter against the bill.
"We believe that this proposed legislation poses a significant threat to our ability to advance research by imposing burdensome and unrealistic regulations on AI development," Caltech professor Anima Anandkumar said on X.
[4]
Anthropic says it is closer to backing California's AI bill after lawmakers made some fixes. Here's what changed.
Anthropic's cautious approval comes just a month after the company proposed a series of amendments to SB 1047 -- which was first introduced to the state legislature by Sen. Scott Wiener in February. In a letter sent to state leaders in July, Anthropic called for a greater focus on deterring companies from building unsafe models rather than enforcing stringent laws before catastrophic incidents happen. It also suggested that companies be able to set their own standards for safety testing instead of adhering to regulations prescribed by the state. An amended version of the bill published on August 19 includes several modifications. First, it limits the scope of civil penalties for violations that don't result in harm or imminent risk, according to a post by Nathan Calvin, senior policy counsel at the Center for AI Safety Action Fund, which is a cosponsor of the bill and has been working with Anthropic since the bill was first introduced. There are also some key language changes. Where the bill originally called for companies to demonstrate "reasonable assurance" against potential harms, it now calls for them to demonstrate "reasonable care," which "helps clarify the focus of the bill on testing and risk mitigation," according to Calvin. It's also "the most common standard existing in tort liability," he wrote. The updated version also downsized a new government agency that would enforce AI regulations, once called the Frontier Model Division, to a board known as the Board of Frontier Models and placed it within the already existing Government Operations Agency. That board now has nine members instead of five. With that, however, the reporting requirements on companies have also increased. Companies must publicly release safety reports and send unredacted versions to the state's attorney general.
The updated bill removes the penalty of perjury, thereby eliminating all criminal liability for companies and imposing only civil liabilities. Companies are now required to submit "statements of compliance" to the attorney general rather than "certifications of compliance" to the Frontier Model Division. Anthropic CEO Dario Amodei said the bill now "appears to us to be halfway between our suggested version and the original bill." The benefits of developing publicly available safety and security protocols, mitigating downstream harms, and forcing companies to seriously question the risks of their technologies will "meaningfully improve" the industry's ability to combat threats, he said. Anthropic bills itself as a "safety and research company" and has won some $4 billion in backing from Amazon. In 2021, a group of former OpenAI staffers, including Amodei and his sister Daniela, started the company because they believed AI would have a dramatic impact on the world and wanted to build a company that would ensure it was aligned with human values. Wiener was "really pleased to see the kind of detailed engagement that Anthropic brought in its 'support if amended' letter," Calvin told Business Insider. "I really hope that this encourages other companies to also engage substantively and to try to approach some of this with nuance and realize that this kind of false trade-off between innovation and safety is not going to be in the long run interest of this industry." Other companies that the new legislation will impact have been more hesitant. OpenAI sent a letter this week to California state leaders opposing the bill. One of its key concerns was that the new regulations would push AI companies out of California. Meta has also argued that the bill "actively discourages the release of open-source AI."
A proposed California bill aimed at regulating artificial intelligence has created a divide among tech companies in Silicon Valley. The legislation has garnered support from some firms while facing opposition from others, highlighting the complex challenges in AI governance.
California lawmakers have introduced a bill, SB 1047, aimed at regulating artificial intelligence (AI) technologies. The proposed legislation has ignited a heated debate within Silicon Valley, with tech companies taking opposing stances on its potential impact and necessity [1].
SB 1047 would require companies developing "high-risk" AI systems to conduct thorough testing and risk assessments before deployment. The bill also mandates the disclosure of training data sources and the implementation of cybersecurity measures to protect AI systems from unauthorized access [2].
Several prominent tech companies, including Microsoft, Adobe, and Anthropic, have expressed support for the bill. Anthropic, an AI research company, believes that the legislation strikes a balance between innovation and responsible AI development. The company argues that the bill's requirements align with best practices already adopted by leading AI firms [4].
However, the bill faces opposition from other tech industry players. Critics argue that the legislation could stifle innovation and place an undue burden on smaller companies. The Internet Association, representing major tech firms like Amazon and Google, contends that the bill's provisions are overly broad and could hinder AI development in California [3].
If passed, the California AI bill could have far-reaching consequences for the tech industry. As the home of Silicon Valley, California's regulations often set precedents for other states and even countries. Supporters argue that the bill would enhance transparency and safety in AI development, while opponents fear it may drive innovation out of the state [1].
The debate surrounding SB 1047 reflects a larger global conversation about AI governance. As AI technologies continue to advance rapidly, policymakers worldwide are grappling with how to balance innovation with ethical considerations and public safety. The outcome of this California bill could influence future regulatory efforts in other jurisdictions [2].
As the bill progresses through the California legislature, lawmakers will need to navigate the competing interests of various stakeholders. The ongoing debate highlights the complexity of regulating emerging technologies and the challenges of finding common ground among diverse industry players [3].
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved