Curated by THEOUTPOST
On Thu, 29 Aug, 8:01 AM UTC
7 Sources
[1]
California advances landmark legislation to regulate large AI models
SACRAMENTO, Calif. (AP) -- Landmark California legislation to establish first-in-the-nation safety measures for the largest artificial intelligence systems cleared an important vote Wednesday that could pave the way for U.S. regulations on a technology evolving at warp speed. The proposal, which aims to reduce potential risks created by AI, would require companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state's electric grid or help build chemical weapons -- scenarios experts say could become possible amid the industry's rapid advancement. The bill is among hundreds of measures lawmakers are voting on during the Legislature's final week of session. Gov. Gavin Newsom then has until the end of September to decide whether to sign them into law, veto them or allow them to become law without his signature. The measure squeaked by in the Assembly Wednesday and requires a final Senate vote before reaching the governor's desk. Supporters said it would set some of the first much-needed safety ground rules for large-scale AI models in the United States. The bill targets systems that cost more than $100 million to train. No current AI models have hit that threshold. "It's time that Big Tech plays by some kind of a rule, not a lot, but something," Republican Assemblymember Devon Mathis said in support of the bill Wednesday. "The last thing we need is for a power grid to go out, for water systems to go out." The proposal, authored by Democratic Sen. Scott Wiener, faced fierce opposition from venture capital firms and tech companies, including OpenAI, Google and Meta, the parent company of Facebook and Instagram. They say safety regulations should be established by the federal government and that the California legislation takes aim at developers instead of targeting those who use and exploit the AI systems for harm.
A group of California House members also opposed the bill, with former House Speaker Nancy Pelosi calling it "well-intentioned but ill-informed." Chamber of Progress, a left-leaning Silicon Valley-funded industry group, said the law is "based on science fiction fantasies of what AI could look like." "This bill has more in common with Blade Runner or The Terminator than the real world," Senior Tech Policy Director Todd O'Boyle said in a statement after the Wednesday vote. "We shouldn't hamstring California's leading economic sector over a theoretical scenario." The legislation is supported by Anthropic, an AI startup backed by Amazon and Google, after Wiener adjusted the bill earlier this month to include some of the company's suggestions. The current bill removed the penalty of perjury provision, limited the state attorney general's power to sue violators and narrowed the responsibilities of a new AI regulatory agency. Elon Musk, owner of social media platform X, also threw his support behind the proposal this week. Anthropic said in a letter to Newsom that the bill is crucial to preventing catastrophic misuse of powerful AI systems and that "its benefits likely outweigh its costs." Wiener said his legislation took a "light touch" approach. "Innovation and safety can go hand in hand -- and California is leading the way," Wiener said in a statement after the vote. He also slammed critics earlier this week for dismissing potential catastrophic risks from powerful AI models as unrealistic: "If they really think the risks are fake, then the bill should present no issue whatsoever." Wiener's proposal is among dozens of AI bills California lawmakers proposed this year to build public trust, fight algorithmic discrimination and outlaw deepfakes involving elections or pornography.
With AI increasingly affecting the daily lives of Americans, state legislators have tried to strike a balance, reining in the technology and its potential risks without stifling the booming homegrown industry. California, home to 35 of the world's top 50 AI companies, has been an early adopter of AI technologies and could soon deploy generative AI tools to address highway congestion and road safety, among other things. Newsom, who declined to weigh in on the measure earlier this summer, had warned against AI overregulation.
[5]
California's "AI Safety" Bill Will Have Global Effects
The recently passed bill will impact most large AI firms in the world. On Wednesday, California lawmakers passed a bill that aims to prevent catastrophic damages caused by artificial intelligence software. The legislation, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, requires certain AI companies doing business in California to use safety evaluations and other measures to guard against massive loss of life or damages surpassing US $500 million. Most of the world's largest AI companies are based in the Golden State, and nearly all that aren't will still do business there. As a result, the bill will have far-reaching -- perhaps even global -- effects if Governor Gavin Newsom signs it into law in the coming weeks. The bill is currently awaiting his signature after being passed by both the state assembly and the state senate. The bill was hotly debated. It passed after nine rounds of amendments that resulted from back-and-forth between lawmakers and the AI industry. It also prompted disagreement within the AI industry, with some backing the bill, even if hesitantly, and others saying it would stifle innovation and deter smaller companies and investors from developing AI products. Open source advocates also expressed concerns that the bill would put onerous requirements on those who publish AI models for others to build on freely. Yoshua Bengio, a computer scientist and a so-called "godfather of AI," said the potential for both extremely good and extremely bad consequences calls for balance. Speaking at a press conference held Monday by California State Senator Scott Wiener, the bill's sponsor, Bengio said foreseeable risks call for action. "We should aim for a path that enables innovation but also keeps us safe in the plausible scenarios identified by scientists," Bengio, who supports the bill, said. At stake is the future of a technology with revolutionary potential.
As programmers create software that replicates aspects of human intelligence, the potential to automate and significantly speed up tasks that require advanced cognition grows. The possibilities inherent in AI mean that governments should adopt a "moonshot mentality" in supporting the tech's development, Fei-Fei Li wrote in an essay for Forbes. Li -- a computer scientist often referred to as a "godmother of AI" -- also wrote that an earlier version of the bill faltered by holding the original developer of AI software liable for misuse by a third party (the bill also holds the third party accountable). Following Li's remarks, Wiener went through multiple rounds of amendments that aimed to lessen the burdens on original programmers. The consequences of AI for business, military, and government sectors are difficult to predict, but both boosters and concerned watchdogs agree that the widespread use of the technology will be transformative. Concerns over AI include doomsday scenarios like the creation of a biological weapon, as well as the amplification of more mundane horrors like identity theft (think of hackers getting much faster at stealing and selling your personal information). Then there's the specter of human biases becoming supercharged in software programs that approve mortgages, offer job interviews, or decide whether someone charged with a crime should receive bail. Wednesday's bill looks to cap the most catastrophic outcomes from AI models that cost more than $100 million to train and operate at a level of computational power beyond what current models are capable of. It allows the California attorney general to seek a court injunction against companies offering software that doesn't meet the bill's safety requirements, and allows the office to sue if the AI leads to large numbers of deaths or to cyberattacks on infrastructure causing $500 million or more in damages.
As a state that often puts itself at the forefront of emerging policy issues, California is in a unique position to put guardrails on AI. Its laws have a history of influencing regulations throughout the United States, sometimes by serving as a proof of concept, but also by defining how companies must operate if they want to do business in the state. For example, egg farmers anywhere in the world must keep their chickens in cage-free systems if they want to sell their products to California's market of more than 39 million consumers. In the tech realm, companies must allow California residents a certain level of control of their personal data. Many firms said they'd extend those rights to all U.S. users when the privacy regulations went into effect because it's costly and complicated to offer two different levels of control to users depending on where they live. It's also not always possible to know if a user is a California resident logging in from somewhere else. Some lawmakers, including Rep. Nancy Pelosi, joined AI companies in calling for a federal solution, fearing that a state-by-state approach would create a complicated patchwork of regulation. But State Senator Wiener said the state has an imperative to act. With no regulations coming out of the U.S. Congress, it's up to California, he said, to turn voluntary commitments by AI companies into legal requirements. Wiener said in a press conference Monday that the risks presented by AI require action. "We should try to get ahead of those risks," he said, "instead of playing catch up." Some advocates for the open source community say the bill threatens to discourage programmers from openly releasing AI software, despite amendments meant to address their concerns. Ben Brooks, an incoming fellow at the Berkman Klein Center for Internet & Society, said he's concerned that the updated bill still requires original programmers to track what their models do once in the hands of other users. 
These requirements, he says, are "simply not compatible with the open release of this technology." Wiener has argued that the bill's amendments keep enforcement focused on the user of a given AI model. Geoffrey Hinton, another so-called godparent of AI, said in a statement Wednesday that the bill balances critics' concerns with the need to protect humanity from misuse. "I am still passionate about the potential for AI to save lives through improvements in science and medicine," he said, "but it's critical that we have legislation with real teeth to address the risks."
[6]
California passes controversial bill regulating AI model training
As the world debates what is right and wrong about generative AI, the California State Assembly and Senate have just passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), one of the first significant AI regulations in the United States. The bill, which was voted on Thursday (via The Verge), has been the subject of debate in Silicon Valley, as it essentially mandates that AI companies operating in California implement a series of precautions before training a "sophisticated foundation model." With the new law, developers will have to make sure that they can quickly and completely shut down an AI model if it is deemed unsafe. Language models will also need to be protected against "unsafe post-training modifications" or anything that could cause "critical harm." Senators describe the bill as "safeguards to protect society" from the misuse of AI. Professor Geoffrey Hinton, former AI lead at Google, praised the bill for considering that the risks of powerful AI systems are "very real and should be taken extremely seriously." However, companies like OpenAI and even small developers have criticized the AI safety bill, as it establishes potential criminal penalties for those who don't comply. Some argue that the bill will harm indie developers, who will need to hire lawyers and deal with bureaucracy when working with AI models. Governor Gavin Newsom now has until the end of September to decide whether to approve or veto the bill. Earlier this year, Apple and other tech companies such as Amazon, Google, Meta, and OpenAI agreed to a set of voluntary AI safety rules established by the Biden administration. The safety rules outline commitments to test the behavior of AI systems, ensuring they do not exhibit discriminatory tendencies or raise security concerns. The results of those tests must be shared with governments and academia for peer review.
At least for now, the White House AI guidelines are not enforceable in law. Apple, of course, has a keen interest in such regulations as the company has been working on Apple Intelligence features, which will be released to the public later this year with iOS 18.1 and macOS Sequoia 15.1.
[7]
Controversial AI Bill Passes Legislative Vote
The so-called kill switch bill would create more oversight of how AI is developed and deployed in California. A bill seeking to regulate artificial intelligence developed in California is moving toward becoming law. SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, cleared the legislature with a 41-9 vote on Wednesday, as reported by Politico. After a procedural vote, it will head to the desk of Gov. Gavin Newsom to either become law or be vetoed by the end of September. The bill is an attempt to curb the growing power of generative AI (AI tools that can generate information and answers by drawing on the data they've been trained on) being developed by companies including OpenAI, Google, Apple, Meta and many others. The so-called Doomer bill has been endorsed by Elon Musk but is opposed by politicians including Nancy Pelosi, tech firms and venture capitalists who say the bill will stifle innovation in the birthplace of many of these technologies. The bill itself would require safety testing of AI models costing more than $100 million to develop or that need a large amount of computing power. AI companies would also need to build in a kill switch to prevent AI from running amok and would be overseen by the state's attorney general, who would have the power to sue over noncompliance. The bill also requires AI companies to use third-party auditors and to provide protections to whistleblowers. "SB-1047 would stifle AI development in California, hurt business growth and job creation, and break from the state's long tradition of fostering open-source innovation. This bill is well intended but not ready to become law," a Meta spokesperson said in an emailed statement. Apple, Google and OpenAI didn't immediately respond to a request for comment. Since it was introduced in February by Democratic state Sen.
Scott Wiener, who represents San Francisco, the bill has drawn discussion and worry over how it could impact the state's tech industry. A coalition of several tech-focused groups, including Chamber of Progress, NetChoice and Silicon Valley Leadership Group, sent an open letter to Newsom urging him to veto the bill.
California's legislature has approved a groundbreaking bill to regulate large AI models, setting the stage for potential nationwide standards. The bill, if signed into law, would require companies to evaluate AI systems for risks and implement mitigation measures.
In a significant move toward regulating artificial intelligence, California's legislature has approved a landmark bill aimed at overseeing large AI models. The legislation, which cleared the state Assembly in a 41-9 vote, is now headed to Governor Gavin Newsom's desk for final approval.
The proposed law, formally the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), would require companies developing large AI models to:
- test their models and publicly disclose their safety protocols;
- build in the ability to quickly and fully shut down an unsafe model (a "kill switch");
- protect models against unsafe post-training modifications;
- submit to third-party audits; and
- provide protections for whistleblowers.
These requirements would apply to AI models that cost more than $100 million to train.
If signed into law, this bill would have far-reaching implications for tech giants and AI developers. Companies like OpenAI, Google, and Anthropic, which are at the forefront of AI development, would need to adapt their practices to comply with the new regulations.
Proponents of the bill argue that it strikes a balance between fostering innovation and ensuring public safety. Sen. Scott Wiener, who authored the legislation, emphasized the need for guardrails to protect against potential harms while allowing for technological advancement.
However, the bill has faced criticism from some quarters. The California Chamber of Commerce and TechNet, a network of tech executives, have expressed concerns that the legislation could stifle innovation and create compliance challenges for businesses.
As the home to many leading tech companies, California's move could set a precedent for AI regulation across the United States and potentially influence global standards. The bill's passage comes amid growing calls for AI oversight, with the European Union also working on comprehensive AI regulations.
With the bill now awaiting Governor Newsom's signature, all eyes are on California to see if it will indeed become the first U.S. state to implement such comprehensive AI regulations. If signed, the law would take effect in 2025, giving companies time to prepare for compliance.
Reference
[5] IEEE Spectrum: "California's 'AI Safety' Bill Will Have Global Effects"