Curated by THEOUTPOST
On Sat, 31 Aug, 8:02 AM UTC
3 Sources
[1]
Elon Musk-Backed California AI Safety Bill SB 1047 Has 57% Chance Of Being Signed By Gavin Newsom, Polymarket Shows
Major tech firms, including Alphabet and Microsoft, oppose the bill, fearing it could stifle innovation in California. Prediction market Polymarket indicates a 57% likelihood that Governor Gavin Newsom will sign the controversial AI safety bill SB 1047, up seven percentage points from previous estimates.

What Happened: The bill, which has sparked intense debate in the tech industry, is now awaiting Governor Gavin Newsom's signature after passing through the California legislature. SB 1047, introduced by Democratic Senator Scott Wiener, mandates safety testing for advanced AI models with development costs exceeding $100 million or requiring substantial computing power. The bill authorizes the state attorney general to take legal action against non-compliant developers, particularly in cases of potential threats such as AI systems compromising government infrastructure.

The legislation has divided tech industry leaders. Tesla and SpaceX CEO Elon Musk, who also founded xAI, has publicly endorsed the bill, calling it "a tough call" but necessary. Conversely, tech giants like Alphabet, Microsoft, and Meta Platforms have expressed concerns about the bill's potential to stifle innovation and deter AI companies from operating in California. OpenAI, the company behind ChatGPT, argues that AI regulation should be addressed at the federal level to avoid an uncertain legal landscape. However, two former OpenAI employees have criticized the company's stance, warning of "catastrophic harm to society" without proper safety precautions.

The Polymarket data, which shows $16,204 in bet volume as of Aug. 30, reflects the ongoing uncertainty surrounding the bill's fate. The market currently prices "Yes" shares at 59 cents and "No" shares at 44 cents, indicating a slight lean towards the bill being signed (a short sketch of the arithmetic appears at the end of this article).

As the AI industry continues to evolve rapidly, the outcome of SB 1047 could have far-reaching implications for AI development and regulation across the United States.

What's Next: This topic is likely to be a key point of discussion at Benzinga's upcoming Future of Digital Assets event on Nov. 19, where industry experts and policymakers are expected to debate the balance between innovation and safety in the AI sector.
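On the arithmetic behind the headline figure: in a binary prediction market, each side's price in cents roughly tracks its implied probability, but the quoted "Yes" and "No" prices above sum to 103 cents because each side comes from a separate order book with its own spread, so the prices need normalizing. A minimal sketch of that normalization; the function is illustrative and not part of any Polymarket API:

```python
def implied_probability(yes_cents: float, no_cents: float) -> float:
    """Normalize quoted binary-market prices into an implied probability.

    Quoted "Yes"/"No" prices can sum to more than 100 cents because each
    side is taken from a separate order book with its own spread.
    """
    return yes_cents / (yes_cents + no_cents)

# Prices quoted in the article as of Aug. 30
p = implied_probability(yes_cents=59, no_cents=44)
print(f"Implied chance the bill is signed: {p:.1%}")  # 57.3% -- consistent with the 57% headline figure
```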
[2]
Looking Ahead as California Passes Landmark AI Safety Bill
California's newly passed artificial intelligence (AI) safety bill could dramatically alter the landscape of AI development and deployment, with far-reaching implications for tech giants, eCommerce platforms and startups alike as the industry grapples with stringent new regulations aimed at mitigating AI risks.

The legislation, known as Senate Bill 1047, now awaits a final procedural vote before reaching Gov. Gavin Newsom's desk. If Newsom signs the bill, it could shape the future of AI development in the state.

The bill introduces stringent safety testing requirements for AI companies developing models with training costs exceeding $100 million or those utilizing substantial computing power. Additionally, it mandates that AI developers in California establish fail-safe mechanisms to shut down their models in case of emergencies or unforeseen consequences.

The legislative action has ignited debate within the tech community, with industry giants vehemently opposing the measure. Critics argue that the bill could drive AI firms out of California, stifling innovation and hampering the state's position as a global tech leader.

"The California legislature is passing laws based on science fiction fantasies of what AI could look like," Chamber of Progress Senior Tech Policy Director Todd O'Boyle said in a statement after the vote. "This bill has more in common with Blade Runner or The Terminator than the real world. We shouldn't hamstring California's leading economic sector over a theoretical scenario. Lawmakers should focus on addressing real-life bad actors and harms while empowering the best minds in California to continue innovating."

Industry leaders warn that the departure of AI companies could lead to a significant brain drain and economic downturn in Silicon Valley and beyond. However, the bill has found an unlikely ally in Elon Musk, the CEO of Tesla and owner of X. Musk's public support for the legislation on his social media platform has added a layer of complexity to the industry's response, highlighting the divide even among tech leaders on how to approach AI regulation.

The implications of this bill extend into the realm of eCommerce, where AI has become an integral part of operations. Industry experts warn that the legislation could have widespread consequences for online retailers and platforms that rely heavily on AI for personalized shopping experiences, dynamic pricing and recommendation engines. Critics point to the bill's broad language and lack of granularity as potential pitfalls, particularly for smaller players and startups in the eCommerce space. They argue that the mandatory safety testing requirements could create insurmountable barriers for innovative companies leveraging AI to enhance customer experiences and streamline operations.

The tech industry is now at a crossroads, grappling with the tension between innovation and regulation. Proponents of the bill argue that it is necessary to mitigate the potentially catastrophic risks associated with unchecked AI development. They contend that establishing a regulatory framework now will prevent more severe restrictions in the future and help build public trust in AI technologies.

Looking ahead, the tech industry faces a period of uncertainty and adaptation. If the new safety testing and shutdown requirements are signed into law, companies must quickly pivot to meet them. This could temporarily slow AI development as firms recalibrate their processes and redesign their AI models to comply with the new regulations. However, some industry analysts see a silver lining, suggesting that the regulations could foster a more robust and trustworthy AI ecosystem. By setting clear safety standards and accountability measures, the bill could help alleviate public concerns about AI's potential risks and pave the way for broader acceptance and adoption of AI technologies across various sectors.
[3]
California AI bill SB 1047 aims to prevent AI disasters, but Silicon Valley warns it will cause one
Update: California's Appropriations Committee passed SB 1047 with significant amendments on Thursday, August 15.

Outside of sci-fi films, there's no precedent for AI systems killing people or being used in massive cyberattacks. However, some lawmakers want to implement safeguards before bad actors make that dystopian future a reality. A California bill, known as SB 1047, tries to stop real-world disasters caused by AI systems before they happen. It passed the state Senate in August, and now awaits an approval or veto from California Governor Gavin Newsom.

While this seems like a goal we can all agree on, SB 1047 has drawn the ire of Silicon Valley players large and small, including venture capitalists, big tech trade groups, researchers and startup founders. A lot of AI bills are flying around the country right now, but California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has become one of the most controversial. Here's why.

What would SB 1047 do?

SB 1047 tries to prevent large AI models from being used to cause "critical harms" against humanity. The bill gives examples of "critical harms," such as a bad actor using an AI model to create a weapon that results in mass casualties, or instructing one to orchestrate a cyberattack causing more than $500 million in damages (for comparison, the CrowdStrike outage is estimated to have caused upwards of $5 billion). The bill makes developers -- that is, the companies that develop the models -- liable for implementing sufficient safety protocols to prevent outcomes like these.

What models and companies are subject to these rules?

SB 1047's rules would apply only to the world's largest AI models: ones that cost at least $100 million and use 10^26 floating-point operations (FLOPs) during training -- a huge amount of compute, though OpenAI CEO Sam Altman said GPT-4 cost about this much to train. These thresholds could be raised as needed. Very few companies today have developed public AI products large enough to meet those requirements, but tech giants such as OpenAI, Google, and Microsoft are likely to very soon. AI models -- essentially, massive statistical engines that identify and predict patterns in data -- have generally become more accurate as they've grown larger, a trend many expect to continue. Mark Zuckerberg recently said the next generation of Meta's Llama will require 10x more compute, which would put it under the authority of SB 1047.

For open source models and their derivatives, the bill makes the original developer responsible unless another developer spends $10 million or more creating a derivative of the original model.

The bill also requires a safety protocol to prevent misuses of covered AI products, including an "emergency stop" button that shuts down the entire AI model. Developers must also create testing procedures that address risks posed by AI models, and must hire third-party auditors annually to assess their AI safety practices. The result must be "reasonable assurance" that following these protocols will prevent critical harms -- not absolute certainty, which is of course impossible to provide.

Who would enforce it, and how?

A new California agency, the Board of Frontier Models, would oversee the rules. Every new public AI model that meets SB 1047's thresholds must be individually certified with a written copy of its safety protocol.
The Board of Frontier Models would be governed by nine people, including representatives from the AI industry, open source community and academia, appointed by California's governor and legislature. The board would advise California's attorney general on potential violations of SB 1047 and issue guidance to AI model developers on safety practices.

A developer's chief technology officer must submit an annual certification to the board assessing its AI model's potential risks, the effectiveness of its safety protocol, and how the company is complying with SB 1047. Similar to breach notifications, if an "AI safety incident" occurs, the developer must report it to the board within 72 hours of learning about the incident.

If a developer's safety measures are found insufficient, SB 1047 allows California's attorney general to bring an injunctive order against the developer. That could mean the developer would have to cease operating or training its model. If an AI model is actually found to be used in a catastrophic event, California's attorney general can sue the company. For a model costing $100 million to train, penalties could reach up to $10 million for a first violation and $30 million for subsequent violations, and that penalty rate scales as AI models become more expensive (a short sketch of the scaling follows at the end of this section).

Lastly, the bill includes whistleblower protections for employees who try to disclose information about an unsafe AI model to California's attorney general.
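Note that the dollar figures above are consistent with caps set as flat percentages of training cost: $10 million and $30 million on a $100 million model correspond to 10% and 30%. A minimal sketch of that scaling, assuming the caps really do apply as simple percentages (the function below is illustrative, not language from the bill):

```python
def penalty_cap(training_cost_usd: float, prior_violations: int = 0) -> float:
    """Cap on civil penalties, assuming the article's figures are flat
    percentages of training cost: 10% for a first violation, 30% after.
    """
    rate = 0.10 if prior_violations == 0 else 0.30
    return rate * training_cost_usd

# The article's example: a model costing $100 million to train
print(penalty_cap(100e6))      # 10000000.0  -> up to $10M, first violation
print(penalty_cap(100e6, 1))   # 30000000.0  -> up to $30M, subsequent violations

# How the cap scales: a hypothetical $1 billion training run
print(penalty_cap(1e9), penalty_cap(1e9, 1))  # 100000000.0 300000000.0
```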
What do proponents say?

California State Senator Scott Wiener, who authored the bill and represents San Francisco, tells TechCrunch that SB 1047 is an attempt to learn from past policy failures with social media and data privacy, and to protect citizens before it's too late. "We have a history with technology of waiting for harms to happen, and then wringing our hands," said Wiener. "Let's not wait for something bad to happen. Let's just get out ahead of it."

Even if a company trains a $100 million model in Texas, or for that matter France, it will be covered by SB 1047 as long as it does business in California. Wiener says Congress has done "remarkably little legislating around technology over the last quarter century," so he thinks it's up to California to set a precedent here. When asked whether he's met with OpenAI and Meta on SB 1047, Wiener says "we've met with all the large labs."

Two AI researchers sometimes called the "godfathers of AI," Geoffrey Hinton and Yoshua Bengio, have thrown their support behind the bill. They belong to a faction of the AI community concerned about the dangerous, doomsday scenarios that AI technology could cause. These "AI doomers" have existed for a while in the research world, and SB 1047 could codify some of their preferred safeguards into law. Another group sponsoring SB 1047, the Center for AI Safety, wrote an open letter in May 2023 asking the world to prioritize "mitigating the risk of extinction from AI" as seriously as pandemics or nuclear war.

"This is in the long-term interest of industry in California and the US more generally because a major safety incident would likely be the biggest roadblock to further advancement," said Dan Hendrycks, director of the Center for AI Safety, in an email to TechCrunch.

Recently, Hendrycks' own motivations have been called into question. In July, he publicly launched a startup, Gray Swan, which builds "tools to help companies assess the risks of their AI systems," according to a press release. Following criticisms that Hendrycks' startup could stand to gain if the bill passes, potentially as one of the auditors SB 1047 requires developers to hire, he divested his equity stake in Gray Swan. "I divested in order to send a clear signal," said Hendrycks in an email to TechCrunch. "If the billionaire VC opposition to commonsense AI safety wants to show their motives are pure, let them follow suit."

After several of Anthropic's suggested amendments were added to SB 1047, CEO Dario Amodei issued a letter saying the bill's "benefits likely outweigh its costs." It's not an endorsement, but it's a lukewarm signal of support. Shortly after that, Elon Musk signaled he was in favor of the bill.

What do opponents say?

A growing chorus of Silicon Valley players oppose SB 1047. Hendrycks' "billionaire VC opposition" likely refers to a16z, the venture firm founded by Marc Andreessen and Ben Horowitz, which has strongly opposed the bill. In early August, the firm's chief legal officer, Jaikumar Ramaswamy, submitted a letter to Senator Wiener claiming the bill "will burden startups because of its arbitrary and shifting thresholds," creating a chilling effect on the AI ecosystem. As AI technology advances, it will get more expensive, meaning that more startups will cross the $100 million threshold and be covered by SB 1047; a16z says several of its startups already spend that much training models.

Fei-Fei Li, often called the godmother of AI, broke her silence on SB 1047 in early August, writing in a Fortune column that the bill will "harm our budding AI ecosystem." While Li is a well-regarded pioneer in AI research from Stanford, she also reportedly created an AI startup called World Labs in April, valued at a billion dollars and backed by a16z.

She joins influential AI academics such as fellow Stanford researcher Andrew Ng, who called the bill "an assault on open source" during a speech at a Y Combinator event in July. Open source models may create additional risk for their creators, since, like any open software, they are more easily modified and deployed for arbitrary and potentially malicious purposes.

Meta's chief AI scientist, Yann LeCun, said in a post on X that SB 1047 would hurt research efforts and is based on an "illusion of 'existential risk' pushed by a handful of delusional think-tanks." Meta's Llama LLM is one of the foremost examples of an open source LLM.

Startups are not happy about the bill either. Jeremy Nixon, CEO of AI startup Omniscience and founder of AGI House SF, a hub for AI startups in San Francisco, worries that SB 1047 will crush his ecosystem. He argues that bad actors should be punished for causing critical harms, not the AI labs that openly develop and distribute the technology. "There is a deep confusion at the center of the bill, that LLMs can somehow differ in their levels of hazardous capability," said Nixon. "It's more than likely, in my mind, that all models have hazardous capabilities as defined by the bill."

OpenAI opposed SB 1047 in late August, arguing that national security measures related to AI models should be regulated at the federal level. It has supported a federal bill that would do so.

But Big Tech, which the bill directly focuses on, is panicked about SB 1047 as well. The Chamber of Progress -- a trade group representing Google, Apple, Amazon and other Big Tech giants -- issued an open letter opposing the bill, saying SB 1047 restrains free speech and "pushes tech innovation out of California."
Last year, Google CEO Sundar Pichai and other tech executives endorsed the idea of federal AI regulation.

U.S. Congressman Ro Khanna, who represents Silicon Valley, released a statement opposing SB 1047 in August. He expressed concerns that the bill "would be ineffective, punishing of individual entrepreneurs and small businesses, and hurt California's spirit of innovation." He has since been joined by Speaker Emerita Nancy Pelosi and the United States Chamber of Commerce, who have also said the bill would hurt innovation.

Silicon Valley traditionally bristles when California sets broad tech regulation like this. In 2019, Big Tech pulled a similar card when another state privacy bill, the California Consumer Privacy Act, threatened to change the tech landscape. Silicon Valley lobbied against that bill, and months before it went into effect, Amazon founder Jeff Bezos and 50 other executives wrote an open letter calling for a federal privacy bill instead.

What happens next?

SB 1047 now sits on California Governor Gavin Newsom's desk, where he will ultimately decide whether to sign it into law by the end of September. Wiener says he has not spoken to Newsom about the bill and does not know his position.

The bill would not take effect immediately: the Board of Frontier Models is set to be formed in 2026. And if the bill is signed, it is very likely to face legal challenges before then, perhaps from some of the same groups speaking up about it now.
California's AI Safety Bill SB 1047, backed by Elon Musk, aims to regulate AI development. The bill has garnered support from some tech leaders but faces opposition from Silicon Valley, highlighting the complex debate surrounding AI regulation.
California has taken a significant step towards regulating artificial intelligence with the passage of the AI Safety Bill SB 1047. This landmark legislation, backed by tech mogul Elon Musk, aims to establish safety standards for AI development and deployment in the state [1]. The bill, which prediction market Polymarket gives a 57% chance of being signed into law by Governor Gavin Newsom, has sparked intense debate within the tech community.
The bill proposes several measures to ensure AI safety:

- Mandatory safety testing for AI models with training costs above $100 million or requiring substantial computing power
- A fail-safe "emergency stop" mechanism allowing developers to shut down their models
- Annual third-party audits of developers' AI safety practices
- Oversight by a new state agency, the Board of Frontier Models, with enforcement by the California attorney general
- Whistleblower protections for employees who report unsafe AI models
These provisions aim to create a framework for responsible AI development and protect consumers from potential harm caused by advanced AI systems.
While the bill has garnered support from some tech leaders, including Elon Musk, it has faced significant opposition from Silicon Valley. Supporters argue that the legislation is necessary to prevent AI-related disasters and ensure public safety [3]. They believe that proactive regulation is crucial as AI technology continues to advance rapidly.
However, critics from the tech industry warn that the bill could stifle innovation and drive AI development out of California. They argue that the proposed regulations are too broad and may hinder the state's competitive edge in the AI sector [3].
If signed into law, SB 1047 could have far-reaching implications for the AI industry:

- Developers of the largest models would need to build compliance, testing and audit processes, potentially slowing development in the short term
- Companies might move AI work out of California, though covered models remain subject to the bill as long as their developers do business in the state
- Clear safety standards and accountability measures could build public trust and foster a more robust AI ecosystem
- Penalties for violations would scale with training costs, reaching tens of millions of dollars for the largest models
The bill's passage could also set a precedent for other states and countries considering AI regulation, potentially leading to a more standardized approach to AI safety globally.
As the debate continues, all eyes are on Governor Gavin Newsom, who will ultimately decide the fate of SB 1047. The outcome of this legislation could shape the future of AI development not only in California but across the United States and beyond [1].
Regardless of the bill's fate, it has undoubtedly sparked an important conversation about the balance between innovation and safety in the rapidly evolving field of artificial intelligence. As AI continues to play an increasingly significant role in our lives, finding this balance will be crucial for ensuring responsible development and deployment of these powerful technologies.
Reference

A groundbreaking artificial intelligence regulation bill has passed the California legislature and now awaits Governor Gavin Newsom's signature. The bill, if signed, could set a precedent for AI regulation in the United States. (14 Sources)

California's proposed AI safety bill, SB 1047, has ignited a fierce debate in the tech world. While some industry leaders support the legislation, others, including prominent AI researchers, argue it could stifle innovation and favor large tech companies. (3 Sources)

California's legislature has approved a groundbreaking bill to regulate large AI models, setting the stage for potential nationwide standards. The bill, if signed into law, would require companies to evaluate AI systems for risks and implement mitigation measures. (7 Sources)

California's AI safety bill, SB 1047, moves forward with significant amendments following tech industry input. The bill aims to regulate AI development while balancing innovation and safety concerns. (10 Sources)

California has passed a controversial AI safety bill, SB 1047, aimed at regulating artificial intelligence. The bill introduces new requirements for AI companies and has sparked debates about innovation and safety. (3 Sources)