Curated by THEOUTPOST
On Thu, 12 Sept, 8:03 AM UTC
3 Sources
[1]
The AI bill driving a wedge through Silicon Valley
California's bid to regulate artificial intelligence has riven Silicon Valley, as opponents warn that the legal framework could undermine competition and America's position as the world leader in the technology.

Having waged a fierce battle to amend or water down the bill as it passed through California's legislature, executives at companies including OpenAI and Meta are waiting anxiously to see if Gavin Newsom, the state's Democratic governor, will sign it into law. He has until September 30 to decide.

California is the heart of the burgeoning AI industry, and with no federal law to regulate the technology across the US -- let alone a uniform global standard -- the ramifications would extend far beyond the state.

"The rest of the world is certainly paying close attention to what is happening in California and in the US more broadly right now, and the outcome there will most likely have repercussions on other nations' regulatory efforts," Yoshua Bengio, a professor at the University of Montreal and a "godfather" of AI, told the Financial Times.

The rapid development of AI tools that can generate humanlike responses to questions has magnified perceived risks around the technology, ranging from legal disputes such as copyright infringement to misinformation and a proliferation of deepfakes. Some even think it could pose a threat to humanity.

US President Joe Biden issued an executive order last year aiming to set national standards for AI safety, but the US Congress has not made any progress in passing national laws. Liberal California has often jumped in to regulate on issues where the federal government has lagged.

Artificial intelligence is now in focus with California's Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, which was put forward by state senator Scott Wiener. Of the various bills filed in different states, the one in California is the most likely to have a real impact, since the state is at the centre of the AI boom, home to top companies including OpenAI, Anthropic, Meta and Google.

"The big AI companies which have been the most vocal on this issue are currently locked in their race for market share and profit maximisation, which can lead to cutting corners when it comes to safety, and that's why we need some rules for those leading this race," said Bengio.

Wiener has said his bill "requires only the largest AI developers to do what each and every one of them has repeatedly committed to do: perform basic safety testing on massively powerful AI models".

The bill would require developers building large models to assess whether they are "reasonably capable of causing or materially enabling a critical harm", ranging from malicious use or theft to the creation of a biological weapon. Companies would then be expected to take reasonable safeguards against those identified risks.

Developers would have to build a "kill switch" into any new models over a certain size in case they are misused or go rogue. They would also be obliged to draft a safety report before training a new model and to be more transparent: they would have to "report each artificial intelligence safety incident" to the state's attorney-general and undertake a third-party audit every year to ensure compliance.

The bill is directed at models that cost more than $100mn to train, roughly the amount required to train today's top models. But that is a fast-moving target: Anthropic chief executive Dario Amodei has predicted the next group of cutting-edge models will cost $1bn to train, and $10bn by 2026.

The bill would apply to all companies doing business in California, regardless of where they are based, which would effectively cover every company currently capable of developing top AI models, said Bengio.

It would introduce civil penalties of up to 10 per cent of the cost of training a model against developers whose tools cause death, theft or harm to property. It would also create liabilities for companies offering computing resources to train those models and for auditing firms, making them responsible for gathering and retaining detailed information about customers' identities and intentions. Failure to do so could result in fines of up to $10mn.

Wiener and his colleagues say there is strong public support for new AI guardrails. He has also won qualified support from leading AI start-up Anthropic and Elon Musk, as well as SAG-AFTRA, an actors' union, and two women's groups. On Monday, 100 employees at top AI companies including OpenAI, xAI and Google DeepMind signed a letter calling on Newsom to sign the bill.

"It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks," they wrote.

Critics -- including academics such as Stanford AI professor Fei-Fei Li, venture capital firm Andreessen Horowitz and start-up accelerator Y Combinator -- argue that the bill would hobble early-stage companies and open-source developers who publicly share the code underlying their models.

SB 1047 would "slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere", warned OpenAI chief strategy officer Jason Kwon in a letter to Wiener last month. He echoed one of the most common complaints: that Wiener was meddling in an area that should be dealt with at the federal level.

Opponents also say the bill would stifle innovation by piling onerous requirements on to developers and making them accountable for the use of their AI models by bad actors. It legislates for risks that do not yet exist, they add.

"Philosophically, anticipating the consequences of how people are going to use your code in software is a very difficult problem. How will people use it, how will you anticipate that somebody will do harm? It's a great inhibitor. It's a very slippery slope," said Dario Gil, director of research at IBM.

Dan Hendrycks, director of the Center for AI Safety (CAIS), which played a critical role in formulating the bill, said opponents "want governments to give them a blank cheque to build and deploy whatever technologies they want, regardless of risk or harm to society". Hendrycks, who is also an adviser to Musk's xAI, has come under fire from critics who cast CAIS as a fringe outfit overly concerned about existential risks from AI.

Opponents have also expressed concern that CAIS lobbied for influence over a "Board of Frontier Models" that the bill would create, staffed with nine directors drawn from industry and academia and tasked with updating regulations around AI models and ensuring compliance. Wiener rejected those arguments as "a conspiracy theory".

"The opposition tried to paint anyone supporting the bill as 'doomers'," Wiener said. "They said these were science fiction risks; that we were focused on the Terminator. We're not, we're focused on very real risks like shutting down the electric grid, or the banking system, or creating a chemical or biological weapon."

Wiener said he and his team had spent the past 18 months engaging with "anyone that would meet with us" to discuss the bill, including Li and partners at Andreessen and Y Combinator. One of their concerns was that requiring a kill switch for open-source models would prevent other developers from modifying or building on them for fear they might be turned off at a moment's notice. That could be fatal for young companies and academia, which rely on cheaper or free-to-access open-source models.

Wiener's bill has since been amended to exclude open-source models that have been fine-tuned beyond a certain level by third parties; such models will also not be required to have a kill switch. Some of the bill's original strictures have been moderated too, including narrowing the scope for civil penalties and limiting the number of models covered by the new rules.

SB 1047 easily passed the state's legislature. Now Newsom has to decide whether to sign the bill, allow it to become law without his signature, or veto it. If he does veto, California's legislature could override that with a two-thirds-majority vote. But, according to a spokesperson for Wiener, there is virtually no chance of that happening: the last time a California governor's veto was overridden was in 1980.

The governor is in a tough spot, given the importance of the tech industry to his state. But letting AI grow unchecked could be even more problematic.

"I would love for this to be federal legislation: if Congress were to act in this space and pass a strong AI safety bill I'd be happy to pack up and go home," said Wiener. "But the sad reality is that while Congress has been very, very successful on healthcare, infrastructure and climate, it's really struggled with technology regulation... Until Congress acts, California has an obligation to lead because we are the heartland of the tech industry."
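The bill leaves the form of the "kill switch" abstract; it only requires the capability to promptly shut a covered model down. Purely as an illustration of what such a deployment-side control might look like in practice (a minimal sketch, not drawn from the bill's text or any real company's serving stack; the flag-file path and all names below are hypothetical), consider a serving function that refuses inference once an operator sets a shutdown flag:

```python
# Illustrative sketch of a deployment-side "full shutdown" control.
# Nothing here comes from SB 1047 or a real serving system; the flag-file
# mechanism, path, and function names are all hypothetical.
import os
import sys

SHUTDOWN_FLAG = "/etc/model/shutdown_flag"  # hypothetical file an operator can create

def model_is_enabled() -> bool:
    """Serving is allowed only while the shutdown flag file is absent."""
    return not os.path.exists(SHUTDOWN_FLAG)

def run_inference(prompt: str) -> str:
    """Stand-in for the actual model call."""
    return f"(stub response to: {prompt})"

def serve_request(prompt: str) -> str:
    if not model_is_enabled():
        # Once the operator triggers shutdown, stop the whole process
        # rather than serving any further requests.
        sys.exit("model disabled: shutdown flag set")
    return run_inference(prompt)

if __name__ == "__main__":
    print(serve_request("hello"))  # serves normally until the flag file appears
```

The open-source objection described above follows directly from a design like this: whoever controls the flag controls every downstream user of the model, which is why the amended bill exempts sufficiently fine-tuned third-party derivatives from the requirement.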
[2]
AI safety showdown: Yann LeCun slams California's SB 1047 as Geoffrey Hinton backs new regulations
Yann LeCun, chief AI scientist at Meta, publicly rebuked supporters of California's contentious AI safety bill, SB 1047, on Wednesday. His criticism came just one day after Geoffrey Hinton, often referred to as the "godfather of AI," endorsed the legislation. This stark disagreement between two pioneers in artificial intelligence highlights the deep divisions within the AI community over the future of regulation.

California's legislature has passed SB 1047, which now awaits Governor Gavin Newsom's signature. The bill has become a lightning rod for debate about AI regulation. It would establish liability for developers of large-scale AI models that cause catastrophic harm if they fail to take appropriate safety measures. The legislation applies only to models costing at least $100 million to train and operating in California, the world's fifth-largest economy.

The battle of the AI titans: LeCun vs. Hinton on SB 1047

LeCun, known for his pioneering work in deep learning, argued that many of the bill's supporters have a "distorted view" of AI's near-term capabilities. "The distortion is due to their inexperience, naïveté on how difficult the next steps in AI will be, wild overestimates of their employer's lead and their ability to make fast progress," he wrote on Twitter, now known as X.

His comments were a direct response to Hinton's endorsement of an open letter signed by over 100 current and former employees of leading AI companies, including OpenAI, Google DeepMind, and Anthropic. The letter, submitted to Governor Newsom on September 9th, urged him to sign SB 1047 into law, citing potential "severe risks" posed by powerful AI models, such as expanded access to biological weapons and cyberattacks on critical infrastructure.

This public disagreement between two AI pioneers underscores the complexity of regulating a rapidly evolving technology. Hinton, who left Google last year to speak more freely about AI risks, represents a growing contingent of researchers who believe that AI systems could soon pose existential threats to humanity. LeCun, on the other hand, consistently argues that such fears are premature and potentially harmful to open research.

Inside SB 1047: The controversial bill reshaping AI regulation

The debate surrounding SB 1047 has scrambled traditional political alliances. Supporters include Elon Musk, despite his previous criticism of the bill's author, State Senator Scott Wiener. Opponents include Speaker Emerita Nancy Pelosi and San Francisco Mayor London Breed, along with several major tech companies and venture capitalists.

Anthropic, an AI company that initially opposed the bill, changed its stance after several amendments were made, stating that the bill's "benefits likely outweigh its costs." This shift highlights the evolving nature of the legislation and the ongoing negotiations between lawmakers and the tech industry.

Critics of SB 1047 argue that it could stifle innovation and disadvantage smaller companies and open-source projects. Andrew Ng, founder of DeepLearning.AI, wrote in TIME magazine that the bill "makes the fundamental mistake of regulating a general purpose technology rather than applications of that technology." Proponents, however, insist that the potential risks of unregulated AI development far outweigh these concerns. They argue that the bill's focus on models with training budgets exceeding $100 million ensures that it primarily affects large, well-resourced companies capable of implementing robust safety measures.

Silicon Valley divided: How SB 1047 is splitting the tech world

The involvement of current employees from companies opposing the bill adds another layer of complexity to the debate. It suggests internal disagreements within these organizations about the appropriate balance between innovation and safety.

As Governor Newsom considers whether to sign SB 1047, he faces a decision that could shape the future of AI development not just in California, but potentially across the United States. With the European Union already moving forward with its own AI Act, California's decision could influence whether the U.S. takes a more proactive or hands-off approach to AI regulation at the federal level.

The clash between LeCun and Hinton serves as a microcosm of the larger debate surrounding AI safety and regulation. It highlights the challenge policymakers face in crafting legislation that addresses legitimate safety concerns without unduly hampering technological progress. As the AI field continues to advance at a breakneck pace, the outcome of this legislative battle in California may set a crucial precedent for how societies grapple with the promises and perils of increasingly powerful artificial intelligence systems. The tech world, policymakers, and the public alike will be watching closely as Governor Newsom weighs his decision in the coming weeks.
[3]
Top AI companies push for AI safety law in California to mitigate risks, despite criticism from startups - MEDIANAMA
Multiple current and former employees at AI companies like OpenAI, Google DeepMind, Anthropic, Meta, and xAI have expressed their support for the California AI Safety Bill in a letter addressed to the state's Governor, Gavin Newsom. The state Assembly and Senate passed the Bill in August; it now awaits Newsom's signature to become law.

If signed, the law would prohibit large-scale and powerful AI systems from aiding in the development of chemical, biological, radiological, or nuclear weapons. It would also require "frontier" models to take basic precautions, such as pre-deployment safety testing and red-teaming. The employees said in the letter, "despite the inherent uncertainty in regulating advanced technology, we believe SB 1047 represents a meaningful step forward."

The Bill applies to developers of AI models trained using more than 10^26 floating-point operations of computing power at a cost of over $100 million. The California Government would establish a "Frontier Model Division" to classify and monitor these "frontier" models. The Bill also gives California's Attorney General the power to take legal action against AI developers if their model, or their negligence, is deemed to pose an imminent threat to public safety.

A developer must also implement the capability to promptly shut down the entire model in case of a safety or security threat, and must report each safety incident to the Division within 72 hours. The Bill further calls for transparent pricing and prohibits price discrimination for AI models, to ensure competition in the AI landscape.

The Bill would create protections for whistleblowers, allowing employees of developers, and those assisting with a model's development, to report wrongdoing to the Division without consequence. Companies are required to have a "reasonable internal process" through which an employee can reveal violations or safety concerns anonymously.

Startups have criticised the Bill for restricting innovation. In July, startup accelerator Y Combinator (YC) and venture capital firm Andreessen Horowitz criticised the Bill in a letter, stating that it introduced new liability for AI developers and exposed them to perjury charges for making unverified statements about their models. They also argued that a mandated "kill switch" would stall efforts to build open-source AI. However, California Senator Scott Wiener, the sponsor of the Bill, refuted these claims, emphasising that the Bill does not apply to startups and calling the letter "inaccurate, including some highly inflammatory distortions."

Further, AGI House, a community of AI founders, builders, and researchers, shared its criticism of the Bill, arguing that its mandate to monitor AI models before deployment could violate US free-speech protections. It cited legal precedents in the US that treated computer code as free speech and argued that the same should apply to the weights of trained neural networks.
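To put the 10^26-FLOP threshold in perspective, here is a back-of-envelope sketch using the widely cited approximation that training a transformer costs roughly 6 x N x D floating-point operations, where N is the parameter count and D is the number of training tokens. This is a rough heuristic, not anything prescribed by the Bill, and the model sizes and token counts below are illustrative assumptions rather than figures for any real system:

```python
# Back-of-envelope check against SB 1047's 10^26-FLOP training threshold.
# Uses the common approximation: training FLOPs ~= 6 * N * D, where
# N = parameter count and D = training tokens. All run sizes below are
# illustrative assumptions, not figures from the Bill or any real model.

THRESHOLD_FLOPS = 1e26  # compute threshold named in the Bill

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6 * params * tokens

def crosses_threshold(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) >= THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical training runs: (label, parameters, training tokens)
    runs = [
        ("7B model, 2T tokens",    7e9,   2e12),   # ~8.4e22 FLOPs
        ("70B model, 15T tokens",  70e9,  15e12),  # ~6.3e24 FLOPs
        ("400B model, 45T tokens", 400e9, 45e12),  # ~1.1e26 FLOPs
    ]
    for label, n, d in runs:
        flag = "over" if crosses_threshold(n, d) else "under"
        print(f"{label}: ~{training_flops(n, d):.2e} FLOPs -> {flag} the 1e26 threshold")
```

By this rule of thumb, only very large frontier-scale runs approach the threshold, which is consistent with the Bill sponsors' claim that it targets only the largest developers.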
California's proposed AI safety bill, SB 1047, has ignited a fierce debate in the tech world. While some industry leaders support the legislation, others, including prominent AI researchers, argue it could stifle innovation and favor large tech companies.
California's Senate Bill 1047 (SB 1047), aimed at regulating artificial intelligence (AI) development, has become the center of a heated debate within the tech industry. The proposed legislation, which seeks to establish safety standards for AI systems, has drawn both support and criticism from various quarters [1].

Several prominent backers have thrown their weight behind SB 1047, including Anthropic, Elon Musk, and more than 100 employees at leading AI companies such as OpenAI, Google DeepMind, and xAI [1][3]. These supporters argue that the bill is necessary to mitigate potential risks associated with advanced AI systems, given the rapid development of AI technologies and the need for responsible innovation [3].

However, the bill has faced significant opposition from prominent AI researchers and smaller tech companies. Yann LeCun, Chief AI Scientist at Meta, has been particularly vocal in his criticism. LeCun argues that SB 1047 could stifle innovation and disproportionately benefit large tech corporations at the expense of startups and academic research [2].
SB 1047 proposes several measures to ensure AI safety, including:

- pre-deployment safety testing and assessment of whether a model could cause or enable "critical harm" [1];
- a "kill switch" allowing a model to be promptly shut down if it is misused or goes rogue [1];
- reporting of each AI safety incident to the state's attorney-general [1];
- annual third-party audits to verify compliance [1].

Critics argue that these requirements could be overly burdensome for smaller companies and researchers, potentially hampering progress in the field [1].

The debate has also revealed a split among AI experts. While Geoffrey Hinton, often referred to as the "godfather of AI," supports the need for new regulations, others like LeCun argue that such fears are premature and that the bill could hamper open research [2].
The outcome of this debate in California could have far-reaching consequences for AI regulation worldwide. As a hub of technological innovation, California's approach to AI safety could set a precedent for other regions considering similar legislation [3].

As the discussion continues, stakeholders are grappling with the challenge of balancing the need for innovation with the imperative of ensuring AI safety. The debate surrounding SB 1047 highlights the complex issues facing policymakers as they attempt to regulate a rapidly evolving technology landscape [1].
Reference
[1] The AI bill driving a wedge through Silicon Valley (Financial Times)
[2] AI safety showdown: Yann LeCun slams California's SB 1047 as Geoffrey Hinton backs new regulations
[3] Top AI companies push for AI safety law in California to mitigate risks, despite criticism from startups (MediaNama)