On Fri, 20 Sept, 4:04 PM UTC
2 Sources
[1]
Commentary: California's AI safety bill is under fire. Making it law is the best way to improve it
On Aug. 29, the California Legislature passed Senate Bill 1047 - the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act - and sent it to Gov. Gavin Newsom for signature. Newsom's choice, due by Sept. 30, is binary: Kill it or make it law.

Acknowledging the possible harm that could come from advanced AI, SB 1047 requires technology developers to integrate safeguards as they develop and deploy what the bill calls "covered models." The California attorney general can enforce these requirements by pursuing civil actions against parties that aren't taking "reasonable care" that 1) their models won't cause catastrophic harms, or 2) their models can be shut down in case of emergency.

Many prominent AI companies oppose the bill, either individually or through trade associations. Their objections include concerns that the definition of covered models is too inflexible to account for technological progress, that it's unreasonable to hold them responsible for harmful applications that others develop, and that the bill overall will stifle innovation and hamstring small startup companies that lack the resources to devote to compliance.

These objections are not frivolous; they merit consideration and very likely some further amendment to the bill. But the governor should sign it regardless, because a veto would signal that no regulation of AI is acceptable now, and probably not until or unless catastrophic harm occurs. That is not the right position for governments to take on such a technology.

The bill's author, Sen. Scott Wiener (D-San Francisco), engaged with the AI industry on a number of iterations of the bill before its final legislative passage. At least one major AI firm - Anthropic - asked for specific and significant changes to the text, many of which were incorporated into the final bill. Since the Legislature passed it, Anthropic's CEO has said that its "benefits likely outweigh its costs ... [although] some aspects of the bill [still] seem concerning or ambiguous."

Public evidence to date suggests that most other AI companies chose simply to oppose the bill on principle rather than engage with specific efforts to modify it. What should we make of such opposition, especially since the leaders of some of these companies have publicly expressed concerns about the potential dangers of advanced AI? In 2023, the CEOs of OpenAI and Google's DeepMind, for example, signed an open letter that compared AI's risks to those of pandemics and nuclear war.

A reasonable conclusion is that they, unlike Anthropic, oppose any kind of mandatory regulation at all. They want to reserve for themselves the right to decide when the risks of an activity, a research effort, or any deployed model outweigh its benefits. More importantly, they want those who develop applications based on their covered models to be fully responsible for risk mitigation. Recent court cases have suggested that parents who put guns in the hands of their children bear some legal responsibility for the outcome. Why should the AI companies be treated any differently?

The AI companies want the public to give them a free hand despite an obvious conflict of interest: profit-making companies should not be trusted to make decisions that might impede their profit-making prospects.

We've been here before. In November 2023, the board of OpenAI fired its CEO because it determined that, under his direction, the company was heading down a dangerous technological path.
Within several days, various stakeholders in OpenAI were able to reverse that decision, reinstating him and pushing out the board members who had advocated for his firing. Ironically, OpenAI had been specifically structured to allow the board to act as it did - despite the company's profit-making potential, the board was supposed to ensure that the public interest came first.

If SB 1047 is vetoed, anti-regulation forces will proclaim a victory that demonstrates the wisdom of their position, and they will have little incentive to work on alternative legislation. Having no significant regulation works to their advantage, and they will build on a veto to sustain that status quo.

Alternatively, the governor could make SB 1047 law, with an open invitation to its opponents to help correct its specific defects. With what they see as an imperfect law in place, the bill's opponents would have considerable incentive to work - and to work in good faith - to fix it. The basic approach would be that industry, not the government, puts forward its view of what constitutes appropriate reasonable care regarding the safety properties of its advanced models, and government's role would be to make sure that industry does what industry itself says it should be doing.

The consequences of killing SB 1047 and preserving the status quo are substantial: Companies could advance their technologies without restraint. Accepting an imperfect bill, by contrast, would be a meaningful step toward a better regulatory environment for all concerned. It would be the beginning rather than the end of the AI regulatory game. This first move sets the tone for what's to come and establishes the legitimacy of AI regulation.

The governor should sign SB 1047.

____

Herbert Lin is a senior research scholar at the Center for International Security and Cooperation at Stanford University and a fellow at the Hoover Institution. He is the author of "Cyber Threats and Nuclear Weapons."
[2]
Tom Siebel says there's no need for new AI regulatory agency
The landmark AI safety bill sitting on California Governor Gavin Newsom's desk has another detractor in longtime Silicon Valley figure Tom Siebel.

SB 1047, as the bill is known, is among the most comprehensive, and therefore polarizing, pieces of AI legislation. The main focus of the bill is to hold major AI companies accountable in the event their models cause catastrophic harm, such as mass casualties, shutting down critical infrastructure, or being used to create biological or chemical weapons. The bill would apply to AI developers that produce so-called "frontier models," meaning those that took at least $100 million to develop. Another key provision is the establishment of a new regulatory body, the Board of Frontier Models, which would oversee these AI models.

Setting up such a group is unnecessary, according to Siebel, who is CEO of C3.ai. "This is just whacked," he told Fortune. Prior to founding C3.ai (which trades under the stock ticker $AI), Siebel founded and helmed Siebel Systems, a pioneer in CRM software, which he eventually sold to Oracle for $5.8 billion in 2005. (Disclosure: The former CEO of Fortune Media, Alan Murray, is on the board of C3.ai.)

Other provisions in the bill would create reporting standards for AI developers, requiring that they demonstrate their models' safety. Firms would also be legally required to include a "kill switch" in all AI models.

In the U.S., at least five states have passed AI safety laws. California has passed dozens of AI bills, five of which were signed into law this week alone. Other countries have also raced to regulate AI. Last summer, China published a series of preliminary regulations for generative AI. In March, the EU, long at the forefront of tech regulation, passed an extensive AI law.

Siebel, who also criticized the EU's law, said California's version risked stifling innovation. "We're going to criminalize science," he said. A new regulatory agency would slow down AI research because developers would have to submit their models for review and keep detailed logs of all their training and testing procedures, according to Siebel. "How long is it going to take this board of people to evaluate an AI model to determine that it's going to be safe?" Siebel said. "It's going to take approximately forever."

The complexity of AI models, which are not fully understood even by the researchers and scientists who created them, would prove too tall a task for a newly established regulatory body, Siebel says. "The idea that we're going to have these agencies who are going to look at these algorithms and ensure that they're safe, I mean there's no way," Siebel said. "The reality is, and I know that a lot of people don't want to admit this, but when you get into deep learning, when you get into neural networks, when you get into generative AI, the fact is, we don't know how they work."

A number of AI experts in both academia and the business world have acknowledged that certain aspects of AI models remain unknown. In an interview with 60 Minutes last April, Google CEO Sundar Pichai described certain parts of AI models as a "black box" that experts in the field didn't "fully understand."

The Board of Frontier Models established in California's bill would consist of experts in AI and cybersecurity as well as academic researchers. Siebel had little faith that a government agency would be suited to overseeing AI.
"If the person who developed this thing -- experienced PhD level data scientists out of the finest universities on earth -- can not figure out how it could work," Siebel said of AI models. "How is this government bureaucrat going to figure out how it works? It's impossible. They're inexplicable." Instead of establishing the board, or any other dedicated AI regulator, the government should rely on new legislation that would be enforced by existing court systems and the Department of Justice, according to Siebel. The government should pass laws that make it illegal to publish AI models that could facilitate crimes, cause large scale human health hazards, interfere in democratic processes, and collect personal information about users, Siebel said. "We don't need new agencies," Siebel said. "We have a system of jurisprudence in the Western world, whether it's based on French law or British law, that is well established. Pass some laws." Supporters and critics of SB 1047 don't fall neatly along political lines. Opponents of the bill include both top VCs and avowed supporters of former President Donald Trump, Marc Andreesen and Ben Horowitz, and former Speaker of the House Nancy Pelosi, whose congressional district includes parts of Silicon Valley. On the other side of the argument is an equally hodge podge group of AI experts. They include AI pioneers such as Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, and Tesla CEO Elon Musk, all of whom warned of the technology's great risks. "For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public," Musk wrote on X in August. Siebel too was not blind to the dangers of AI. It "can be used for enormous deleterious effect. Hard stop," he said. Newsom, the man who will decide the ultimate fate of the bill, has remained rather tight lipped. Only breaking his silence earlier this week to say he was concerned about the bill's possible "chilling effect" on AI research, during an appearance at Salesforce's Dreamforce conference. When asked about which portions of the bill might have a chilling effect and to respond to Siebel's comments, Alex Stack, a spokesperson for Newsom, replied "this measure will be evaluated on its merits." Stack did not respond to a follow up question regarding what merits were being evaluated.
California's landmark AI safety bill, SB 1047, awaits Governor Gavin Newsom's signature or veto, sparking debate among tech leaders and policymakers about the future of AI regulation and its impact on innovation.
In a move that has sent ripples through the tech industry, the California Legislature has passed SB 1047, a groundbreaking artificial intelligence (AI) safety bill, and sent it to Governor Gavin Newsom, who must sign or veto it by Sept. 30. The legislation, the first of its kind in the United States, aims to establish a framework for regulating advanced AI models and ensuring their safe development and deployment 1.
The bill introduces several key provisions designed to address the potential risks associated with AI:
- Liability for developers of "frontier models" - those that cost at least $100 million to develop - whose systems cause catastrophic harm, such as mass casualties or attacks on critical infrastructure
- A new regulatory body, the Board of Frontier Models, to oversee covered models
- Reporting standards requiring developers to demonstrate their models' safety
- A mandatory "kill switch" so models can be shut down in an emergency
These measures reflect growing concerns about the rapid advancement of AI technology and its potential impact on society, privacy, and security 2.
The bill has elicited mixed reactions from tech industry leaders. Supporters include AI pioneers such as Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, as well as Elon Musk, who argue that the legislation is a necessary step toward responsible AI development 2.
However, critics within the tech industry, among them Tom Siebel, CEO of C3.ai, have voiced concerns about the potential impact on innovation and competitiveness. They argue that overly stringent regulations could stifle research and hamper California's position as a global tech leader 1.
California's move is likely to have far-reaching consequences beyond its borders. As the home to Silicon Valley and many of the world's leading tech companies, the state's policies often set precedents for national and even global standards 1.
The Biden administration has already shown interest in AI regulation, and California's bill may serve as a model for federal legislation. Additionally, other states are closely watching California's approach and may consider similar measures 2.
One of the central challenges highlighted by this legislation is the need to balance innovation with safety and ethical considerations. Proponents of the bill argue that it will foster responsible AI development and increase public trust in these technologies 1.
Critics, however, warn that excessive regulation could drive AI research and development to other jurisdictions with less stringent oversight. This raises the prospect of "regulatory arbitrage," in which companies relocate to avoid compliance 2.
If the bill becomes law, tech companies, policymakers, and the public will be closely monitoring its effects as California implements it. The success or failure of this regulatory approach could shape the future of AI governance not only in the United States but around the world 1 2.
The coming months and years will likely see continued debate and refinement of AI policies as society grapples with the profound implications of this rapidly evolving technology.