Curated by THEOUTPOST
On Tue, 1 Oct, 4:02 PM UTC
9 Sources
[1]
California's AI Bill Veto Sparks Debate: CMU Experts Weigh In
When California Gov. Gavin Newsom vetoed the state's proposed artificial intelligence (AI) bill, SB 1047, on Sept. 29, it sparked a heated debate about how to regulate artificial intelligence effectively. The bill aimed to introduce some of the toughest regulations yet for large-scale AI models -- those costing more than $100 million to train, such as OpenAI's GPT-4. It proposed safety measures such as a "kill switch" to shut down models if they posed a risk, regular third-party audits and compliance reporting to the attorney general. Violations would have led to fines of up to $10 million, and the bill also sought to protect whistleblowers while establishing a board to oversee AI governance. After passing the California Senate on Aug. 29, the bill had been seen by many as a bold step toward ensuring safe and transparent AI development. However, the governor's veto has brought renewed attention to the challenges of balancing AI innovation with public safety.

Experts from Carnegie Mellon University's Tepper School of Business shared their thoughts on the bill: Some see it as important for protecting people from AI's risks, while others think it is unclear and might hurt innovation, especially for startups.

While creating a black hole in the Large Hadron Collider was unlikely, theories suggested the possibility of micro black holes that would disintegrate quickly. Alarmists (including scientists) were concerned that such a black hole might end up swallowing the Earth. AI going rogue is a risk that must be monitored carefully. Self-regulated mechanisms to evaluate and assess such risk, with transparent reporting, might be helpful. When it is hard to find ways to mitigate hallucinations quickly, it is unclear what exactly the specific regulations would accomplish. Overzealous rules may achieve nothing more than introducing compliance layers that are likely costly and may not reduce the risk. Such laws may favor cash-rich big tech while seriously handicapping innovation by new entrants who can ill afford compliance costs. As we gain greater visibility into the mechanisms by which AI could potentially take over, if ever, effective regulations can be designed.

The government has a responsibility to enact industrial safeguards to protect the populace. This is true for AI just as it is true for chemical plants. The government also has a responsibility to enact the least intrusive safeguards that are practical. This legislation seems to fail the second test. The most effective safeguard would be simply to put the chemical plant or AI model out of business, but that is not the role of the government. Some things in this bill seem sensible -- for example, prohibiting nonconsensual "deepfakes," although there is a subtle line around parody that the Supreme Court would likely have to try to discern. Other things, like requiring cluster managers to police their users, seem unwieldy and intrusive. The final sentence, which seems almost deliberately difficult to parse, reads to me as if the hearings of all of these bodies are to be closed to the public. I find this to be very bad governance in almost all cases. Do we need legislation? Certainly. This seems like a start.

AI is unlike anything we've regulated before. In the past, whether it was the financial crisis, environmental issues, or nuclear safety, we had some understanding of the potential harms, making it easier to craft targeted regulations.
With AI, we're navigating unknown territory. The complexity and unpredictability of AI systems, especially large-scale ones, mean we're not yet fully aware of the risks they may pose. This makes the challenge especially daunting, which is why the language of the California AI Bill remains unclear in parts. The bill puts significant responsibility on AI experts to ensure that their systems are safe and secure, preventing potential harms that we can't yet fully foresee. Despite the uncertainties, I support this bill because it takes a proactive approach to AI's development, encouraging experts to think critically about the risks inherent in the technology. In the race to lead the market, companies often neglect safety and security in favor of developing the next wave of innovation. The California AI Bill changes that dynamic, compelling AI developers to think critically about potential risks while encouraging investors to prioritize secure, responsible AI technologies. The bill isn't perfect, but it's a crucial first step toward ensuring that AI evolves in a way that benefits society without introducing unchecked dangers.

I do not support the bill as it is currently written but believe that there should be some regulation. There appears to be openness to modification, which is what I would like to see happen, with the issues revisited in a revised version. In large part, I echo the sentiments of Zoe Lofgren and Andrew Ng. To be clear, I firmly support AI governance to guard against demonstrable risks to public safety; unfortunately, this bill would fall short of these goals -- creating unnecessary risks for both the public and California's economy. SB 1047 seems heavily skewed toward addressing hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts and workforce displacement. There are clauses in there that, honestly, as an AI developer, I have no idea what to do with.

While the AI safety bill is well-intentioned, I am concerned about its potential implications for competition within the AI industry. Large tech companies already possess extensive data, existing models and the necessary infrastructure, giving them a significant advantage in ensuring their AI systems comply with the proposed safety standards. They also have the legal resources to navigate the regulatory landscape effectively. In contrast, smaller tech startups may struggle to meet the same safety thresholds, lacking both the data and the computational power necessary to refine their models to the required standards. Without the ability to launch and gather more data to improve their systems, these startups risk being left behind, unable to compete. Over time, this could widen the gap between established players and new entrants. If left unchecked, it may eventually reach a point where smaller companies, and the public sector, have little opportunity to accumulate adequate resources -- such as access to large datasets and sufficient computational power -- to assess the safety of larger companies' models. This would make it difficult to detect potential corner cases or hold big tech accountable, increasing the risk of even greater systemic safety issues.

Algorithmic bias and discrimination have become increasingly significant as AI models are deployed on a larger scale. Currently, no binding regulations ensure algorithmic fairness, leaving the issue to the tech industry's voluntary efforts.
However, self-regulation has proven insufficient in curbing bias in AI systems. Existing laws, such as the California AI Transparency Act (2024) and New York City AI Bias Law (2023), emphasize transparency, while the Algorithmic Accountability Act (proposed federally) calls for impact assessments. These regulations primarily focus on detecting bias after deployment, but they fall short of preventing bias or discrimination before it occurs. Given the widespread use of AI systems, significant societal harm can arise before issues are detected. SB 1047 aimed to address this gap by requiring companies to disclose information on the safety and fairness of AI models before deployment. Mandatory pre-deployment audits would help prevent harm before it happens. Additionally, the bill proposed holding companies legally accountable for discriminatory outcomes caused by their AI models, which would enhance fairness enforcement. This approach would incentivize developers to prioritize fairness from the outset. While the bill shifts much of the responsibility to developers -- who are better positioned to address AI design and testing -- it is important to acknowledge that new problems may arise when a well-tested AI model is applied in specific contexts. A balanced approach to liability between developers and users could encourage collaboration between them to prevent negative outcomes.
[2]
The future of AI rules
While Newsom's veto of California Senate Bill 1047 may have put the contentious measure to rest -- at least for now -- it has left different sectors of the tech industry largely in disagreement on the best path forward. Some tech advocacy groups quickly voiced their disappointment with the veto of S.B. 1047 -- the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act -- arguing California missed the chance to lead with first-of-its-kind regulations on some of the country's largest AI developers. "[S.B. 1047] was the first of its kind legislation that went and put real safeguards in place for some of the biggest and scariest unknown potential uses of AI -- which, particularly given the rapid advancement of the technology, is really important for us to have those guardrails in place moving forward," Kaili Lambe, the policy and advocacy director for Accountable Tech, told The Hill. S.B. 1047 would have required powerful AI models to undergo safety testing before being released to the public. Testing would have examined whether these systems could be manipulated by malicious actors for harm, such as hacking into the state's electric grid. It also sought to hold AI developers liable for severe harm caused by their models but would have only applied to AI systems that cost more than $100 million to train. No current models have hit that number yet. Landon Klein, the director of U.S. policy for the Future of Life Institute (FLI), told The Hill there is a timely need for regulation to keep up with the rapid development. The FLI is a nonprofit focused on the existential risks to society. "One year is a lifetime in terms of the generations of these systems, and there's considerable risk over the course of that year," he said. "And we also run the risk of sort of this broader integration of the technology across society that makes it more difficult to regulate in the future." Meanwhile, some AI or software experts are cautioning against the push for regulation and applauded Newsom's move to veto the bill. Some told The Hill that more evidence is needed before lawmakers start placing guardrails on the tech. This includes further research on the specific risks of AI development and the most effective response when these are identified, experts said.
[3]
Newsom throws AI regulation fight into uncertainty with veto
California Gov. Gavin Newsom (D)'s decision to veto a sweeping artificial intelligence (AI) bill has renewed the debate over the future of AI regulation, leaving different sectors of the tech industry largely in disagreement on the best path forward. While Newsom's veto of California Senate Bill 1047 may have put the contentious measure to rest -- at least for now -- it has state legislators, AI experts and tech advocacy groups at odds over what comes next for the emerging tech.

Some tech advocacy groups quickly voiced their disappointment with the veto of SB 1047 -- short for the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act -- arguing California missed the chance to lead with first-of-its-kind regulations on some of the country's largest AI developers. "This [SB 1047] was the first of its kind legislation that went and put real safeguards in place for some of the biggest and scariest unknown potential uses of AI -- which, particularly given the rapid advancement of the technology, is really important for us to have those guardrails in place moving forward," Kaili Lambe, the policy and advocacy director for Accountable Tech, told The Hill.

SB 1047 would have required powerful AI models to undergo safety testing before being released to the public. Testing would have examined whether these systems could be manipulated by malicious actors for harm, such as hacking into the state's electric grid. It also sought to hold AI developers liable for severe harm caused by their models but would have applied only to AI systems that cost more than $100 million to train. No current models have hit that number yet.

Lambe said the major bill's failure concerns her, given that regulation and legislation often move slowly while the technology moves "fast." Landon Klein, the director of U.S. policy for the Future of Life Institute (FLI), agreed, stating there is a timely need for regulation to keep up with the rapid development. FLI is a nonprofit organization focused on existential risks to society. "One year is a lifetime in terms of the generations of these systems, and there's considerable risk over the course of that year," he said. "And we also run the risk of sort of this broader integration of the technology across society that makes it more difficult to regulate in the future."

In his veto message Sunday, Newsom said the bill was "well-intentioned" but "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data." "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it," he said, advocating for an "empirical trajectory analysis" of AI systems and their capabilities before a solution can be found.

The veto appears to kick the can down the road on the issue, though Newsom said Sunday the state is planning to partner with leading industry experts, including Dr. Fei-Fei Li, known as the "godmother of AI," to develop guardrails based on an "empirical, science-based trajectory analysis." Klein suggested this initiative could be a "little too late." "By the time the process is completed, at least one if not several new generations of increasingly powerful systems are going to come out, all of these under the same sort of profit incentive structure that has driven a lot of the largest companies to cut corners on safety," he said. While disappointed, Lambe suggested it is not the end of the road for regulation.
"We're really going to hold Gov. Newsom to his word that he's going to continue to try to find solutions here," she said, adding, "I do think that what you should see in this next legislative session [are] numerous bills in multiple states that put forward other AI safety frameworks, which I think will hopefully put pressure on the federal government to also take action." Sneha Revanhur, the founder of Encode Justice - a global youth advocacy group - and co-sponsor on SB 1047 also stressed advocates will keep pushing for regulation. "We'll be back soon, whether that's California or another stage, or federally or internationally, we will be back soon," she told The Hill. The bill captured national attention over the past few weeks and coupled with Newsom's veto, Revanhur and others believe it is raising awareness about the push for regulation. "People are seeing the global stakes of this political battle. And I think that that, in and of itself, is a remarkable achievement, and it means that the AI safety movement is really picking up steam and getting places. And I mean that is just us sort of building the foundations for our next victory," she said. California State Sen. Scott Wiener (D), the author of SB 1047, called the veto a "setback" but said the fight for the bill "dramatically advanced the issue of AI safety on the international stage." Meanwhile, some AI or software experts are cautioning against the push for regulation, and applauded Newsom's move to veto the bill. Some told The Hill more evidence is needed before lawmakers start placing guardrails on the tech. This includes further research on the specific risks of AI development and the most effective response when these are identified, experts said. This is the big question that industry and academia is wrestling with ," said Daniel Castro, the vice president at the Information and Innovation Foundation. "How do we test these systems? How do we evaluate them? To create a law requiring this at this point was definitely...putting the cart before the horse," he said. The latest debate over SB 1047 is just one of many conversations over how to regulate AI. Newsom signed a host of other AI regulatory bills this month aimed at preventing abuses of AI, including sexual explicit deepfakes. On a federal level, there are more than 100 bills introduced in Congress, which will have less than two months to pass legislation upon returning from recess in November. One way to begin the regulation process is by encouraging open-source AI, meaning developers make both the models and the data that went into them public, University of Michigan robotics and electrical engineering and computer science Professor Jason Corso suggested. "I think it requires a willingness to share not only resulting models, but also the data that went into them, so we can better understand those relationships," said Corso, who is also the co-founder and CEO of computer vision startup Voxel 51. "It requires a need for even better tooling around, analyzing models and analyzing data sets, and I hope to see a community driven safety mechanism in place without the need for a government mechanism in place. But I suspect there would be further legislation in the future if the community is unable to do that." "This is an ambitious piece of legislation, and it wouldn't have worked," echoed Matt Calks, the co-founder and CEO of cloud computing and software company Appian, said. 
"And in order to get effective legislation, we first need to understand AI across our society, need to understand what this technology is, what you can expect of it, what its limits are, and so we just need more information. We need transparency. I would start there."
[4]
California AI bill veto could allow smaller devs, models to 'flourish'
California Gov. Gavin Newsom vetoed SB 1047, the bill that many believed would change the landscape of AI development in the state and the country. The veto, published on Sunday, could give AI companies the ability to show they can proactively protect users from AI risks.

SB 1047 would have required AI companies to include a "kill switch" in models, implement a written safety protocol and get a third-party safety auditor before starting to train models. It would have also given California's attorney general access to an auditor's report and the right to sue AI developers.

Some AI industry veterans believed the bill could have a chilling effect on AI development. Many in the industry thanked Newsom for vetoing the bill, noting the veto could protect open-source development in the future. Yann LeCun, chief AI scientist at Meta and a vocal opponent of SB 1047, posted on X (formerly Twitter) that Newsom's decision was "sensible." Prominent AI investor and Andreessen Horowitz general partner Marc Andreessen said Newsom had sided "with California Dynamism, economic growth, and freedom to compute."

Other industry players also weighed in, saying that while they believe regulation in the AI space is necessary, it should not make it harder for smaller developers and smaller AI models to flourish. "The core issue isn't the AI models themselves; it's the applications of those models," said Mike Capone, CEO of data integration platform Qlik, in a statement sent to VentureBeat. "As Newsom pointed out, smaller models are sometimes deployed in critical decision-making roles, while larger models handle more low-risk tasks. That's why we need to focus on the contexts and use cases of AI, rather than the technology itself." He added that regulatory frameworks should focus on "ensuring safe and ethical usage" and supporting best AI practices. Coursera co-founder Andrew Ng also said the veto was "pro-innovation" and would protect open-source development.

It is not just corporations hailing the veto. Dean Ball, an AI and tech policy expert at George Mason University's Mercatus Center, said the veto "is the right move for California, and for America more broadly." Ball noted that the bill targeted model size thresholds that are becoming out of date, which would not encompass recent models like OpenAI's o1. Lav Varshney, associate professor of electrical and computer engineering at the University of Illinois' Grainger College of Engineering, noted the bill penalized original developers for the actions of those who use the technology. "Since SB 1047 had provisions on the downstream uses and modifications of AI models, once it left the hands of the original developers, it would have made it difficult to continue innovating in an open-source manner," Varshney told VentureBeat. "Shared responsibility among the original developers and those that fine-tune the AI to do things beyond the knowledge (and perhaps imagination) of the original developers seems more appropriate."

Improving existing guardrails

The veto, though, could allow AI model developers to strengthen their AI safety policies and guardrails. Kjell Carlsson, head of AI strategy at Domino Data Lab, said this presents an opportunity for AI companies to examine their governance practices closely and embed these in their workflows.
"Enterprise leaders should seize this opportunity to proactively address AI risks and protect their AI initiatives now. Rather than wait for regulation to dictate safety measures, organizations should enact robust AI governance practices across the entire AI lifecycle: establishing controls over access to data, infrastructure and models, rigorous model testing and validation, and ensuring output auditability and reproducibility," said Carlsson. Navrina Singh, founder of AI governance platform Credo AI, said in an interview with VentureBeat that while SB 1047 had good points around auditory rules and risk profiling, it showed there is still a need to understand what needs to be regulated around AI. "We want governance to be at the center of innovation within AI, but we also believe that those who want to succeed with AI want to lead with trust and transparency because this is what customers are demanding of them," Singh said. She added while it's unclear if SB 1047's veto would change the behaviors of developers, the market is already pushing companies to present themselves as trustworthy. Disappointment from others However, not everyone is hailing Newsom's decision, with tech policy and safety groups condemning the decision. Nicole Gill, co-founder and executive director of the non-profit Accountable Tech, said in a statement that Newsom's decision "is a massive giveaway to Big Tech companies and an affront to all Americans who are currently the uncontested guinea pigs" of the AI industry. "This veto will not 'empower innovation' - it only further entrenches the status quo where Big Tech monopolies are allowed to rake in profits without regard for our safety, even as their AI tools are already threatening democracy, civil rights, and the environment with unknown potential for other catastrophic harms," Gill said. The AI Policy Institute echoed this sentiment, with executive director Daniel Colson saying the decision to veto "is misguided, reckless, and out of step with the people he's tasked with governing." The groups said California, where the majority of AI companies in the country are located, will allow AI development to go unchecked despite the public's demand to rein in some of its capabilities. The United States does not have any federal regulation around generative AI. While some states have developed policies on AI usage, no law imposes rules around the technology. The closest federal government policy in the country is an executive order from President Joe Biden. The executive order laid out a plan for agencies to use AI systems and asked AI companies to submit voluntarily models for evaluation before public release. OpenAI and Anthropic agreed to let the government test its models.
[5]
California's AI bill was well-meaning but flawed
It may sound a bit like a science fiction plot, but in the future artificial intelligence could conceivably reach the point of rapid self-improvement, evade human control, and unleash chaos upon humans through cyber attacks or even nuclear disasters. That is the concern of some scientists and developers, and it was the motivation for an AI safety bill in California, which is home to 32 of the world's 50 leading AI companies. But on Sunday, state governor Gavin Newsom vetoed the legislation.

The decision has been cast as a big win for Big Tech, a reckless call on public safety, and a missed opportunity to set de facto AI safety standards nationally. It is not that simple. Setting out rules to protect against the potential harms of a technology, particularly one still in development, is a tricky balancing act. If it is too overbearing, it risks stifling innovation in the first place, which means society misses out on its potential benefits too. And, although California's bill was watered down following intense lobbying from Silicon Valley, uncertainties around its effect on AI development and deployment still remained.

One broad aim of California's bill was to raise developers' accountability for the misuse of their models. However admirable that may be, it can have side effects. For instance, it is difficult for developers to know ex ante how their technology might be used. They might respond by pulling back from research. AI experts also worried that the bill's safety protocols -- which include a requirement for companies to build a "kill switch" into models over a certain threshold -- could discourage the development and use of open-source models, where much innovation takes place. Another worry was that the legislation did not specifically target AI systems used in high-risk environments, such as in critical infrastructure, or those using sensitive data. It applied stringent standards to even basic functions.

Given these concerns, Newsom's decision seems reasonable. That, however, does not mean tech companies should get a free run. As the AI race gains speed, there is a genuine concern that model builders could overlook weak spots. So it would make sense for lawmakers now to rework the proposed rules and clarify the vague wording, to better balance safety concerns with the impact on innovation. Newsom announced a promising partnership with experts to develop "workable guardrails". It is also welcome that the governor has recently signed bills targeted at regulating clear and present AI risks -- rather than hypothetical ones -- including those around deepfakes and misinformation.

While California's leadership on AI regulation is commendable, it would also be better if safety rules were hashed out and enacted at a federal level. That would provide protections across America, prevent a patchwork of varying state laws from emerging, and avoid putting the Golden State -- the epicentre of American and global AI innovation -- at a competitive disadvantage. Indeed, though the allure of the Silicon Valley investor and talent pool remains strong, there is a risk that unilateral and overly stringent AI regulation could push model development elsewhere, weakening the state's AI tech ecosystem in the process. As it is, California has high taxes and is the most heavily regulated state in the US. Property is expensive, too. Firms including US data analytics business Palantir and brokerage Charles Schwab have left the state recently, and some tech companies have cut office space.
Managing safety concerns around AI development is an art of preserving the good while insuring against the bad. Technological threats to our societies should not be taken lightly, but neither should stunting the emergence of an innovation that could help diagnose diseases, accelerate scientific research, and boost productivity. It is worth making the effort to get it right.
[6]
California Governor Vetoes Far-Reaching AI Safety Bill
California Gov. Gavin Newsom on Sunday vetoed a controversial and far-reaching artificial intelligence bill, SB 1047, that The New York Times called "the most ambitious proposal in the nation aimed at curtailing the growth of the new technology."

"While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," the Democratic governor said in a statement. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology."

The bill is just the latest attempt to legislate the fast-growing artificial intelligence industry. As of August, the European Union has put its AI Act into effect, but the US hasn't followed suit, although some states have passed laws against deepfakes and other deceptive uses of AI. The European act regulates the types of artificial intelligence that tech companies can deploy, and it calls for those companies to be transparent with users.

The California bill would have required safety testing of large AI models before they could be released to the public, given the state's attorney general the right to sue companies if their technologies caused serious harm, and mandated a kill switch to turn off AI systems in case of major threats.

Newsom said he's not against safety protocols for AI. "We cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility," Newsom's statement went on to say. "Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable." The governor said a statewide solution needed to be "informed by an empirical trajectory analysis of AI systems and capabilities," adding, "ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself."

Bill co-author Sen. Scott Wiener called the veto a "setback" in a statement issued following the veto. "This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way," the Democratic state senator said.

It's far from the end for future attempts at large-scale AI legislation, especially in the state that is home to the tech industry's powerful Silicon Valley. "California will continue to lead in that conversation," Wiener said. "We are not going anywhere."
[7]
California spiked a landmark AI regulation. But that doesn't mean the bill is going away
With the veto of California's AI bill, the idea of regulating frontier models may be in jeopardy. The bill, SB 1047, would have required developers of the largest AI models (OpenAI, Anthropic, and the like) to set up and report on a safety framework, and submit to outside safety audits. The bill also included a whistleblower protection clause, and required developers to build a "kill switch" into models in case they began acting on their own in harmful ways. Most of the tech industry came out against the bill, saying its passage would shift the focus from innovation to compliance in AI research. It's worth noting, however, that much of the public supported the bill's protections, as did a number of respected AI researchers. Nonetheless, Governor Gavin Newsom vetoed the bill this week, saying it fails to assess the risk of AI models based on where and how they're deployed. "Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 -- at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good," Newsom wrote.
[8]
Newsom vetoes AI bill
California Gov. Gavin Newsom (D) took long-awaited action on a landmark artificial intelligence (AI) bill that would have created new safety rules for the emerging tech, vetoing the contentious legislation. The veto caps off weeks of speculation over how Newsom would act on the legislation, known as California Senate Bill 1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

In a veto message published Sunday, the governor said the bill's focus on the "most expensive and large-scale models" "could give the public a false sense of security about controlling" AI. "Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 -- at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good," he wrote.

The bill, known as SB 1047 for short, would have required powerful AI models to undergo safety testing before being released to the public. It also sought to hold developers liable for severe harm caused by their models but applied only to AI models that cost more than $100 million to train.

"While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom wrote. "Instead, the bill applies stringent standards to even the most basic functions -- so long as a large system deploys it."

California is home to 32 of the world's "50 leading AI companies," according to Newsom's office, and has become a major hub for AI-related legislation as a result. Newsom emphasized that his veto does not mean he sees no urgent need to act on the advancing tech to prevent a major catastrophe. The bill received mixed opinions from AI startups, major technology firms, researchers and even some lawmakers, who were divided over whether it would throttle the development of the technology or establish much-needed guardrails.
[9]
Newsom Blocks Contentious AI Safety Bill in California
California Governor Gavin Newsom has vetoed what would have become one of the most comprehensive policies governing the safety of AI in the US. Democratic California Senator Scott Wiener, who introduced the bill, joins Caroline Hyde and Bloomberg's Shirin Ghaffary to weigh in on "Bloomberg Technology." (Source: Bloomberg)
California Governor Gavin Newsom's veto of Senate Bill 1047, a proposed AI safety regulation, has ignited discussions about balancing innovation with public safety in the rapidly evolving field of artificial intelligence.
California Governor Gavin Newsom's recent veto of Senate Bill 1047 (SB 1047), also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, has ignited a heated debate about the future of AI regulation [1]. The bill, which aimed to introduce some of the toughest regulations yet for large-scale AI models, has left different sectors of the tech industry largely in disagreement on the best path forward [2].
The vetoed bill proposed several significant measures: pre-release safety testing of large AI models, a "kill switch" to shut systems down in case of major threats, third-party safety audits and compliance reporting to the attorney general, whistleblower protections, and developer liability for severe harm, with the rules applying only to models costing more than $100 million to train.
The veto has elicited mixed responses from various stakeholders: advocacy groups such as Accountable Tech and the Future of Life Institute criticized it as a missed opportunity to put guardrails in place, industry figures including Yann LeCun, Marc Andreessen and Andrew Ng applauded it as a pro-innovation move that protects open-source development, and bill author Sen. Scott Wiener called it a "setback."
In his veto message, Newsom stated that while the bill was "well-intentioned," it did not adequately consider whether an AI system is deployed in high-risk environments or involves critical decision-making [3]. He advocated for an "empirical trajectory analysis" of AI systems and their capabilities before determining appropriate solutions [3].
Despite the veto, the debate over AI regulation continues: Newsom has said the state will partner with leading experts, including Dr. Fei-Fei Li, to develop guardrails based on an "empirical, science-based trajectory analysis"; advocates expect more AI safety bills in other states and at the federal level; and more than 100 AI-related bills have been introduced in Congress.
The veto highlights the challenges of regulating a rapidly evolving technology: the risks of large AI models are not yet fully understood, compliance costs could favor cash-rich incumbents over startups and open-source developers, and rules tied to model size or training cost may quickly become outdated as smaller, specialized models grow more capable.
As the AI landscape continues to evolve, the debate over regulation is likely to intensify, with stakeholders seeking to strike a balance between fostering innovation and ensuring public safety.
Reference
[1] California's AI Bill Veto Sparks Debate: CMU Experts Weigh In
[2] The future of AI rules
[3] Newsom throws AI regulation fight into uncertainty with veto
[5] California's AI bill was well-meaning but flawed