Curated by THEOUTPOST
On Wed, 21 Aug, 4:02 PM UTC
12 Sources
[1]
Tech Companies Furious at New Law That Would Hold Them Accountable When Their AI Does Bad Stuff
California is on the verge of passing a bill that would enforce sweeping regulations in the AI industry, after it was approved in the state's Assembly Appropriations Committee on Thursday. The bill, SB 1047, proposes a number of safety requirements for AI developers to prevent "severe harm," and includes provisions that could hold them accountable for the output of their AI models. Now OpenAI, which has advocated for regulation in the past, is joining other tech companies, as well as some politicians, in decrying the bill, arguing that it would hurt innovation in the industry, Bloomberg reports.

"The AI revolution is only just beginning, and California's unique status as the global leader in AI is fueling the state's economic dynamism," Jason Kwon, chief strategy officer at OpenAI, wrote in a letter to state Senator Scott Wiener, who introduced the bill, as quoted by Bloomberg. "SB 1047 would threaten that growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere."

In more specific terms, the bill would give the California attorney general the power to seek an injunction against tech companies that put out unsafe AI models, according to Platformer's analysis. If successfully sued, these companies could face civil penalties -- though not criminal penalties. To be compliant, businesses would need to carry out mandated safety testing for any AI models that either cost more than $100 million to develop or require more than a certain amount of computing power. AI developers would also need to build their AI models with a "kill switch" that could be used to shut them down in an emergency. In addition to in-house testing, developers would be required to hire third-party auditors to assess their safety practices, per Reuters. The bill also provides more legal protections to whistleblowers speaking out against AI practices.
As Platformer observes, the bill raises an age-old question: should the person using the tech be blamed, or the tech itself? With regard to social media, the law says that, generally, websites can't be held accountable for what users post. AI companies hope that this status quo applies to them, too. Because AI models frequently hallucinate and are easily tricked into ignoring their guardrails, the prospect of being held accountable for their chaotic outputs could be a major headache. OpenAI and others argue that such regulatory actions are premature and could hamper development of the tech in the state. It may well be true that AIs are still in their infancy and have a long way to go before they're capable enough to turn on us à la Skynet -- but it would be remiss to downplay more mundane dangers like misinformation, or AI's ability to carry out hacks. As it stands, the bill awaits a vote in the state's full Assembly, and must be passed by the end of the month before it can be sent to Governor Gavin Newsom for approval.
[2]
OpenAI Says California's Controversial AI Bill Will Hurt Innovation
OpenAI is opposing a bill in California that would place new safety requirements on artificial intelligence companies, joining a chorus of tech leaders and politicians who have recently come out against the controversial legislation. The San Francisco-based startup said the bill would hurt innovation in the AI industry and argued that regulation on this issue should come from the federal government instead of the states, according to a letter sent to California State Senator Scott Wiener's office on Wednesday and obtained by Bloomberg News. The letter also raised concerns that the bill, if passed, could have "broad and significant" implications for US competitiveness on AI and national security. SB 1047, introduced by Wiener, aims to enact what his office has called "common sense safety standards" for companies that make large AI models above a specific size and cost threshold. The bill, which passed the state Senate in May, would require AI companies to take steps to prevent their models from causing "critical harm," such as enabling the development of bioweapons that can cause mass human casualties or by contributing to more than $500 million in financial damage. Under the bill, companies would need to ensure AI systems can be shut down, take "reasonable care" that artificial intelligence models don't cause catastrophe and disclose a statement of compliance to California's attorney general. If businesses don't follow these requirements, they could be sued and face civil penalties. The bill has received fierce opposition from many major tech companies, startups and venture capitalists who say that it's an overreach for a technology still in its infancy and could stifle tech innovation in the state. Some critics of the bill have raised concerns that it could drive AI companies out of California. OpenAI echoed those concerns in the letter to Wiener's office. 
"The AI revolution is only just beginning, and California's unique status as the global leader in AI is fueling the state's economic dynamism," Jason Kwon, chief strategy officer at OpenAI, wrote in the letter. "SB 1047 would threaten that growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere." A representative for Wiener's office did not have comment at the time of publication, but pointed to two prominent national security experts who have publicly supported the bill. OpenAI has put conversations about expanding its San Francisco offices on hold amid concerns about uncertainty with California's regulatory landscape, according to a person familiar with the company's real estate plans who requested anonymity to discuss internal conversations. Wiener has previously said the law would apply to any companies that conduct business in California, regardless of where their offices are located.
[3]
OpenAI joins opposition to California AI safety bill
OpenAI has hit out at a California bill aiming to ensure powerful artificial intelligence is deployed safely and suggested that new controls would threaten its growth in the state, joining a last-minute lobbying frenzy by investors and AI groups to block the legislation. The bill, SB 1047, threatens "California's unique status as the global leader in AI," the company's chief strategy officer Jason Kwon wrote in a letter to Scott Wiener, the California state senator spearheading the bill. It could "slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere", he added. SB 1047 has divided Silicon Valley. While there is widespread acceptance of the need to curb the risks of super-powerful new AI models, critics have argued that Wiener's proposals would stifle start-ups, benefit America's rivals and undermine California's position at the epicentre of a boom in AI. OpenAI is the latest start-up to oppose elements of the bill, and the most prominent -- thanks largely to the popularity of its ChatGPT chatbot and a $13bn commitment from its partner Microsoft. OpenAI supports provisions to ensure AI systems are developed and deployed safely but argues in the letter, which was first reported by Bloomberg, that legislation should come from the federal government, not individual states. In a response on Wednesday, Wiener said he agreed that the federal government should take the lead but was "sceptical" Congress would act. He also criticised the "tired argument" that tech start-ups would relocate if the bill was passed and said companies based outside the state would still need to comply with the bill to do business locally. The California State Assembly is set to vote on the bill by the end of the month. If it passes, Governor Gavin Newsom will then decide whether to sign it into law or veto it. 
Silicon Valley tech groups and investors, including Anthropic, Andreessen Horowitz and Y Combinator, have ramped up a lobbying campaign against Wiener's proposals for a strict safety framework. Nancy Pelosi, the former House Speaker and California representative, also published a statement in opposition to the bill last week, dubbing it "well-intentioned but ill informed". Among the most contentious elements in the senator's original proposals were demands that AI companies guarantee to a new state body that they will not develop models with "a hazardous capability", and create a "kill switch" to turn off their powerful models. Opponents claimed the bill focused on hypothetical risks and added an "extreme" liability risk on founders. The bill was amended to soften some of those requirements last week -- for example, limiting the civil liabilities that it had originally placed on AI developers and narrowing the scope of those who would need to adhere to the rules. However, critics argue that the bill still burdens start-ups with onerous and sometimes unrealistic requirements. On Monday, US House members Anna Eshoo and Zoe Lofgren wrote in a letter to Robert Rivas, Speaker of the California assembly, that there were "still substantial problems with the underlying construct of the bill", calling instead for "focus on federal rules to control physical tools needed to create these physical threats". Despite criticism from leading AI academics such as Stanford's Fei-Fei Li and Andrew Ng, who led AI projects at Alphabet's Google and China's Baidu, the bill has found support among some of the "AI godfathers", such as Geoffrey Hinton of the University of Toronto and Yoshua Bengio, a computer science professor at the University of Montreal. "Bottom line: SB 1047 is a highly reasonable bill that asks large AI labs to do what they've already committed to doing, namely, test their large models for catastrophic safety risk," Wiener wrote on Wednesday.
[4]
Big Tech wants AI to be regulated. Why do they oppose a California AI bill?
California legislators are set to vote on a bill as soon as this week that would broadly regulate how artificial intelligence is developed and deployed in California even as a number of tech giants have voiced broad opposition. Here is background on the bill, known as SB 1047, and why it has faced backlash from Silicon Valley technologists and some lawmakers:

WHAT DOES THE BILL DO?

Advanced by State Senator Scott Wiener, a Democrat, the proposal would mandate safety testing for many of the most advanced AI models that cost more than $100 million to develop or those that require a defined amount of computing power. Developers of AI software operating in the state would also need to outline methods for turning off the AI models if they go awry, effectively a kill switch. The bill would also give the state attorney general the power to sue if developers are not compliant, particularly in the event of an ongoing threat, such as the AI taking over government systems like the power grid. As well, the bill would require developers to hire third-party auditors to assess their safety practices and provide additional protections to whistleblowers speaking out against AI abuses.

WHAT HAVE LAWMAKERS SAID?

SB 1047 has already passed the state Senate by a 32-1 vote. Last week it passed the state Assembly appropriations committee, setting up a vote by the full Assembly. If it passes by the end of the legislative session on Aug. 31, it would advance to Governor Gavin Newsom to sign or veto by Sept. 30.
Wiener, who represents San Francisco, home to OpenAI and many of the startups developing the powerful software, has said legislation is necessary to protect the public before advances in AI become either unwieldy or uncontrollable. However, a group of California Congressional Democrats oppose the bill, including San Francisco's Nancy Pelosi; Ro Khanna, whose congressional district encompasses much of Silicon Valley; and Zoe Lofgren, from San Jose. Pelosi this week called SB 1047 ill-informed and said it may cause more harm than good. In an open letter last week, the Democrats said the bill could drive developers from the state and threaten so-called open-source AI models, which rely on code that is freely available for anyone to use or modify.

WHAT DO TECH LEADERS SAY?

Tech companies developing AI - which can respond to prompts with fully formed text, images or audio as well as run repetitive tasks with minimal intervention - have called for stronger guardrails for AI's deployment. They have cited risks that the software could one day evade human intervention and cause cyberattacks, among other concerns. But they also largely balked at SB 1047. Wiener revised the bill to appease tech companies, relying in part on input from AI startup Anthropic - backed by Amazon and Alphabet. Among other changes, he eliminated the creation of a government AI oversight committee. Wiener also took out criminal penalties for perjury, though civil suits may still be brought. Alphabet's Google and Meta have expressed concerns in letters to Wiener. Meta said the bill threatens to make the state unfavorable to AI development and deployment. The Facebook parent's chief scientist, Yann LeCun, in a July X post called the bill potentially harmful to research efforts. OpenAI, whose ChatGPT is credited with accelerating the frenzy over AI since its broad release in late 2022, has said AI should be regulated by the federal government and that SB 1047 creates an uncertain legal environment.
Of particular concern is the potential for the bill to apply to open-source AI models. Many technologists believe open-source models are important for creating less risky AI applications more quickly, but Meta and others have fretted that they could be held responsible for policing open-source models if the bill passes. Wiener has said he supports open-source models and one of the recent amendments to the bill raised the standard for which open-sourced models are covered under its provisions. The bill also has its backers in the technology sector. Geoffrey Hinton, widely credited as a "godfather of AI," former OpenAI employee Daniel Kokotajlo and researcher Yoshua Bengio have said they support the bill.
[5]
OpenAI's opposition to California's AI law 'makes no sense,' says state Senator | TechCrunch
OpenAI broke its silence on California's most controversial AI bill on Tuesday, officially expressing opposition in a letter to California state Senator Scott Wiener and Governor Gavin Newsom. The AI giant argued SB 1047 would stifle innovation and push talent out of California -- a position to which Wiener quickly replied "makes no sense." "The AI revolution is only just beginning, and California's unique status as the global leader in AI is fueling the state's economic dynamism," said OpenAI's Chief Strategy Officer Jason Kwon in the letter obtained by TechCrunch. "SB 1047 would threaten that growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere. Given those risks, we must protect America's AI edge with a set of federal policies - rather than state ones - that can provide clarity and certainty for AI labs and developers while also preserving public safety." The company on Tuesday joined broad local pushback against SB 1047, adding its voice to those of trade groups representing Google and Meta, investment firm Andreessen Horowitz, prominent AI researchers, and California Representatives Nancy Pelosi and Zoe Lofgren. An OpenAI spokesperson says the company has been in discussions with Senator Wiener's office about the bill for months. However, Senator Wiener says the AI lab's argument that SB 1047 would push AI companies out of California is "tired." He pointed out in a press release on Wednesday that OpenAI doesn't actually "criticize a single provision of the bill." He says the company's claim that companies will leave California because of SB 1047 "makes no sense given that SB 1047 is not limited to companies headquartered in California." As we've previously reported, SB 1047 affects all AI model developers that do business in California and meet certain size thresholds.
In other words, whether an AI company was based in San Jose or San Antonio, if they let Californians use their products, they would be subject to these restrictions. (An example of an effective law with this type of scope is Illinois' Biometric Information Privacy Act.) That said, Bloomberg reports that OpenAI has put conversations about expanding its San Francisco offices on hold amid concerns about California's regulatory landscape. OpenAI has had an office in San Francisco's Mission district for years, and recently moved into a new office in the city's Mission Bay region, previously occupied by Uber. OpenAI declined to comment further on those real estate discussions. "Instead of criticizing what the bill actually does, OpenAI argues this issue should be left to Congress," said Wiener in the statement. "As I've stated repeatedly, I agree that ideally Congress would handle this. However, Congress has not done so, and we are skeptical Congress will do so." Tech companies have taken similar stances regarding privacy laws in the past, calling for federal regulation knowing it will be slow to come, and California ended up stepping up there as well. OpenAI has endorsed several federal bills regulating AI models, one of which authorizes the United States AI Safety Institute as a federal body that sets standards and guidelines for AI models. From a high level, that's fairly similar to what SB 1047's Board of Frontier Models is supposed to do. California lawmakers significantly amended SB 1047 to give Governor Newsom a less controversial AI bill to sign, but they've failed to convince Silicon Valley's most important AI lab the bill is worth passing. SB 1047 is now headed for a final vote in California's Assembly, and could land on Governor Newsom's desk by the end of the month. The California Governor has not indicated his feelings on SB 1047, but he'd likely face a broad industry backlash if he signs it.
[6]
Explainer-Big Tech wants AI to be regulated. Why do they oppose a California AI bill?
SAN FRANCISCO (Reuters) - California legislators are set to vote on a bill as soon as this week that would broadly regulate how artificial intelligence is developed and deployed in California even as a number of tech giants have voiced broad opposition. Here is background on the bill, known as SB 1047, and why it has faced backlash from Silicon Valley technologists and some lawmakers:

WHAT DOES THE BILL DO?

Advanced by State Senator Scott Wiener, a Democrat, the proposal would mandate safety testing for many of the most advanced AI models that cost more than $100 million to develop or those that require a defined amount of computing power. Developers of AI software operating in the state would also need to outline methods for turning off the AI models if they go awry, effectively a kill switch. The bill would also give the state attorney general the power to sue if developers are not compliant, particularly in the event of an ongoing threat, such as the AI taking over government systems like the power grid. As well, the bill would require developers to hire third-party auditors to assess their safety practices and provide additional protections to whistleblowers speaking out against AI abuses.

WHAT HAVE LAWMAKERS SAID?

SB 1047 has already passed the state Senate by a 32-1 vote. Last week it passed the state Assembly appropriations committee, setting up a vote by the full Assembly. If it passes by the end of the legislative session on Aug. 31, it would advance to Governor Gavin Newsom to sign or veto by Sept. 30. Wiener, who represents San Francisco, home to OpenAI and many of the startups developing the powerful software, has said legislation is necessary to protect the public before advances in AI become either unwieldy or uncontrollable. However, a group of California Congressional Democrats oppose the bill, including San Francisco's Nancy Pelosi; Ro Khanna, whose congressional district encompasses much of Silicon Valley; and Zoe Lofgren, from San Jose.
Pelosi this week called SB 1047 ill-informed and said it may cause more harm than good. In an open letter last week, the Democrats said the bill could drive developers from the state and threaten so-called open-source AI models, which rely on code that is freely available for anyone to use or modify.

WHAT DO TECH LEADERS SAY?

Tech companies developing AI - which can respond to prompts with fully formed text, images or audio as well as run repetitive tasks with minimal intervention - have called for stronger guardrails for AI's deployment. They have cited risks that the software could one day evade human intervention and cause cyberattacks, among other concerns. But they also largely balked at SB 1047. Wiener revised the bill to appease tech companies, relying in part on input from AI startup Anthropic - backed by Amazon and Alphabet. Among other changes, he eliminated the creation of a government AI oversight committee. Wiener also took out criminal penalties for perjury, though civil suits may still be brought. Alphabet's Google and Meta have expressed concerns in letters to Wiener. Meta said the bill threatens to make the state unfavorable to AI development and deployment. The Facebook parent's chief scientist, Yann LeCun, in a July X post called the bill potentially harmful to research efforts. OpenAI, whose ChatGPT is credited with accelerating the frenzy over AI since its broad release in late 2022, has said AI should be regulated by the federal government and that SB 1047 creates an uncertain legal environment. Of particular concern is the potential for the bill to apply to open-source AI models. Many technologists believe open-source models are important for creating less risky AI applications more quickly, but Meta and others have fretted that they could be held responsible for policing open-source models if the bill passes.
Wiener has said he supports open-source models and one of the recent amendments to the bill raised the standard for which open-sourced models are covered under its provisions. The bill also has its backers in the technology sector. Geoffrey Hinton, widely credited as a "godfather of AI," former OpenAI employee Daniel Kokotajlo and researcher Yoshua Bengio have said they support the bill. (Reporting by Greg Bensinger; Editing by Sayantani Ghosh and Stephen Coates)
[8]
California Senator Responds to OpenAI Opposition to AI Bill
OpenAI reportedly opposes a California bill that would place new safety requirements on artificial intelligence (AI) companies, saying that the bill would limit innovation and that this issue should be dealt with at the federal level. The company also said that the bill (Senate Bill 1047) would impact U.S. competitiveness on AI and the country's national security, Bloomberg reported Wednesday (Aug. 21), citing a letter that was sent by OpenAI to the state senator who wrote the bill and that was obtained by the media outlet. "The AI revolution is only just beginning, and California's unique status as the global leader in AI is fueling the state's economic dynamism," OpenAI Chief Strategy Officer Jason Kwon wrote in the letter sent to California State Sen. Scott Wiener, D-San Francisco, according to the report. "SB 1047 would threaten that growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere." OpenAI did not immediately reply to PYMNTS' request for comment. In a press release issued Wednesday in response to OpenAI's opposition to SB 1047, Wiener said that the company's letter does not criticize "a single provision of the bill," and instead argues that this issue should be left to the U.S. Congress. "As I've stated repeatedly, I agree that ideally Congress would handle this," Wiener wrote in the release. "However, Congress has not done so, and we are skeptical Congress will do so." Addressing other concerns that he said OpenAI outlined in its letter, Wiener said in his response that the bill's requirements would strengthen national security by making AI companies thoroughly test their products. He also said that it would not make sense for companies to leave California if the bill passes because the bill would still apply to them -- it applies to any company doing business in the state.
"Bottom line: SB 1047 is a highly reasonable bill that asks large AI labs to do what they've already committed to doing, namely, test their large models for catastrophic safety risk," Wiener wrote. The AI bill is advancing through the state legislature, but only with significant modifications and intense debate within the tech industry, PYMNTS reported Monday (Aug. 19).
[9]
OpenAI exec says California's AI safety bill might slow progress
According to proponents like Wiener, the bill establishes standards ahead of the development of more powerful AI models, requires precautions like pre-deployment safety testing and other safeguards, adds whistleblower protections for employees of AI labs, gives California's Attorney General power to take legal action if AI models cause harm, and calls for establishing a "public cloud computer cluster" called CalCompute. In a response to the letter published Wednesday evening, Wiener points out that the proposed requirements apply to any company doing business in California, whether they are headquartered in the state or not, so the argument "makes no sense." He also writes that OpenAI "...doesn't criticize a single provision of the bill" and closes by saying, "SB 1047 is a highly reasonable bill that asks large AI labs to do what they've already committed to doing, namely, test their large models for catastrophic safety risk."
[10]
OpenAI joins Silicon Valley companies lobbying against California's AI bill, which includes a 'kill switch'
The bill mandates that companies implement protocols to prevent their AI models from causing "critical harms," such as being used in cyberattacks or leading to the development of weapons of mass destruction. The bill also specifies a "full shutdown" provision, which functions as a kill switch for AI systems. Jason Kwon, OpenAI's chief strategy officer, warned in a letter addressed to Wiener on Wednesday that the bill could stifle progress and drive companies out of California. Kwon also wrote that regulation of AI concerning national security is "best managed at the federal level" rather than through a "patchwork of state laws." Silicon Valley tech heavyweights like Meta and Anthropic have been lobbying against the bill, too. Meta warned that the bill might discourage the open-source movement by exposing developers to significant legal liabilities, Rob Sherman, vice president of policy and deputy chief privacy officer at Meta, wrote in a letter in June. Sherman wrote that regulation could hamper the broader tech ecosystem because smaller businesses rely on these freely available models to innovate. Anthropic also resisted the bill's stringent preemptive regulations, advocating instead for a more balanced approach that wouldn't stymie progress, BI reported on Monday. OpenAI previously lobbied against similar legislation by the European Union. The company sought to ease the regulatory requirements on general-purpose AI systems like GPT-3, Time reported last year. The EU has since altered its final draft of the AI Act to exclude language that would classify general-purpose AI as high risk, instead focusing on "foundation models" with more limited requirements, according to Time. Despite the industry opposition, Sen.
Wiener argued that it is a "highly reasonable bill that asks large AI labs to do what they've already committed to doing," the senator wrote in response to OpenAI's letter on Wednesday. The bill had passed a vote in the state Senate and is set for a final vote in the California Assembly at the end of the month. OpenAI and Sen. Weiner didn't respond to a request for comment sent outside standard business hours.
[11]
Gavin Newsom's California Wants To Regulate AI, But It's Facing Stiff Opposition From Both Big Tech As Well As Nancy Pelosi And Other Democrats: Here's Why - Alphabet (NASDAQ:GOOG), Alphabet (NASDAQ:GOOGL)
California lawmakers are poised to vote on a bill regulating AI development, despite significant opposition from major tech companies.

What Happened: California legislators are expected to vote this week on SB 1047, a bill aimed at regulating AI development and deployment in the state, Reuters reported. If approved by the end of the legislative session on Aug. 31, the bill would move to Governor Gavin Newsom, who must sign or veto it by Sept. 30.

What The Bill Says: The bill, introduced by Senator Scott Wiener (D-Calif.), mandates safety testing for advanced AI models costing over $100 million to develop or requiring significant computing power. It also requires developers to include a "kill switch" to shut down AI models if they malfunction. The state attorney general would have the authority to sue non-compliant developers, especially if an AI model poses a threat to critical systems like the power grid. Additionally, third-party auditors would be hired to assess safety practices, and whistleblowers would receive protections.

Pelosi Calls California AI Bill 'Ill Informed': Despite passing the state Senate and the Assembly Appropriations Committee, the bill faces opposition from several California Congressional Democrats, including Rep. Nancy Pelosi (D-Calif.), Rep. Ro Khanna (D-Calif.), and Rep. Zoe Lofgren (D-Calif.). Pelosi criticized the bill as potentially harmful and called it "ill informed," arguing it could drive developers out of the state and threaten open-source AI models: "At this time, the California legislature is considering SB 1047. The view of many of us in Congress is that SB 1047 is well-intentioned but ill-informed. Zoe Lofgren, the top Democrat on the committee of jurisdiction, Science, Space and Technology, has expressed serious concerns to the lead author, Senator Scott Wiener."

Big Tech Doesn't Like It, Either: Tech giants like Alphabet's Google and Meta Platforms have also expressed concerns. Meta's chief AI scientist, Yann LeCun, had a particularly strong reaction to the bill: "California bill SB1047 was drafted by this apocalyptic cult guru disguising as an academic think-tank director," he said. OpenAI has argued that AI regulation should be handled at the federal level, citing the bill's potential to create an uncertain legal environment, the report added.

Why It Matters: The opposition to SB 1047 is not new. Back in May, LeCun argued that it was too early to regulate AI, suggesting the technology is not yet a threat. That sentiment was echoed by Elon Musk, who believes regulation is inevitable but premature. In July, Meta and Google criticized the bill, claiming it would make the AI ecosystem "less safe" by imposing stringent safety measures; they argued that regulations should target malicious actors rather than developers.
[12]
Big Tech wants AI to be regulated. Why do they oppose a California AI bill? - Times of India
California legislators were set to vote on SB 1047, which aims to regulate AI development and deployment. The bill mandates safety testing and includes measures such as a kill switch for AI models. Despite revisions to appease tech companies, many, including Google and Meta, expressed concerns. The bill also saw opposition from some Congressional Democrats.
Major tech companies, including OpenAI and Google, are opposing California's proposed AI accountability bill, arguing it could stifle innovation. The bill aims to regulate AI development and hold companies accountable for potential harms.
California's proposed artificial intelligence (AI) accountability bill, SB 1047, has ignited a fierce debate between tech giants and state lawmakers. The bill, introduced by California State Senator Scott Wiener, aims to regulate AI development and hold companies accountable for potential harms caused by their AI systems [1].
Major tech companies, including OpenAI, Google, and Meta, have voiced strong opposition to the bill. OpenAI, the creator of ChatGPT, argues that the legislation would "make it impossibly costly or impractical to develop AI models in California" [2]. The company claims that the bill's requirements for extensive testing and documentation would significantly slow down AI development and potentially drive innovation out of the state.
The proposed legislation includes several key provisions:
- Mandatory safety testing for AI models that cost more than $100 million to develop or require more than a set threshold of computing power
- A "kill switch" allowing developers to fully shut down a model in an emergency
- Authority for the state attorney general to sue developers of non-compliant or unsafe models
- Third-party audits of developers' safety practices
- Legal protections for whistleblowers who report unsafe AI practices
Senator Wiener defends the bill, arguing that it is a necessary step to ensure responsible AI development. He contends that the tech companies' opposition "makes no sense" and that their claims about stifling innovation are unfounded [5]. Supporters of the bill believe it will promote transparency and accountability in the rapidly evolving AI industry.
Tech companies express concerns about the bill's potential impact on their operations and competitiveness. They argue that:
- The testing and documentation requirements would make AI development in California impossibly costly or impractical
- The bill could drive engineers, entrepreneurs, and investment out of the state
- AI regulation, particularly where national security is concerned, is better handled at the federal level than through a patchwork of state laws
- Liability exposure could chill the open-source ecosystem that smaller businesses rely on to innovate
The debate in California reflects a broader global discussion on AI regulation. As governments worldwide grapple with the challenges posed by rapidly advancing AI technology, the outcome of this legislative battle could set a precedent for future AI governance frameworks. The tension between innovation and regulation remains a central theme in shaping the future of AI development and deployment.
Reference
[1]
[2]
[3]