Curated by THEOUTPOST
On Thu, 15 Aug, 4:02 PM UTC
10 Sources
[1]
California trims AI safety bill amid fears of tech exodus
And as Anthropic CEO reckons there's 'a good chance ... we'll be able to get models that are better than most humans at most things'
Legislation to regulate artificial intelligence (AI) software in California has been revised in response to industry discontent with the bill, which awaits a State Assembly vote later this month. California State Senator Scott Wiener (D)'s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) has faced resistance from leading AI companies such as Anthropic and from federal lawmakers like Congressional Representative Zoe Lofgren (D-CA-18). "I'm very concerned about the effect this legislation could have on the innovation economy of California without any clear benefit for the public," wrote Lofgren in an August 7 letter [PDF] to Wiener. "There is a real risk that companies will decide to incorporate in other jurisdictions or simply not release models in California." California is home to 35 of the top 50 AI companies in the world, according to Governor Gavin Newsom (D)'s executive order last September, which calls for studying the development, use, and risks of AI technology. Wiener on Thursday acknowledged changes to the bill, citing input from Anthropic, a startup built by former OpenAI staff and others with a focus on the safe use of machine learning. "While the amendments do not reflect 100 percent of the changes requested by Anthropic - a world leader on both innovation and safety - we accepted a number of very reasonable amendments proposed, and I believe we've addressed the core concerns expressed by Anthropic and many others in the industry," Wiener said in a statement. SB 1047, co-authored by Senator Richard Roth (D-Riverside), Senator Susan Rubio (D-Baldwin Park) and Senator Henry Stern (D-Los Angeles), has the support of AI pioneers Geoffrey Hinton, emeritus professor of computer science at the University of Toronto and former AI lead at Google, and Yoshua Bengio, professor of computer science at the University of Montreal. In a statement, Hinton said, "Forty years ago when I was training the first version of the AI algorithms behind tools like ChatGPT, no one - including myself - would have predicted how far AI would progress. Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously. "SB 1047 takes a very sensible approach to balance those concerns. I am still passionate about the potential for AI to save lives through improvements in science and medicine, but it's critical that we have legislation with real teeth to address the risks. California is a natural place for that to start, as it is the place this technology has taken off." The bill focuses on "frontier models," a term that refers to state-of-the-art AI models requiring more than 10^26 integer or floating-point operations to create, at a training cost of more than $100 million using average market prices. In a recent interview with Norges Bank Investment Management CEO Nicolai Tangen, Anthropic CEO Dario Amodei said AI models now commonly cost around $100 million to train and that there are models currently being trained at a cost of about $1 billion. In the next few years, he said, the cost could go to $10 billion or $100 billion.
And if chip and algorithm improvements continue, Amodei said, at that point, "there is in my mind a good chance that by that time we'll be able to get models that are better than most humans at most things." That's the sort of scenario that concerns the public, which largely supported SB 1047 as initially written. According to an Artificial Intelligence Policy Institute (AIPI) poll, "Only 25 percent of California voters oppose the legislation." The tech industry has been less enthusiastic. Anthropic last month sent a letter [PDF] to state lawmakers outlining its problems with the bill, which aims to establish a safety regime for large AI models. The San Francisco-based biz took issue with provisions that allowed AI companies to be sued prior to the establishment of harm; the creation of a new Frontier Model Division to police frontier models; and rules covering pricing and labor that extend beyond the bill's described scope. Anthropic's proposed changes, though potentially unpopular with voters, have been largely accepted. The changes limit enforcement penalties, such as the injunctive option to require the deletion of models and their weights. Criminal perjury provisions for lying about models were dropped, based on the adequacy of existing law about lying to the government. There's no longer language that would create a Frontier Model Division, though some of the proposed responsibilities will be handled by other government bodies. And the legal standard by which developers must attest to compliance has been reduced from "reasonable assurance" to "reasonable care." An open source carveout has been made - developers spending less than $10 million to fine-tune models aren't covered by the bill. Also, whistleblower protections have been narrowed such that contractors don't have to have their own internal whistleblowing process. SB 1047 can be voted on as of August 20 and must pass by the end of the month to have a chance to advance to Governor Newsom for signature. ®
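To make the thresholds quoted in the article above concrete, here is a minimal, hypothetical Python sketch of the coverage test as described: the 10^26-operation compute floor, the $100 million training-cost floor, and the $10 million fine-tuning carveout from the amendments. The names and structure are illustrative assumptions for the sketch, not language from the bill, and the statutory text (and any later amendments) governs.

from dataclasses import dataclass

COMPUTE_THRESHOLD_OPS = 1e26          # "more than 10^26 integer or floating-point operations"
TRAINING_COST_THRESHOLD_USD = 100e6   # training cost above $100 million at average market prices
FINE_TUNE_CARVEOUT_USD = 10e6         # fine-tunes under $10 million are not treated as "developers"


@dataclass
class TrainingRun:
    total_ops: float            # integer/floating-point operations used in training
    cost_usd: float             # training cost at average market prices
    is_fine_tune: bool = False  # True if this run only fine-tunes someone else's model
    fine_tune_cost_usd: float = 0.0


def is_covered_frontier_model(run: TrainingRun) -> bool:
    """Return True if this run would fall under the bill's "frontier model"
    definition, per the thresholds quoted in the article above."""
    if run.is_fine_tune and run.fine_tune_cost_usd < FINE_TUNE_CARVEOUT_USD:
        # Open-source carveout described in the amendments: inexpensive
        # fine-tunes leave responsibility with the original developer.
        return False
    return run.total_ops > COMPUTE_THRESHOLD_OPS and run.cost_usd > TRAINING_COST_THRESHOLD_USD


# A hypothetical $1 billion, 3e26-operation training run would be covered;
# a $5 million fine-tune of such a model would not be.
print(is_covered_frontier_model(TrainingRun(total_ops=3e26, cost_usd=1_000_000_000)))  # True
print(is_covered_frontier_model(TrainingRun(total_ops=3e26, cost_usd=1_000_000_000,
                                            is_fine_tune=True, fine_tune_cost_usd=5e6)))  # False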
[2]
California amends AI safety bill after Anthropic suggestions
The bill previously came under fire from Y Combinator and a host of AI start-ups based in California amid concerns it could stifle innovation in the state.
California lawmakers have accepted amendments proposed by Anthropic and others to an AI safety bill in a bid to accommodate the unique needs of the open-source community. The bill, known as Senate Bill 1047, passed through California's Appropriations Committee yesterday (15 August) with several changes - bringing it a step closer to becoming law. SB 1047 aims to ensure the safe development of AI systems by putting more responsibilities on AI developers. The bill would force developers of large "frontier" AI models to take precautions such as safety testing, implementing safeguards to prevent misuse and post-deployment monitoring. After this week's amendments, the bill no longer allows California's attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred, a change suggested by Anthropic. Instead, the attorney general can sue a company after a catastrophic event has occurred because of its AI model. It can also request a company to cease a certain operation if it finds the operation dangerous. "We accepted a number of very reasonable amendments proposed, and I believe we've addressed the core concerns expressed by Anthropic and many others in the industry," Senator Wiener told TechCrunch in a statement. "These amendments build on significant changes to SB 1047 I made previously to accommodate the unique needs of the open source community, which is an important source of innovation." The bill has previously come under fire from investors such as Y Combinator and a host of AI start-ups based in California amid concerns the new rules could stifle innovation and "inadvertently threaten the vibrancy of California's technology economy and undermine competition". In June, Y Combinator argued in a letter signed by more than 100 start-ups that the responsibility for the misuse of large language models should rest "with those who abuse these tools, not with the developers who create them". "Developers often cannot predict all possible applications of their models and holding them liable for unintended misuse could stifle innovation and discourage investment in AI research," the letter read. "Furthermore, creating a penalty of perjury would mean that AI software developers could go to jail simply for failing to anticipate misuse of their software - a standard of product liability no other product in the world suffers from." According to the letter, the AI safety bill needs to take a more balanced approach that protects society from potential harm while also fostering an environment that is conducive to technological advancement "that is not more burdensome than other technologies have previously enjoyed". "Open-source AI, in particular, plays a critical role in democratising access to cutting-edge technology and enabling a diverse range of contributors to drive progress," it read. Similar, albeit more comprehensive, rules were approved in the EU earlier this year in the form of the AI Act, which came into force this month.
[3]
California weakens bill to prevent AI disasters before final vote, taking advice from Anthropic | TechCrunch
California's bill to prevent AI disasters, SB 1047, has faced significant opposition from many parties in Silicon Valley. Today, California lawmakers bent slightly to that pressure, adding in several amendments suggested by AI firm Anthropic and other opponents. On Thursday the bill passed through California's Appropriations Committee, a major step towards becoming law, with several key changes, Senator Wiener's office tells TechCrunch. "We accepted a number of very reasonable amendments proposed, and I believe we've addressed the core concerns expressed by Anthropic and many others in the industry," said Senator Wiener in a statement to TechCrunch. "These amendments build on significant changes to SB 1047 I made previously to accommodate the unique needs of the open source community, which is an important source of innovation." SB 1047 still aims to prevent large AI systems from killing lots of people, or causing cybersecurity events that cost over $500 million, by holding developers liable. However, the bill now grants California's government less power to hold AI labs to account. Most notably, the bill no longer allows California's attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. This was a suggestion from Anthropic. Instead, California's attorney general can seek injunctive relief, requesting a company to cease a certain operation it finds dangerous, and can still sue an AI developer if its model does cause a catastrophic event. Further, SB 1047 no longer creates the Frontier Model Division (FMD), a new government agency formerly included in the bill. However, the bill still creates the Board of Frontier Models - the core of the FMD - and places it inside the existing Government Operations Agency. In fact, the board is bigger now, with nine people instead of five. The Board of Frontier Models will still set compute thresholds for covered models, issue safety guidance, and issue regulations for auditors. Senator Wiener also amended SB 1047 so that AI labs no longer need to submit certifications of safety test results "under penalty of perjury." Now, these AI labs are simply required to submit public "statements" outlining their safety practices, but the bill no longer imposes any criminal liability. SB 1047 also now includes more lenient language around how developers ensure AI models are safe. Now, the bill requires developers to exercise "reasonable care" to ensure AI models do not pose a significant risk of causing catastrophe, instead of the "reasonable assurance" the bill required before. Further, lawmakers added in a protection for open-source fine-tuned models. If someone spends less than $10 million fine-tuning a covered model, they are explicitly not considered a developer by SB 1047. The responsibility still falls on the original, larger developer of the model. While the bill has faced significant opposition from U.S. congressmen, renowned AI researchers, Big Tech, and venture capitalists, it has flown through California's legislature with relative ease. These amendments are likely intended to appease SB 1047 opponents and present Governor Newsom with a less controversial bill he can sign into law without losing support from the AI industry. While Newsom has not publicly commented on SB 1047, he's previously indicated his commitment to California's AI innovation. That said, these changes are unlikely to appease staunch critics of SB 1047.
While the bill is notably weaker than before these amendments, SB 1047 still holds developers liable for the dangers of their AI models. That core fact about SB 1047 is not universally supported, and these amendments do little to address it. SB 1047 is now headed to California's Assembly floor for a final vote. If it passes there, it will need to be referred back to California's Senate for a vote due to these latest amendments. If it passes both, it will head to Governor Newsom's desk, where it could be vetoed or signed into law.
[4]
California AI bill is amended; vote is expected by end of August
California lawmakers have amended a bill that would create new restrictions for artificial intelligence, paving the way for first-of-their-kind safety rules that could set new standards for how tech companies develop their systems. The State Assembly's Appropriations Committee voted Thursday to endorse an amended version of the bill, SB 1047, which would require companies to test the safety of powerful AI technologies before releasing them to the public. California's attorney general would be able to sue companies if their technologies caused serious harm, such as mass property damage or human casualties. The bill has sparked fierce debate in the tech industry, with Silicon Valley giants, academics and investors taking sides on whether to regulate a nascent technology that has been hyped for both its benefits and its dangers. Sen. Scott Wiener, the author of the bill, made several concessions in an effort to appease tech industry critics like OpenAI, Meta and Google. The changes also reflect some suggestions made by another prominent startup, Anthropic. The bill would no longer create a new agency for AI safety, instead shifting regulatory duties to the existing California Government Operations Agency. And companies would be liable for violating the law only if their technologies caused real harm or imminent dangers to public safety. Previously, the bill allowed for companies to be punished for failing to adhere to safety regulations even if no harm had yet occurred. "The new amendments reflect months of constructive dialogue with industry, startup and academic stakeholders," said Dan Hendrycks, a founder of the nonprofit Center for AI Safety in San Francisco, which helped write the bill. A Google spokesperson said the company's previous concerns "still stand." Anthropic said it was still reviewing the changes. OpenAI and Meta declined to comment on the amended bill. Wiener said in a statement Thursday that "we can advance both innovation and safety; the two are not mutually exclusive." He said he believed the amendments addressed many of the tech industry's concerns. The Democratic-majority Legislature is expected to pass the bill by the end of the month. It would then go to Gov. Gavin Newsom, who has not said whether he supports it. If the bill is signed, California will again be a step ahead of the federal government on tech regulation. California's Legislature has helped shape U.S. tech consumer protections with a 2020 privacy law that curbed the collection of user data and a 2022 child online safety law. Opponents of the AI bill have argued that it would discourage tech giants from sharing the software code underlying their artificial intelligence with other businesses and software developers -- a practice known as open source. They believe this would choke the progress of smaller AI companies.
While the amendments alleviated some issues and improved on the earlier version, "the open source software community still rightly has concerns that this will be a damper on AI development," said Chris Nicholson, a partner with Page One Ventures, a venture capital firm. Several candidates for mayor in San Francisco, where much of the AI startup community is based, have questioned the bill. "San Francisco's reputation as a leader in technology and innovation is because we embrace new ideas and serve as a sanctuary for research and development, and SB 1047 clearly threatens our brand and leadership," said Mark Farrell, a former interim mayor and current candidate. Lauren Wagner, an investor and researcher who has worked for both Google and Meta, said the amended version of the bill was an improvement. But she argued that regulation of AI should be a federal issue, not something for state lawmakers. "This seems like something the federal government should take on," she said. "This is not light touch."
[5]
California advances AI regulation bill amid Silicon Valley concerns - SiliconANGLE
The California State Assembly's Appropriations Committee on Thursday voted in favor of a proposed law to regulate the artificial intelligence industry that has drawn the ire of some in Silicon Valley and federal lawmakers. SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would require developers of "frontier" models - those that cost at least $100 million to train - to implement safeguards and safety testing frameworks. The bill would also require companies with such models to undergo audits and give "reasonable assurance" that the models won't cause a catastrophe. Developers of such models would have to report their safety work to state agencies. According to Fast Company, the bill would also establish a new agency called the Frontier Model Division, which would help the California State government with enforcement of the bill and the creation of new safety standards. The bill, which has faced opposition from big tech companies including Meta Platforms Inc. and Google LLC, also proposes the establishment of "CalCompute," a publicly funded computer cluster program aimed at providing operational expertise and user support for creating "equitable" AI innovation. While the bill has strong support in the California Assembly, which is dominated by Democrats, Ro Khanna and Zoe Lofgren, Democratic members of Congress who represent Silicon Valley, have expressed concern that the bill could stifle innovation. "As the representative from Silicon Valley, I have been pushing for thoughtful regulation around artificial intelligence to protect workers and address potential risks, including misinformation, deepfakes and an increase in wealth disparity," Representative Khanna said in a statement. "I agree wholeheartedly that there is a need for legislation and appreciate the intention behind SB 1047, but am concerned that the bill as currently written would be ineffective, punishing of individual entrepreneurs and small businesses, and hurt California's spirit of innovation." Lofgren, a ranking member of the House Committee on Science, Space and Technology, said separately that the bill is "heavily skewed" toward addressing hypothetical risks "while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts and workforce displacement." With the ongoing criticism, the bill has had some minor amendments, with TechCrunch reporting that the revised bill will no longer require AI labs to submit certifications of safety test results "under penalty of perjury." With the amendments, the AI labs will only be required to submit public statements outlining their safety practices without the threat of criminal liability. Others also opposing the bill include Christopher Nguyen, chief executive officer of AI startup Aitomatic Inc., who told SiliconValley.com that the bill may impact startup companies that rely on large language models such as Meta's Llama 3.1. "We depend very much on this thriving ecosystem of open-source AI," Nguyen said. "If we can't keep state-of-the-art technology accessible, it will immediately impact the startup ecosystem, small businesses, and even the man on the street." The amended bill, as it stands, is now being forwarded to the California Assembly for approval. Given the Democratic majority, it's only a question now of when it will pass and become law.
[6]
California A.I. Bill Is Amended; Vote Is Expected by End of August
Cecilia Kang reported from Washington, and Cade Metz from San Francisco. California lawmakers have amended a bill that would create new restrictions for artificial intelligence, paving the way for first-of-their-kind safety rules that could set new standards for how tech companies develop their systems. The State Assembly's Appropriations Committee voted on Thursday to endorse an amended version of the bill, S.B. 1047, which would require companies to test the safety of powerful A.I. technologies before releasing them to the public. California's attorney general would be able to sue companies if their technologies caused serious harm, such as mass property damage or human casualties. The bill has sparked fierce debate in the tech industry, with Silicon Valley giants, academics and investors taking sides on whether to regulate a nascent technology that has been hyped for both its benefits and its dangers. Senator Scott Wiener, the author of the bill, made several concessions in an effort to appease tech industry critics like OpenAI, Meta and Google. The changes also reflect some suggestions made by another prominent start-up, Anthropic. The bill would no longer create a new agency for A.I. safety, instead shifting regulatory duties to the existing California Government Operations Agency. And companies would be liable for violating the law only if their technologies caused real harm or imminent dangers to public safety. Previously, the bill allowed for companies to be punished for failing to adhere to safety regulations even if no harm had yet occurred. "The new amendments reflect months of constructive dialogue with industry, start-up and academic stakeholders," said Dan Hendrycks, a founder of the nonprofit Center for A.I. Safety in San Francisco, which helped write the bill.
[7]
California AI bill SB 1047 aims to prevent AI disasters, but Silicon Valley warns it will cause one
Update: California's Appropriations Committee passed SB 1047 with significant amendments that change the bill on Thursday, August 15. You can read about them here. Outside of sci-fi films, there's no precedent for AI systems killing people or being used in massive cyberattacks. However, some lawmakers want to implement safeguards before bad actors make that dystopian future a reality. A California bill, known as SB 1047, tries to stop real-world disasters caused by AI systems before they happen, and it's headed for a final vote in the state's senate later in August. While this seems like a goal we can all agree on, SB 1047 has drawn the ire of Silicon Valley players large and small, including venture capitalists, big tech trade groups, researchers and startup founders. A lot of AI bills are flying around the country right now, but California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has become one of the most controversial. Here's why. What would SB 1047 do? SB 1047 tries to prevent large AI models from being used to cause "critical harms" against humanity. The bill gives examples of "critical harms" as a bad actor using an AI model to create a weapon that results in mass casualties, or instructing one to orchestrate a cyberattack causing more than $500 million in damages (for comparison, the CrowdStrike outage is estimated to have caused upwards of $5 billion). The bill makes developers -- that is, the companies that develop the models -- liable for implementing sufficient safety protocols to prevent outcomes like these. What models and companies are subject to these rules? SB 1047's rules would only apply to the world's largest AI models: ones that cost at least $100 million and use 10^26 FLOPS during training -- a huge amount of compute, but OpenAI CEO Sam Altman said GPT-4 cost about this much to train. These thresholds could be raised as needed. Very few companies today have developed public AI products large enough to meet those requirements, but tech giants such as OpenAI, Google, and Microsoft are likely to very soon. AI models -- essentially, massive statistical engines that identify and predict patterns in data -- have generally become more accurate as they've grown larger, a trend many expect to continue. Mark Zuckerberg recently said the next generation of Meta's Llama will require 10x more compute, which would put it under the authority of SB 1047. When it comes to open source models and their derivatives, the bill determined the original developer is responsible unless another developer spends three times as much creating a derivative of the original model. The bill also requires a safety protocol to prevent misuses of covered AI products, including an "emergency stop" button that shuts down the entire AI model. Developers must also create testing procedures that address risks posed by AI models, and must hire third-party auditors annually to assess their AI safety practices. The result must be "reasonable assurance" that following these protocols will prevent critical harms -- not absolute certainty, which is of course impossible to provide. Who would enforce it, and how? A new California agency, the Frontier Model Division (FMD), would oversee the rules. Every new public AI model that meets SB 1047's thresholds must be individually certified with a written copy of its safety protocol. 
The FMD would be governed by a five-person board, including representatives from the AI industry, open source community and academia, appointed by California's governor and legislature. The board will advise California's attorney general on potential violations of SB 1047, and issue guidance to AI model developers on safety practices. A developer's chief technology officer must submit an annual certification to the FMD assessing its AI model's potential risks, how effective its safety protocol is and a description of how the company is complying with SB 1047. Similar to breach notifications, if an "AI safety incident" occurs, the developer must report it to the FMD within 72 hours of learning about the incident. If a developer fails to comply with any of these provisions, SB 1047 allows California's attorney general to bring a civil action against the developer. For a model costing $100 million to train, penalties could reach up to $10 million on the first violation and $30 million on subsequent violations. That penalty rate scales as AI models become more expensive. Lastly, the bill includes whistleblower protections for employees if they try to disclose information about an unsafe AI model to California's attorney general. What do proponents say? California State Senator Scott Wiener, who authored the bill and represents San Francisco, tells TechCrunch that SB 1047 is an attempt to learn from past policy failures with social media and data privacy, and protect citizens before it's too late. "We have a history with technology of waiting for harms to happen, and then wringing our hands," said Wiener. "Let's not wait for something bad to happen. Let's just get out ahead of it." Even if a company trains a $100 million model in Texas, or for that matter France, it will be covered by SB 1047 as long as it does business in California. Wiener says Congress has done "remarkably little legislating around technology over the last quarter century," so he thinks it's up to California to set a precedent here. When asked whether he's met with OpenAI and Meta on SB 1047, Wiener says "we've met with all the large labs." Two AI researchers who are sometimes called the "godfathers of AI," Geoffrey Hinton and Yoshua Bengio, have thrown their support behind this bill. These two belong to a faction of the AI community concerned about the dangerous, doomsday scenarios that AI technology could cause. These "AI doomers" have existed for a while in the research world, and SB 1047 could codify some of their preferred safeguards into law. Another group sponsoring SB 1047, the Center for AI Safety, wrote an open letter in May 2023 asking the world to prioritize "mitigating the risk of extinction from AI" as seriously as pandemics or nuclear war. "This is in the long-term interest of industry in California and the US more generally because a major safety incident would likely be the biggest roadblock to further advancement," said director of the Center for AI Safety, Dan Hendrycks, in an email to TechCrunch. Recently, Hendrycks' own motivations have been called into question. In July, he publicly launched a startup, Gray Swan, which builds "tools to help companies assess the risks of their AI systems," according to a press release. Following criticisms that Hendrycks' startup could stand to gain if the bill passes, potentially as one of the auditors SB 1047 requires developers to hire, he divested his equity stake in Gray Swan. "I divested in order to send a clear signal," said Hendrycks in an email to TechCrunch. 
"If the billionaire VC opposition to commonsense AI safety wants to show their motives are pure, let them follow suit." What do opponents say? A growing chorus of Silicon Valley players oppose SB 1047. Hendrycks' "billionaire VC opposition" likely refers to a16z, the venture firm founded by Marc Andreessen and Ben Horowitz, which has strongly opposed SB 1047. In early August, the venture firm's chief legal officer, Jaikumar Ramaswamy, submitted a letter to Senator Wiener, claiming the bill "will burden startups because of its arbitrary and shifting thresholds," creating a chilling effect on the AI ecosystem. As AI technology advances, it will get more expensive, meaning that more startups will cross that $100 million threshold and will be covered by SB 1047; a16z says several of their startups already receive that much for training models. Fei-Fei Li, often called the godmother of AI, broke her silence on SB 1047 in early August, writing in a Fortune column that the bill will "harm our budding AI ecosystem." While Li is a well-regarded pioneer in AI research from Stanford, she also reportedly created an AI startup called World Labs in April, valued at a billion dollars and backed by a16z. She joins influential AI academics such as fellow Stanford researcher Andrew Ng, who called the bill "an assault on open source" during a speech at a Y Combinator event in July. Open source models may create additional risk to their creators, since like any open software, they are more easily modified and deployed to arbitrary and potentially malicious purposes. Meta's chief AI scientist, Yann LeCun, said SB 1047 would hurt research efforts, and is based on an "illusion of 'existential risk' pushed by a handful of delusional think-tanks," in a post on X. Meta's Llama LLM is one of the foremost examples of an open source LLM. Startups are also not happy about the bill. Jeremy Nixon, CEO of AI startup Omniscience and founder of AGI House SF, a hub for AI startups in San Francisco, worries that SB 1047 will crush his ecosystem. He argues that bad actors should be punished for causing critical harms, not the AI labs that openly develop and distribute the technology. "There is a deep confusion at the center of the bill, that LLMs can somehow differ in their levels of hazardous capability," said Nixon. "It's more than likely, in my mind, that all models have hazardous capabilities as defined by the bill." But Big Tech, which the bill directly focuses on, is panicked about SB 1047 as well. The Chamber of Progress -- a trade group representing Google, Apple, Amazon and other Big Tech giants -- issued an open letter opposing the bill saying SB 1047 restrains free speech and "pushes tech innovation out of California." Last year, Google CEO Sundar Pichai and other tech executives endorsed the idea of federal AI regulation. U.S. Congressman Ro Khanna, who represents Silicon Valley, released a statement opposing SB 1047 on Tuesday. He expressed concerns the bill "would be ineffective, punishing of individual entrepreneurs and small businesses, and hurt California's spirit of innovation." Silicon Valley doesn't traditionally like when California sets broad tech regulation like this. In 2019, Big Tech pulled a similar card when another state privacy bill, California's Consumer Privacy Act, also threatened to change the tech landscape. 
Silicon Valley lobbied against that bill, and months before it went into effect, Amazon founder Jeff Bezos and 50 other executives wrote an open letter calling for a federal privacy bill instead. What happens next? On August 15, SB 1047 will be sent to the California Senate's Assembly floor with whatever amendments get approved. That's where bills "live or die" in California's Senate, according to Wiener. It's expected to pass, given its overwhelming support from lawmakers thus far. Anthropic submitted a number of suggested amendments to SB 1047 in late July, which Wiener says he and California's Senate policy committees are actively considering. Anthropic is the first developer of a state-of-the-art AI model to publicly signal it's willing to work with Wiener on SB 1047, even though it doesn't support the bill as it stands. This was largely seen as a win for the bill. Anthropic's proposed changes include getting rid of the FMD, reducing the Attorney General's power to sue AI developers before a harm occurs, and getting rid of the whistleblower protections provision in SB 1047. Wiener says he's generally positive about the amendments, but needs approval from several Senate policy committees before adding them to the bill. If SB 1047 passes the Senate, the bill will be sent to California Governor Gavin Newsom's desk where he will ultimately decide whether to sign the bill into law before the end of August. Wiener says he has not spoken to Newsom about the bill, and does not know his position. This bill would not go into effect immediately, as the FMD is set to be formed in 2026. Further, if the bill does pass, it's very likely to face legal challenges before then, perhaps from some of the same groups that are speaking up about it now.
[8]
A California bill to regulate AI causes alarm in Silicon Valley
A California bill that could impose restrictions on artificial intelligence has tech companies, investors and activists scrambling to explain what the first-of-its-kind legislation could mean for their industry in the state. The bill is still winding its way through the state capital in Sacramento. It is expected to reach the California state assembly appropriations committee on Thursday before facing a vote by the full assembly. If signed into law by Gov. Gavin Newsom, the bill would require companies to test the safety of powerful AI technologies before releasing them to the public. It would also allow California's attorney general to sue companies if their technologies cause serious harm, such as mass property damage or human casualties. The debate over the AI bill, called SB 1047, is a reflection of the arguments that have driven intense interest in artificial intelligence. Opponents believe it will choke the progress of technologies that promise to increase worker productivity, improve health care and fight climate change. Supporters believe the bill will help prevent disasters and place guardrails on the work of companies that are too focused on profits. Just last year, many AI experts and tech executives led public discussions about the risks of AI and even urged lawmakers in Washington to help set up those guardrails. Now, in an about-face, the tech industry is recoiling at an attempt to do exactly that in California. Because they are based in the state or do business in the state, many of the leading AI companies, including Google, Meta, Anthropic and OpenAI, would be bound by the proposed law, which could set a precedent for other states and national governments. SB 1047 arrives at a precarious time for the San Francisco Bay Area, where much of the AI startup community, as well as many of the industry's biggest companies, is based. The bill, its harshest critics argue, could push AI development into other states, just as the region is rebounding from a pandemic-induced slump. Some notable AI researchers have supported the bill, including Geoff Hinton, the former Google researcher, and Yoshua Bengio, a professor at the University of Montreal. The two have spent the past 18 months warning of the dangers of the technology. Other AI pioneers have come out against the bill, including Meta's chief AI scientist, Yann LeCun, and former Google executives and Stanford professors Andrew Ng and Fei-Fei Li. Newsom's office declined to comment. Google, Meta and Anthropic also declined to comment. An OpenAI spokesperson said the bill could slow innovation by creating an uncertain legal landscape for building AI. The company said it had expressed its concerns in meetings with the office of California state Sen. Scott Wiener, who created the bill, and that serious AI risks were national security issues that should be regulated by the federal government, not by states. The bill has roots in "AI salons" held in San Francisco. Last year, Wiener attended a series of those salons, where young researchers, entrepreneurs, activists and amateur philosophers discussed the future of artificial intelligence.
After sitting in on those discussions, Wiener said he created SB 1047, with input from the lobbying arm of the Center for AI Safety, a think tank with ties to effective altruism, a movement that has long been concerned with preventing existential threats from AI. The bill would require safety tests for systems that have development costs exceeding $100 million and that are trained using a certain amount of raw computing power. It would also create a new state agency that defines and monitors those tests. Dan Hendrycks, a founder of the Center for AI Safety, said the bill would push the largest tech companies to identify and remove harmful behavior from their most expensive technologies. "Complex systems will have unexpected behavior. You can count on it," Hendrycks said in an interview with The New York Times. "The bill is a call to make sure that these systems don't have hazards or, if the hazards do exist, that the systems have the appropriate safeguards." Today's AI technologies can help spread disinformation online, including text, still images and videos. They are also beginning to take away some jobs. But studies by OpenAI and others over the past year showed that today's AI technologies were not significantly more dangerous than search engines. Still, some AI experts argue that serious dangers are on the horizon. In one example, Dario Amodei, CEO of the high-profile AI startup Anthropic, told Congress last year that new AI technology could soon help unskilled people create large-scale biological attacks. Wiener said he was trying to head off those scary scenarios. "Historically, we have waited for bad things to happen and then wrung our hands and dealt with it later, sometimes when the horse was out of the barn and it was too late," Wiener said in an interview. "So my view is, let's try to, in a very light touch way, get ahead of the risks and anticipate the risks." Google and Meta sent letters to Wiener expressing concerns about the bill. Anthropic, Amodei's company, surprised many observers when it also opposed the bill in its current form and suggested changes that would allow companies to control their own safety testing. The company said the government should only become involved if real harms were caused. Wiener said the opposition by tech giants sent mixed messages. The companies have already promised the Biden administration and global regulators that they would test their systems for safety. "The CEOs of Meta, Google, of OpenAI -- all of them -- have volunteered to testing and that's what this bill asks them to do," he said. The bill's critics say they are worried that the safety rules will add new liability to AI development, since companies will have to make a legal promise that their models are safe before they release them. They also argue that the threat of legal action from the state attorney general will discourage tech giants from sharing their technology's underlying software code with other businesses and software developers -- a practice known as open source. Open source is common in the AI world. It allows small companies and individuals to build on the work of larger organizations, and critics of SB 1047 argue that the bill could severely limit the options of startups that do not have the resources of tech giants like Google, Microsoft and Meta. "It could stifle innovation," said Lauren Wagner, an investor and researcher who has worked for both Google and Meta. 
Open-source backers believe that sharing code allows engineers and researchers across the industry to quickly identify and fix problems and improve technologies. Jeremy Howard, an entrepreneur and AI researcher who helped create the technologies that drive the leading AI systems, said the new California bill would ensure that the most powerful AI technologies belonged solely to the biggest tech companies. And if these systems were to eventually exceed the power of the human brain, as some AI researchers believe they will, the bill would consolidate power in the hands of a few corporations. "These organizations would have more power than any country -- any entity of any kind. They would be in control of an artificial super intelligence," Howard said. "That is a recipe for disaster." Others argue that if open source development is not allowed to flourish in the United States, it will flow to other countries, including China. The solution, they argue, is to regulate how people use AI rather than regulating the creation of the core technology. "AI is like a kitchen knife, which can be used for good things, like cutting an onion, and bad things, like stabbing a person," said Sebastian Thrun, an AI researcher and serial entrepreneur who founded the self-driving car project at Google. "We shouldn't try to put an off-switch on a kitchen knife. We should try to prevent people from misusing it."
[9]
California lawmakers set to make huge decision on future of AI
California's hotly contested AI bill nears a decisive moment
A new California bill requiring AI companies to enact stricter safety standards has made its way through California's state house, much to Silicon Valley's chagrin. The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (otherwise known as SB 1047) would require developers of large "frontier" models (those requiring massive compute power or at least $100 million in training costs) to implement a safeguarding and safety-testing framework, undergo audits, and give "reasonable assurance" that the models won't cause a catastrophe. Developers would report their safety work to state agencies. The bill also calls for the creation of a new agency, called the Frontier Model Division, which would help the Attorney General and Labor Commissioner with enforcement and creating new safety standards. A long list of industry, academic, and political names has lined up to voice their disapproval of SB 1047. The VC firm Andreessen Horowitz and its influence network have produced the loudest voices in opposition. Stanford professors Fei-Fei Li and Andrew Ng have also come out against the bill following meetings with its author, Senator Scott Wiener. (Li has a billion-dollar AI company funded by Andreessen Horowitz. Ng is CEO of Landing AI.) Meta's Yann LeCun has come out against the bill. So have Reps. Ro Khanna and Zoe Lofgren.
[10]
Yoshua Bengio: California's AI safety bill will protect consumers and innovation
As a fellow AI researcher, I have enormous respect for Dr. Fei-Fei Li's scientific contributions to our field. However, I disagree with her recently published stance on California's SB 1047. I believe this bill represents a crucial, light touch and measured first step in ensuring the safe development of frontier AI systems to protect the public. Many experts in the field, including myself, agree that SB 1047 outlines a bare minimum for effective regulation of frontier AI models against foreseeable risks and that its compliance requirements are light and not prescriptive by intention. Instead, it relies on model developers to make self-assessments of risk and implement basic safety testing requirements. It also focuses on only the largest AI models -- those costing over $100 million to train -- which ensures it will not hamper innovation among startups or smaller companies. Its requirements align closely with voluntary commitments many leading AI companies have already made (notably with the White House and at the Seoul AI Summit). We cannot let corporations grade their own homework and simply put out nice-sounding assurances. We don't accept this in other technologies such as pharmaceuticals, aerospace, and food safety. Why should AI be treated differently? It is important to go from voluntary to legal commitments to level the competitive playing field among companies. I expect this bill to bolster public confidence in AI development at a time when many are questioning whether companies are acting responsibly. Critics of SB 1047 have asserted that this bill will punish developers in a manner that stifles innovation. This claim does not hold up to scrutiny. It is common sense for any sector building potentially dangerous products to be subject to regulation ensuring safety. This is what we do in everyday sectors and products from automobiles to electrical appliances to home building codes. Although hearing perspectives from industry is important, the solution cannot be to completely abandon a bill that is as targeted and measured as SB 1047. Instead, I am hopeful that, with additional key amendments, some of the main concerns from industry can be addressed, while staying true to the spirit of the bill: Protecting innovation and citizens. Another particular topic of concern for critics has been the potential impact of SB 1047 on the open-source development of cutting-edge AI. I have been a lifelong supporter of open source, but I don't view it as an end in itself that is always good no matter the circumstances. Consider, for instance, the recent case of an open-source model that is being used at a massive scale to generate child pornography. This illegal activity is outside the developer's terms of use, but now the model is released and we can never go back. With much more capable models being developed, we cannot wait for their open release before acting. For open-source models much more advanced than those that exist today, compliance with SB 1047 will not be a trivial box-checking exercise, like putting "illegal activity" outside the terms of service. I also welcome the fact that the bill requires developers to retain the ability to quickly shut down their AI models, but only if they are under their control. This exception was explicitly designed to make compliance possible for open-source developers. Overall, finding policy solutions for highly capable open-source AI is a complex issue, but the threshold of risks vs. 
benefits should be decided through a democratic process, not based on the whims of whichever AI company is most reckless or overconfident. Dr. Li calls for a "moonshot mentality" in AI development. I agree deeply with this point. I also believe this AI moonshot requires rigorous safety protocols. We simply cannot hope for companies to prioritize safety when the incentives to prioritize profits are so immense. Like Dr. Li, I would also prefer to see robust AI safety regulations at the federal level. But Congress is gridlocked and federal agencies constrained, which makes state action indispensable. In the past, California has led the way on green energy and consumer privacy, and it has a tremendous opportunity to lead again on AI. The choices we make about this field now will have profound consequences for current and future generations. SB 1047 is a positive and reasonable step towards advancing both safety and long-term innovation in the AI ecosystem, especially incentivizing research and development in AI safety. This technically sound legislation, developed with leading AI and legal experts, is direly needed, and I hope California Governor Gavin Newsom and the legislature will support it.
California's AI safety bill, SB 1047, moves forward with significant amendments following tech industry input. The bill aims to regulate AI development while balancing innovation and safety concerns.
California's ambitious artificial intelligence (AI) safety bill, SB 1047, is moving forward in the state legislature after undergoing significant amendments. The bill, initially introduced to prevent potential AI disasters, has been modified following input from tech industry leaders, particularly AI company Anthropic [1].
The original bill required AI companies to conduct rigorous testing and obtain third-party audits before releasing new AI models. However, the amended version has softened some of these requirements:
- California's attorney general can no longer sue AI companies for negligent safety practices before a catastrophic event has occurred, though the office can still seek injunctive relief and sue after a model causes harm.
- The proposed Frontier Model Division is gone; its responsibilities shift to existing government bodies, with a nine-member Board of Frontier Models housed in the Government Operations Agency.
- Safety certifications no longer need to be submitted "under penalty of perjury"; labs instead publish statements describing their safety practices, with no criminal liability attached.
- The compliance standard drops from "reasonable assurance" to "reasonable care."
- Developers who spend less than $10 million fine-tuning a covered model are not treated as developers under the bill, and whistleblower protections have been narrowed.
The amendments came after extensive lobbying from tech companies, with Anthropic playing a significant role in shaping the changes. The company argued that the original bill could hinder innovation and push AI development out of California [2].
Despite the modifications, some tech leaders remain concerned. A Google spokesperson said the company's earlier objections "still stand," and open-source advocates warn the revised rules could still dampen AI development and favor the largest companies [4].
Proponents of the bill, including its author, state Senator Scott Wiener, maintain that the legislation still provides crucial oversight for AI development. They argue that the amendments strike a balance between ensuring safety and fostering innovation [5].
The amended bill is expected to face a final vote by the end of August 2024. If passed, it would establish California as a pioneer in AI regulation, potentially influencing similar legislation in other states and at the federal level [4].
As the AI industry continues to evolve rapidly, the outcome of this bill could have far-reaching implications for the future of AI development and regulation in the United States and beyond.