2 Sources
[1]
California's new AI bill threatens to stifle innovation in the U.S. - Orange County Register - ExBulletin
The California Legislature is on the brink of passing a bill that would stop artificial intelligence (AI) innovation in its tracks. The currently pending Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would stifle innovation and competition in a range of technologies of global importance at a time when China and other countries are trying to get ahead. The bill's authors briefly acknowledge the "broad benefits" that AI can bring and that much of the innovation is driven by California companies. Unfortunately, the bill creates a regulatory regime that requires state permission at every step and operates under the presumption that new AI models are dangerous until certified otherwise. Advancing technology at a bureaucratic pace is exactly the recipe for stifling innovation.

Enforcement under the bill would be overseen by a new regulatory body, the Frontier Model Division (FMD). This new body would be given broad powers to promulgate rules and guidance and to define for itself which AI models are deemed "likely to cause or enable significant harm." The FMD would also be given the power to fund itself through fees levied on companies that seek the division's approval, effectively a tax on new AI models.

While the bill is touted as regulating only the largest and most powerful computational models at the "cutting edge" of AI development, it unwisely defines a "covered model" by an arbitrary threshold of computing power and of the cost used to adapt and fine-tune the model (a rough sketch of such a threshold test appears below). As AI computing becomes more powerful and more affordable, ever more small businesses will surely exceed this threshold. While the FMD has the power to adjust the threshold, it risks falling victim to what is commonly referred to as the "pacing problem," in which technology (in this case, the speed of AI computation) advances faster than regulators can adapt. Covered model developers would also be required to incorporate a "kill switch" that would allow the FMD to shut down their models, and any derivatives, at its discretion.

To make matters worse, SB 1047 would hinder the development and dissemination of new open-source AI models by requiring AI developers to certify to the FMD that their base models, as well as any spin-offs created by others, are not likely to cause significant harm, defined as causing damages of more than $500 million. Such harms are, of course, impossible to predict in advance, yet model developers who guess wrong could be charged with perjury and face heavy fines. For example, if a bad actor figures out how to train an otherwise harmless open-source AI model to spread malware, the original model's creator could be held liable and fined up to 30 percent of the model's development costs. These risks could force emerging developers to pay high licensing fees for access to the best closed AI models, which are controlled by large tech companies, effectively creating AI monopolies.

While large, relatively closed models like Gemini and GPT-4 are useful, open-source ecosystems act as an important competitive check and provide a greater level of transparency. And while AI systems may pose some risks, heavy-handed regulatory regimes like SB 1047 ignore the revolutionary benefits that AI can bring to sectors like healthcare, agriculture, and transportation.
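To make the covered-model threshold concrete, here is a minimal sketch of how such a compute-and-cost test works. The specific figures (10^26 floating-point operations of training compute and a $100 million training cost) are assumptions drawn from public summaries of the bill rather than statutory text, and whether the statute joins the two conditions with "and" or "or" shifted across amendments; the function name is hypothetical.

```python
# Hypothetical sketch of a "covered model" threshold test.
# ASSUMPTIONS: 1e26 training FLOPs and a $100M training cost, joined
# with "and" as in later amendments; these are not the bill's exact terms.

def is_covered_model(training_flops: float,
                     training_cost_usd: float,
                     flop_threshold: float = 1e26,
                     cost_threshold: float = 100e6) -> bool:
    """Return True if a model would fall under the bill's regime."""
    return training_flops > flop_threshold and training_cost_usd > cost_threshold

# A frontier-scale training run clears the bar today...
print(is_covered_model(training_flops=3e26, training_cost_usd=2e8))   # True
# ...while a cheaper run at the same compute slips under the cost prong.
print(is_covered_model(training_flops=2e26, training_cost_usd=50e6))  # False under "and"
```

The op-ed's pacing-problem point falls directly out of the fixed constants: the thresholds sit frozen in the default arguments while the price of a FLOP keeps falling, so the same test captures ever-smaller developers over time.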
For the United States to realize these benefits and compete with the rest of the world, it is critical that we regulate AI applications based on a rational assessment of risks, rather than on fear of the technology itself. Most of AI's individual risks can be addressed by enforcing existing laws rather than by creating new AI regulators. Because much of the country's AI development takes place in California, the impact of an overly cautious regulatory framework that stifles such innovation would ripple far beyond state lines. If lawmakers continue down this path, they may end up watching as the best parts of the AI revolution are harvested elsewhere, as China and other countries invest heavily to catch up with the innovation that would occur in the United States if policymakers allowed it. Josh Withrow is a Technology and Innovation Fellow at the R Street Institute. First published: July 14, 2024, 5:55 AM
[2]
California's new AI bill threatens to stifle innovation in the U.S. - Whittier Daily News - ExBulletin
Senate President pro Tempore Mike McGuire (right) talks with state Sen. Scott Wiener (left), chairman of the Senate Budget and Fiscal Review Committee, before Wiener introduces a state budget deficit reduction bill at the Capitol in Sacramento, California, Thursday, April 11, 2024. (AP Photo/Rich Pedroncelli) The article is the same op-ed by Josh Withrow as source [1].
IBM has announced a significant advancement in quantum computing with the introduction of its 1,121-qubit processor, Condor. This development marks a crucial step towards practical quantum computing applications.

In a groundbreaking announcement, IBM has unveiled its latest achievement in quantum computing: the Condor processor, boasting an impressive 1,121 qubits [1]. This development represents a significant milestone in the field, pushing the boundaries of quantum computing capabilities and bringing us closer to practical applications of this revolutionary technology.

The Condor processor is not just a numerical upgrade; it symbolizes a quantum leap in computing power. With over 1,000 qubits, this processor opens up new possibilities for solving complex problems that are beyond the reach of classical computers. IBM's achievement is particularly noteworthy as it surpasses the previous record of 433 qubits, set by the company's own Osprey chip in 2022 [1].

Quantum computing harnesses the principles of quantum mechanics to process information. Unlike classical bits, which can be either 0 or 1, qubits can exist in multiple states simultaneously, a phenomenon known as superposition. This property allows quantum computers to perform certain calculations exponentially faster than traditional computers [2].
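To make the bit-versus-qubit distinction concrete, here is a minimal sketch in plain Python/NumPy (no quantum SDK assumed) of a single qubit placed in an equal superposition and the measurement statistics that result.

```python
import numpy as np

# A classical bit is 0 or 1. A qubit's state is a unit vector
# a|0> + b|1> in C^2; |a|^2 and |b|^2 are the measurement probabilities.
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>) / sqrt(2).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- "both" 0 and 1 until measured

# Simulate 1,000 measurements: each collapses to 0 or 1 with those probabilities.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print(np.bincount(samples))  # roughly [500 500]

# An n-qubit register needs 2**n complex amplitudes to describe, which is
# why classically simulating anything near 1,121 qubits is hopeless and
# real hardware like Condor matters.
```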
The increased qubit count of the Condor processor has far-reaching implications across various industries:

Drug Discovery: Quantum computers could simulate molecular interactions more accurately, potentially accelerating the development of new medications [2].

Financial Modeling: Complex financial simulations and risk assessments could be performed with unprecedented speed and accuracy.

Cryptography: Quantum computers pose both a threat to current encryption methods and an opportunity for developing more secure quantum encryption [2] (a toy sketch of the number theory behind that threat follows this list).

Climate Modeling: More precise climate models could be created, aiding in the understanding and mitigation of climate change.
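The encryption threat comes largely from Shor's algorithm, which factors the large integers underpinning RSA. As a rough sketch of the idea, the toy below performs the classical parts of Shor's recipe and brute-forces the period-finding step; a quantum computer's role, and its exponential advantage, is making that one step fast. The helper names are our own.

```python
from math import gcd

def find_period(a: int, N: int) -> int:
    """Brute-force the order r of a modulo N (the step Shor makes fast)."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_classical(N: int, a: int) -> tuple[int, int] | None:
    """Classical skeleton of Shor's factoring recipe for odd composite N."""
    g = gcd(a, N)
    if g != 1:
        return g, N // g  # lucky guess: a shares a factor with N already
    r = find_period(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None  # this choice of a fails; retry with another
    f = gcd(pow(a, r // 2) - 1, N)
    return f, N // f

print(shor_classical(15, 7))  # (3, 5)
```

For RSA-sized moduli the `find_period` loop takes astronomically long classically; quantum period finding collapses it, which is why post-quantum encryption schemes are being developed in parallel.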
Despite this significant advancement, quantum computing still faces challenges. Maintaining qubit coherence and minimizing errors remain ongoing issues. IBM and other tech giants are investing heavily in overcoming these obstacles, with IBM aiming to develop a 4,000+ qubit system by 2025 [1].
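As a toy illustration of why error rates are the central obstacle, and of how redundancy fights them, the sketch below simulates a classical three-bit repetition code under independent random bit flips. Real quantum error correction is considerably harder, since unknown qubit states cannot simply be copied, but the majority-vote intuition carries over.

```python
import numpy as np

rng = np.random.default_rng(42)

def send_with_repetition(bit: int, flip_prob: float, copies: int = 3) -> int:
    """Encode a bit as `copies` identical bits, flip each independently
    with probability flip_prob, then decode by majority vote."""
    encoded = np.full(copies, bit)
    flips = rng.random(copies) < flip_prob
    received = encoded ^ flips
    return int(received.sum() * 2 > copies)  # majority vote

trials = 100_000
p = 0.05  # per-bit error rate
errors = sum(send_with_repetition(1, p) != 1 for _ in range(trials))
print(f"raw error rate: {p:.3f}, corrected: {errors / trials:.4f}")
# Redundancy suppresses the error rate from 5% to 3p^2 - 2p^3, about 0.7%.
```

Quantum codes such as the surface code buy a similar suppression at the cost of many physical qubits per logical qubit, which is one reason headline qubit counts like Condor's 1,121 overstate the number of usable, error-corrected qubits.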
The race for quantum supremacy is intensifying, with companies like Google, Microsoft, and various startups also making strides in the field. However, the quantum computing community often collaborates, recognizing the immense potential of this technology for solving global challenges [2].

As quantum computing continues to evolve, it promises to revolutionize industries and tackle problems previously thought unsolvable. IBM's Condor processor marks a crucial step in this journey, bringing us closer to a future where quantum computers are an integral part of our technological landscape.
Summarized by Navi
[1] Business and Economy
[2] Business and Economy
[3] Technology