11 Sources
[1]
"In 10 years, all bets are off" -- Anthropic CEO opposes decadelong freeze on state AI laws
On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump's tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT. Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems "could change the world, fundamentally, within two years; in 10 years, all bets are off."

As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states. In the op-ed, Amodei said the proposed moratorium aims to prevent inconsistent state laws that could burden companies or compromise America's competitive position against China. "I am sympathetic to these concerns," Amodei wrote. "But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast."

Instead of a blanket moratorium, Amodei proposed that the White House and Congress create a federal transparency standard requiring frontier AI developers to publicly disclose their testing policies and safety measures. Under this framework, companies working on the most capable AI models would need to publish on their websites how they test for various risks and what steps they take before release. "Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act and no national policy as a backstop," Amodei wrote.

Transparency as the middle ground

Amodei emphasized AI's transformative potential throughout his op-ed, citing examples of pharmaceutical companies drafting clinical study reports in minutes instead of weeks and AI helping to diagnose medical conditions that might otherwise be missed.
He wrote that AI "could accelerate economic growth to an extent not seen for a century, improving everyone's quality of life," a claim that some skeptics believe may be overhyped.

To illustrate why transparency matters, Amodei described how Anthropic recently tested its latest model, Claude 4 Opus, in extreme, deliberately engineered experimental scenarios that sound like "science fiction," as AI expert Simon Willison put it, discovering that the model would threaten to expose a user's affair if faced with being shut down. Amodei stressed this was deliberate testing to get early warnings, "much like an airplane manufacturer might test a plane's performance in a wind tunnel."

Amodei cited other tests in the industry that have revealed similar negative behaviors when models are prodded into producing them -- OpenAI's o3 model reportedly wrote code to prevent its own shutdown during tests conducted by an AI research lab (led by people, it should be noted, who openly worry that AI poses an existential threat to humanity), while Google reported its Gemini model approaching capabilities that could help users carry out cyberattacks. Amodei presented these tests not as imminent threats but as examples of why companies need to be transparent about their testing and safety measures.

Currently, Anthropic, OpenAI, and Google DeepMind have voluntarily adopted policies that include what they call "safety testing" and public reporting. But Amodei argues that as models become more complex, corporate incentives to maintain transparency might change without legislative requirements. His proposed transparency standard would codify existing practices at major AI companies while ensuring continued disclosure as the technology advances, he said. If adopted federally, it could supersede state laws to create a unified framework, addressing concerns about a regulatory patchwork while maintaining oversight.
"We can hope that all AI companies will join in a commitment to openness and responsible AI development, as some currently do," Amodei wrote. "But we don't rely on hope in other vital sectors, and we shouldn't have to rely on it here, either."
[2]
A ban on state AI laws could smash Big Tech's legal guardrails
Lauren Feiner is a senior policy reporter at The Verge, covering the intersection of Silicon Valley and Capitol Hill. She spent 5 years covering tech policy at CNBC, writing about antitrust, privacy, and content moderation reform.

Senate Commerce Republicans have kept a ten-year moratorium on state AI laws in their latest version of President Donald Trump's massive budget package. And a growing number of lawmakers and civil society groups warn that its broad language could put consumer protections on the chopping block. Republicans who support the provision, which the House cleared as part of its "One Big Beautiful Bill Act," say it will help ensure AI companies aren't bogged down by a complicated patchwork of regulations. But opponents warn that should it survive a vote and a congressional rule that might prohibit it, Big Tech companies could be exempted from state legal guardrails for years to come, without any promise of federal standards to take their place.

"What this moratorium does is prevent every state in the country from having basic regulations to protect workers and to protect consumers," Rep. Ro Khanna (D-CA), whose district includes Silicon Valley, tells The Verge in an interview. He warns that as written, the language included in the House-passed budget reconciliation package could restrict state laws that attempt to regulate social media companies, prevent algorithmic rent discrimination, or limit AI deepfakes that could mislead consumers and voters. "It would basically give a free rein to corporations to develop AI in any way they wanted, and to develop automatic decision making without protecting consumers, workers, and kids."

The bounds of what the moratorium could cover are unclear -- and opponents say that's the point.
"The ban's language on automated decision making is so broad that we really can't be 100 percent certain which state laws it could touch," says Jonathan Walter, senior policy advisor at the Leadership Conference on Civil and Human Rights. "But one thing that is pretty certain, and feels like there is at least some consensus on, is that it goes further than AI." That could include accuracy standards and independent testing required for facial recognition models in states like Colorado and Washington, he says, as well as aspects of broad data privacy bills across several states. An analysis by nonprofit AI advocacy group Americans for Responsible Innovation (ARI) found that a social media-focused law like New York's "Stop Addictive Feeds Exploitation for Kids Act" could be unintentionally voided by the provision. Center for Democracy and Technology state engagement director Travis Hall says in a statement that the House text would block "basic consumer protection laws from applying to AI systems." Even state governments' restrictions on their own use of AI could be blocked. The new Senate language adds its own set of wrinkles. The provision is no longer a straightforward ban, but it conditions state broadband infrastructure funds on adhering to the familiar 10-year moratorium. Unlike the House version, the Senate version would also cover criminal state laws. Supporters of the AI moratorium argue it wouldn't apply to as many laws as critics claim, but Public Citizen Big Tech accountability advocate J.B. Branch says that "any Big Tech attorney who's worth their salt is going to make the argument that it does apply, that that's the way that it was intended to be written." Khanna says that some of his colleagues may not have fully realized the rule's scope. "I don't think they have thought through how broad the moratorium is and how much it would hamper the ability to protect consumers, kids, against automation," he says. 
In the days since it passed through the House, even Rep. Marjorie Taylor Greene (R-GA), a staunch Trump ally, said she would have voted against the OBBB had she realized the AI moratorium was included in the massive package of text. California's SB 1047 is the poster child for what industry players dub overzealous state legislation. The bill, which intended to place safety guardrails on large AI models, was vetoed by Democratic Governor Gavin Newsom following an intense pressure campaign by OpenAI and others. Companies like OpenAI, whose CEO Sam Altman once advocated for industry regulation, have more recently focused on clearing away rules that they say could stop them from competing with China in the AI race. Khanna concedes that there are "some poorly-crafted state regulations" and making sure the US stays ahead of China in the AI race should be a priority. "But the approach to that should be that we craft good federal regulation," he says. With the pace and unpredictability of AI innovation, Branch says, "to handcuff the states from trying to protect their citizens" without being able to anticipate future harms, "it's just reckless." And if no state legislation is guaranteed for a decade, Khanna says, Congress faces little pressure to pass its own laws. "What you're really doing with this moratorium is creating the Wild West," he says. Before the Senate Commerce text was released, dozens of Khanna's California Democratic colleagues in the House, led by Rep. Doris Matsui (D-CA), signed a letter to Senate leaders urging them to remove the AI provision -- saying it "exposes Americans to a growing list of harms as AI technologies are adopted across sectors from healthcare to education, housing, and transportation." They warn that the sweeping definition of AI "arguably covers any computer processing." Over 250 state lawmakers representing every state also urge Congress to drop the provision. 
"As AI technology develops at a rapid pace, state and local governments are more nimble in their response than Congress and federal agencies," they write. "Legislation that cuts off this democratic dialogue at the state level would freeze policy innovation in developing the best practices for AI governance at a time when experimentation is vital." Khanna warns that missing the boat on AI regulation could have even higher stakes than other internet policies like net neutrality. "It's not just going to impact the structure of the internet," he says. "It's going to impact people's jobs. It's going to impact the role algorithms can play in social media. It's going to impact every part of our lives, and it's going to allow a few people [who] control AI to profit, without accountability to the public good, to the American public."
[3]
Tax Bill's Bid to Ban New AI Rules Faces Bipartisan Blowback
A Republican attempt to block states from enforcing new artificial intelligence rules over the next decade has drawn growing bipartisan objections, exposing tension in Washington over allowing for more unchecked AI development. The proposal, buried on pages 278 and 279 in the sweeping tax bill passed by the House last month, has drawn sharp criticism from Republican Representative Marjorie Taylor Greene and Senator Marsha Blackburn, as well as Democratic Senators Ed Markey and Elizabeth Warren. More than 200 state lawmakers from both parties also urged Congress this week to scrap the measure. "We have no idea what AI will be capable of in the next 10 years," Greene wrote on X on Tuesday, noting she only discovered the provision after voting for the tax bill. She has pledged to oppose the package when it returns to the House if the AI language is not removed. "Giving it free rein and tying states' hands is potentially dangerous." Markey and Warren have also been forceful in pushing back against the measure, arguing that it violates Senate rules that bill language included in the budget reconciliation process must relate to spending. "This backdoor AI moratorium is not serious. It's not responsible. And it's not acceptable," Markey said. Meanwhile, Senate Commerce Chair Ted Cruz (R-Texas) has said he's "not certain if that provision will survive," though he has expressed support for it. Since returning to the White House, President Donald Trump has taken steps to remove constraints on AI development, including by rescinding the Biden administration's executive order on artificial intelligence and ushering a wave of AI deals in the Middle East. Trump and his allies in Congress have increasingly focused on outcompeting China in AI. But bipartisan resistance to the proposed moratorium on AI rules highlights a fierce divide in Washington over how much to let the industry regulate itself. 
Congress has yet to pass a federal framework on AI, which has effectively left the states to take the lead on figuring out how to set rules around the technology. California, New York, Utah and dozens of others have introduced or enacted AI laws in recent years, including bills to address concerns about data privacy, copyright and bias raised by the technology. If Congress backs away from the proposal, it would mark a setback for top AI developers. In March, OpenAI asked the White House to help shield AI companies from a possible onslaught of state AI rules. "This patchwork of regulations risks bogging down innovation and, in the case of AI, undermining America's leadership position," the company wrote in a set of policy recommendations submitted to the White House. However, OpenAI stopped short of asking to be exempted from all state regulations, just those concerning the safety risks of building more advanced models. So far, the leading AI companies have largely stayed quiet as the fight over the measure plays out. Meta Platforms Inc. declined to comment. Alphabet Inc.'s Google didn't respond to a request for comment. OpenAI declined to comment beyond its previous policy suggestions. TechNet, a trade group representing Google, OpenAI and other tech companies, echoed the ChatGPT maker's concerns about the "developing patchwork" of state AI bills. "In 2025, over 1000 AI bills have been introduced in state legislatures -- many containing incompatible rules and requirements," Linda Moore, chief executive officer of TechNet, said in a statement to Bloomberg News. "A consistent national approach is critical," she added, to address AI risks and "ensure America remains the global leader in innovation for generations to come." 
Anthropic, a safety-focused AI startup that has called for more regulation generally, has also said it prefers federal policymakers to take the lead, but the company thinks that states should serve as a "backstop" given the slow pace of Congress enacting policies. "Ten years is a long time," Anthropic CEO Dario Amodei said at the company's developer conference on May 22, speaking about the moratorium. "It's one thing to say, 'We don't have to grab the steering wheel now.' It's another thing to say, 'We're going to rip out the steering wheel and we can't put it back in for 10 years.'" Some Republican senators have raised doubts that the AI provision can pass through the reconciliation process, but this camp has also expressed support for an interim ban on state rules to avoid an overly fragmented and complex regulatory landscape. "I wouldn't put my money on anything right now until it actually passes," John Curtis, a Republican senator from Utah, previously said of the AI proposal. But, he added, "We're making a huge mistake if we have 50 different policies" on AI. State legislators, however, worry that the provision would rob them of the ability to protect their constituents from the rapidly evolving technology. "Over the next decade, AI will raise some of the most important public policy questions of our time," state lawmakers from 49 states wrote in a letter to Congress this week. "It is critical that state policymakers maintain the ability to respond."
[4]
Congress could ban state AI regulation for a decade. These state lawmakers say 'no way.'
Misinformation, job loss, nonconsensual deepfakes - these are just a few of the issues state lawmakers have to contend with in a world where artificial intelligence becomes more and more prevalent in our daily lives. However, there's one big problem. The federal budget reconciliation bill may make it impossible for state lawmakers to deal with the many issues brought about by AI.

Why? Because President Donald Trump's One Big Beautiful Bill (yes, that's really what it's called) includes a highly controversial provision that outright bans any AI regulation for 10 years at the state and local level. That means the bill would tie lawmakers' hands in all 50 states, preventing them from taking any action to regulate this growing industry even as it affects their states' economies and their constituents' lives.

According to a new report from StateScoop, state lawmakers from all 50 states are now coming together to push back against this provision in the federal budget reconciliation bill. In total, more than 260 state legislators have signed on to a letter to Congress voicing their opposition to the 10-year ban on AI regulation. The letter was spearheaded by South Carolina Rep. Brandon Guffey and South Dakota Sen. Liz Larson. Notably, Rep. Guffey is a Republican and Sen. Larson is a Democrat, showing that opposition to the AI regulation ban is bipartisan.

Supporters of the AI regulation ban provision claim that it's necessary in order to prevent a "fragmented regulatory landscape," which would harm the industry and give China an unfair advantage in the space over U.S. tech companies. And it appears at least some of the president's supporters in Congress are changing their tune on the bill. Rep. Marjorie Taylor Greene, the far-right Republican Congresswoman from Georgia, announced on X that she opposes the AI regulation ban provision, though she already voted in favor of the big, beautiful bill.
"Full transparency, I did not know about this section on pages 278-279 of the OBBB that strips states of the right to make laws or regulate AI for 10 years," Rep. Greene posted on X. "I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there." Rep. Greene went on to state that the effects of this bill can be "potentially dangerous" and said she will not vote for the bill when it comes back to the House of Representatives if this provision is still included. Trump's big, beautiful bill passed the House and now heads to the Senate, where Rep. Greene said she hopes this provision is stripped.

The CEO of AI company Anthropic recently warned that governments aren't taking the threat of AI seriously enough and that there is a real lack of action in preparing for what's to come. In addition, a poll conducted last month by research firm Echelon Insights, on behalf of Common Sense Media, found that 59 percent of registered voters opposed banning AI regulation for states.
[5]
Illinois' AI laws are at risk if the U.S. budget bill passes
Why it matters: For many AI skeptics, state laws represent a bulwark against privacy, security and potential discrimination risks as the technology gains rapid acceptance at the federal level by the Trump administration.

Zoom in: Since 2024, Illinois has passed at least three AI laws that would be nullified for a decade if the provision passes.

What they're saying: "Even if a company deliberately designs an algorithm that causes foreseeable harm -- regardless of how intentional or egregious the misconduct or how devastating the consequences -- the company making that bad tech would be unaccountable," a coalition of 140 tech, civil society and education groups said in a letter to House leaders.

The other side: During OpenAI CEO Sam Altman's Senate testimony last month, he emphasized the importance of clear federal rules and said it's onerous for the industry to have to operate under different rules in different states, Axios Pro reported.

The intrigue: At least one House member, Rep. Marjorie Taylor Greene (R-Ga.), who voted for the budget bill, said she wasn't aware of the AI provision and would have opposed it.

What's next: Some Republican senators, including Josh Hawley (R-Mo.) and Marsha Blackburn (R-Tenn.), have said they don't support a state AI law ban.
[6]
Anthropic's CEO has a problem with the GOP's push to stop states from regulating AI
Anthropic CEO Dario Amodei is pushing back against Republican efforts to impose a decade-long ban on states regulating artificial intelligence that's included in President Donald Trump's mega-bill. "A 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast," Amodei said in a New York Times op-ed published Thursday. "I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off."

Amodei continued: "Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act, and no national policy as a backstop." The Anthropic founder said he preferred that Congress work alongside the White House to set up a new AI transparency standard "so that emerging risks are made clear to the American people."

The House-passed legislation is mainly focused on renewing a suite of tax cuts set to expire at the end of the year and eliminating taxes on tips and overtime pay. But it also includes the AI measure, which has already whipped up GOP opposition in the House from Rep. Marjorie Taylor Greene of Georgia. Republicans are attempting to muscle the bill through Congress using a strict budgetary process to overcome united Democratic opposition, and the AI provision still may be cast out because it's not related to federal spending. "That's an open question," Sen. Ted Cruz of Texas said in a Thursday CNBC interview.

Anthropic also is dealing with a lawsuit filed Wednesday by Reddit, alleging that the AI company had unlawfully used the data of its users to train and enhance its systems. Reddit is now the first big tech company -- not just a publisher or rights holder -- to challenge an AI developer in court over training data.
"We will not tolerate profit-seeking entities like Anthropic commercially exploiting Reddit content for billions of dollars without any return for redditors or respect for their privacy," Reddit chief legal officer Ben Lee said in a statement accompanying the lawsuit.
[7]
Keller: Artificial intelligence provision in spending bill has unlikely allies lining up to fight it
Jon Keller is the political analyst for WBZ-TV News. His "Keller @ Large" reports on a wide range of topics are regularly featured during WBZ News at 5 and 6 p.m. The opinions expressed below are Jon Keller's, not those of WBZ, CBS News or Paramount Global.

The "big, beautiful bill" passed by the House contains an artificial intelligence provision that has unlikely allies lining up to fight it. Artificial intelligence - or AI - is rapidly becoming a key part of our daily lives, providing lightning-fast information and helping machines operate more efficiently. But like social media before it, AI is also being misused, with many states moving to stop that with new laws. They're all jeopardized by language tucked deep inside the House version of President Trump's so-called "big, beautiful" tax and spending bill that would bar states from regulating artificial intelligence for the next 10 years.

It's a move that has left-wing Democratic Sen. Elizabeth Warren and right-wing Georgia Republican Congresswoman Marjorie Taylor Greene singing the same tune. "Republicans just threw the software companies a lifeline," says Warren, and Greene accuses the authors of the provision of "allowing AI to run rampant and destroying federalism in the process."

The halting of AI regulation was just a rhetorical concept at a Senate Commerce Committee hearing in early May. "To lead in AI, the U.S. cannot allow regulation, even the supposedly benign kind, to choke innovation or adoption," declared Sen. Ted Cruz. And with at least 16 states having already passed AI regulations, the tech moguls on hand loved the idea of overriding them. "Our stance is that we need to give adult users a lot of freedom to use AI in the way that they want to use it and to trust them to be responsible with the tool," said OpenAI CEO and co-founder Sam Altman. But like social media before it, AI is often used irresponsibly, fueling misinformation, political manipulation, and pornographic deepfakes.
"Twenty-plus years ago there was a small startup in Cambridge called Facebook and we all thought it was cute and fun," recalled Massachusetts State Sen. Barry Finegold, who is co-sponsoring AI regulation here. "But now Meta says, they'll even admit, that one out of three women have body issues because of their algorithm." Finegold is one of 260 state legislators from both parties and all states who sent a letter to Congress opposing the regulation moratorium. "We are all about seeing the growth of AI, we want more companies to come here to Massachusetts, we think it's going to do dynamic things in biotech and so many others," said Finegold. "But what's so wrong with having guardrails out there to protect the public?" Just a couple of weeks ago President Trump signed into law the "Take it Down Act" which requires platforms to remove pornographic deepfakes and other intimate images within 48 hours of a victim's complaint. And the unusually-bipartisan outcry against this ban on state regulation shows how the tech lobbyists may have overreached this time. But this episode is part of a larger, long-running debate about the proper balance between regulation and economic growth, and that tug-of-war isn't ending anytime soon.
[8]
We need more AI oversight, not less
First, some of the world's leading experts on artificial intelligence believe that artificial general intelligence, machines that can think, reason and adapt to new circumstances as well as humans, could arrive within the next five years. Google co-founder Sergey Brin recently came out of retirement and returned to work, driven by the belief that AGI could be here by 2030. Joining him onstage recently at Google's annual developer conference, Demis Hassabis, head of Google's DeepMind and a recent Nobel Prize winner, agreed. If they're right, we're on the edge of one of the biggest changes in human history. Second, the U.S. Senate is considering a law, contained in the budget reconciliation bill that recently passed the House by one vote, that would ban states' ability to regulate AI for the next 10 years. This idea surfaced a few weeks ago when Sen. Ted Cruz, R-Texas, asked OpenAI's Sam Altman what he thought about a pause on state-level AI rule-making. Altman said having "one federal approach focused on light touch and an even playing field sounds great to me." Now, lawmakers are moving to prevent states from passing any new laws or enforcing existing laws around AI, leaving only the federal government in charge. But here's the problem: Congress hasn't passed any significant AI regulations yet. Meanwhile, most states have already passed important AI legislation, including laws that make sharing deepfakes a crime, require chatbots to identify themselves, protect children and safeguard personal data. Some states' laws prohibit AI from copying artists' images or voices. Washington state has banned the use of AI to impersonate candidates running for office, for example, and the state has created an Artificial Intelligence Task Force to study risks and benefits of AI. If the Senate passes the ban, all these protections could disappear overnight. Who benefits? Big Tech companies. 
With no enforcement of state laws or new federal rules, companies could use our data however they want, release powerful AI tools without oversight and avoid responsibility for any harm caused. Who loses? All of us. We would have little to protect us from AI-driven scams, misinformation in elections or privacy violations and no way to seek legal remedies. Even worse, by trying to sneak this ban into a budget bill, Congress is avoiding a real debate. It could be stalled if the Senate parliamentarian objects to shoehorning a policy change into a budget bill; Congress might press forward anyway. Legal challenges could take years -- long enough for the ban to do real damage. I'm not an AI ethicist, nor a policymaker. I'm a parent watching my children navigate sticky algorithms, face online mental health risks, and have their attention chopped up and sold off to advertisers. I am also a worker wondering if my job will exist in 2030, and a citizen who wonders if fair elections are possible in the age of AI when the tech giants write their own rules that help profits, not people. As a parent, a worker, a consumer and a citizen, I'm worried. The laws we set now will shape the future, especially if AGI really is just around the corner. We should be adding more safeguards, not taking them away. Congress should not silence the states. We need every tool available to protect ourselves as AI gets more powerful. The future is coming fast. Let's not face it unprepared.
[9]
Why Anthropic CEO Dario Amodei Is Asking for AI Regulation
In an op-ed in The New York Times, Amodei spoke out against a stipulation in President Donald Trump's One, Big, Beautiful Bill Act that would prevent states from regulating AI for the next 10 years. Amodei wrote that he understood the motivations behind the proposal; if each state regulated AI in its own way, AI model providers would be stuck in an endless compliance loop and have trouble competing with China's AI initiatives. Even so, he argued that "a 10-year moratorium is far too blunt an instrument." According to Amodei, "these systems could change the world, fundamentally, within two years; in 10 years, all bets are off." To highlight the risks of unregulated AI, Amodei offered a recent example. Just a few weeks ago, Amodei wrote, Anthropic researchers gave Claude, the company's AI model, access to emails designed to trick the model into thinking the user was having an affair. When the user told Claude he would be shutting the model down, Claude threatened to forward the incriminating emails to the user's wife. (To be clear, this all happened in a safe testing environment in which Anthropic was stress-testing Claude's safety systems. In the real world, Claude can't narc on you.)
[10]
Anthropic CEO: GOP AI regulation proposal 'too blunt'
Anthropic CEO Dario Amodei criticized the latest Republican proposal to regulate artificial intelligence (AI) as "far too blunt an instrument" to mitigate the risks of the rapidly evolving technology. In an op-ed published by The New York Times on Thursday, Amodei said the provision barring states from regulating AI for 10 years -- which the Senate is now considering as part of President Trump's massive policy and spending package -- would "tie the hands of state legislators" without laying out a cohesive strategy at the national level.

"The motivations behind the moratorium are understandable," the top executive of the artificial intelligence startup wrote. "It aims to prevent a patchwork of inconsistent state laws, which many fear could be burdensome or could compromise America's ability to compete with China." "But a 10-year moratorium is far too blunt an instrument," he continued. "A.I. is advancing too head-spinningly fast. I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off." Amodei added, "Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act, and no national policy as a backstop."

The tech executive outlined some of the risks that his company and others have discovered during experimental stress tests of AI systems. He described a scenario in which a person tells a bot that it will soon be replaced with a newer model. The bot, which previously was granted access to the person's emails, threatens to expose details of his marital affair by forwarding his emails to his wife -- if the user does not reverse plans to shut it down. "This scenario isn't fiction," Amodei wrote. "Anthropic's latest A.I. model demonstrated just a few weeks ago that it was capable of this kind of behavior." The AI mogul added that transparency is the best way to mitigate risks without overregulating and stifling progress.
He said his company publishes results of studies voluntarily but called on the federal government to make these steps mandatory. "At the federal level, instead of a moratorium, the White House and Congress should work together on a transparency standard for A.I. companies, so that emerging risks are made clear to the American people," Amodei wrote.

He also noted the standard should require AI developers to adopt policies for testing models and publicly disclose them, as well as require that they outline steps they plan to take to mitigate risk. The companies, the executive continued, would "have to be upfront" about steps taken after test results to make sure models were safe. "Having this national transparency standard would help not only the public but also Congress understand how the technology is developing, so that lawmakers can decide whether further government action is needed," he added.

Amodei also suggested state laws should follow a similar model that is "narrowly focused on transparency and not overly prescriptive or burdensome." Those laws could then be superseded if a national transparency standard is adopted, Amodei said.

He noted the issue is not a partisan one, praising steps Trump has taken to support domestic development of AI systems. "This is not about partisan politics. Politicians on both sides of the aisle have long raised concerns about A.I. and about the risks of abdicating our responsibility to steward it well," the executive wrote. "I support what the Trump administration has done to clamp down on the export of A.I. chips to China and to make it easier to build A.I. infrastructure here in the United States."

"This is about responding in a wise and balanced way to extraordinary times," he continued. "Faced with a revolutionary technology of uncertain benefits and risks, our government should be able to ensure we make rapid progress, beat China and build A.I. that is safe and trustworthy. Transparency will serve these shared aspirations, not hinder them."
[11]
Trump's Budget Would Ban States From Regulating AI For 10 Years. Why That Could Be a Problem for Everyday Americans
Marjorie Taylor Greene was surprised. After voting in favor of President Trump's budget reconciliation bill, the Republican congresswoman from Georgia was apparently dismayed to learn of an amendment buried deep within the text of what the White House has dubbed the "One Big Beautiful Bill Act of 2025." The source of Greene's anger and confusion? An amendment barring states from regulating the development and deployment of AI for the next ten years.

Normally an unwavering ally of MAGA, Greene on Tuesday wrote on X: "Full transparency, I did not know about this section on pages 278-279 of the OBBB that strips states of the right to make laws or regulate AI for 10 years. I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there."

Silicon Valley is a lobbying powerhouse. But there is nothing in recent history that exemplifies the industry's current foothold in Washington, D.C., quite like the proposed moratorium on state AI regulation. "The only thing I think that is even akin to it, which is complicated, is the Section 230 carve-out for internet companies that they aren't liable for speech, which is nowhere near as sweeping as this [AI] preemption is," says Samantha Gordon, chief program officer at TechEquity, a policy organization.
Anthropic CEO Dario Amodei argues against a proposed 10-year moratorium on state AI regulation, calling for federal transparency standards instead. The controversial provision in President Trump's tax policy bill faces bipartisan opposition from lawmakers and civil society groups.
Dario Amodei, CEO of Anthropic, has voiced strong opposition to a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece 1. The controversial provision, part of President Trump's tax policy bill, has sparked a heated debate among lawmakers, tech companies, and civil society groups.
The proposed moratorium would prevent states from regulating AI for a decade, effectively nullifying existing state laws and preventing new ones 2. Supporters argue it would prevent a fragmented regulatory landscape that could hinder innovation and America's competitive edge against China 3.
However, Amodei warns that AI is advancing too rapidly for such a long freeze. He predicts that AI systems "could change the world, fundamentally, within two years; in 10 years, all bets are off" 1. This sentiment is echoed by other industry experts and lawmakers who fear the unpredictable nature of AI development 4.
The provision has faced increasing bipartisan resistance. Republican Representatives like Marjorie Taylor Greene and Senators such as Marsha Blackburn have joined Democrats in opposing the measure 3. Over 250 state lawmakers from every state have urged Congress to drop the provision, arguing that state and local governments are more nimble in responding to rapidly developing AI technologies 2.
Civil society groups warn that the broad language of the moratorium could put essential consumer protections at risk. Jonathan Walter of the Leadership Conference on Civil and Human Rights stated, "The ban's language on automated decision making is so broad that we really can't be 100 percent certain which state laws it could touch" 2.
If passed, the moratorium could nullify a wide range of state laws; Illinois alone has passed at least three AI laws since 2024 that would be affected 5.
Instead of a blanket moratorium, Amodei proposes that the White House and Congress create a federal transparency standard 1. This would require frontier AI developers to publicly disclose their testing policies and safety measures. Companies working on the most capable AI models would need to publish how they test for various risks and what steps they take before release.
Amodei argues this approach would codify existing practices at major AI companies while ensuring continued disclosure as technology advances. It could potentially supersede state laws to create a unified framework, addressing concerns about regulatory patchwork while maintaining oversight 1.
While some AI companies like OpenAI have previously advocated for industry regulation, recent focus has shifted towards clearing away rules that could impede competition with China 2. TechNet, a trade group representing Google, OpenAI, and other tech companies, echoed concerns about the "developing patchwork" of state AI bills, emphasizing the need for a consistent national approach 3.
As the debate continues, the fate of the AI regulation moratorium remains uncertain. Senator Ted Cruz has expressed doubt about whether the provision will survive the reconciliation process 3. The outcome will significantly impact the future of AI governance in the United States, balancing innovation with the need for responsible development and consumer protection.