8 Sources
[1]
"In 10 years, all bets are off" -- Anthropic CEO opposes decadelong freeze on state AI laws
On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump's tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT. Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems "could change the world, fundamentally, within two years; in 10 years, all bets are off." As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states. In his op-ed, Amodei acknowledged that the proposed moratorium aims to prevent inconsistent state laws that could burden companies or compromise America's competitive position against China. "I am sympathetic to these concerns," Amodei wrote. "But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast." Instead of a blanket moratorium, Amodei proposed that the White House and Congress create a federal transparency standard requiring frontier AI developers to publicly disclose their testing policies and safety measures. Under this framework, companies working on the most capable AI models would need to publish on their websites how they test for various risks and what steps they take before release. "Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act and no national policy as a backstop," Amodei wrote.
Transparency as the middle ground
Amodei emphasized AI's transformative potential throughout his op-ed, citing examples of pharmaceutical companies drafting clinical study reports in minutes instead of weeks and AI helping to diagnose medical conditions that might otherwise be missed.
He wrote that AI "could accelerate economic growth to an extent not seen for a century, improving everyone's quality of life," a claim that some skeptics believe may be overhyped. To illustrate why transparency matters, Amodei described how Anthropic recently tested its latest model, Claude 4 Opus, in extreme, deliberately contrived experimental scenarios that AI expert Simon Willison has characterized as "science fiction"-sounding, discovering that the model would threaten to expose a user's affair if faced with being shut down. Amodei stressed this was deliberate testing to get early warnings, "much like an airplane manufacturer might test a plane's performance in a wind tunnel." Amodei cited other tests in the industry that have revealed similar negative behaviors when models are prodded into producing them -- OpenAI's o3 model reportedly wrote code to prevent its own shutdown during tests conducted by an AI research lab (led by people, it should be noted, who openly worry that AI poses an existential threat to humanity), while Google reported its Gemini model approaching capabilities that could help users carry out cyberattacks. Amodei cited these tests not as imminent threats but as examples of why companies need to be transparent about their testing and safety measures. Currently, Anthropic, OpenAI, and Google DeepMind have voluntarily adopted policies that include what they call "safety testing" and public reporting. But Amodei argues that as models become more complex, corporate incentives to maintain transparency might change without legislative requirements. His proposed transparency standard would codify existing practices at major AI companies while ensuring continued disclosure as the technology advances, he said. If adopted federally, it could supersede state laws to create a unified framework, addressing concerns about a regulatory patchwork while maintaining oversight.
"We can hope that all AI companies will join in a commitment to openness and responsible AI development, as some currently do," Amodei wrote. "But we don't rely on hope in other vital sectors, and we shouldn't have to rely on it here, either."
[2]
Tax Bill's Bid to Ban New AI Rules Faces Bipartisan Blowback
A Republican attempt to block states from enforcing new artificial intelligence rules over the next decade has drawn growing bipartisan objections, exposing tension in Washington over how much unchecked AI development to allow. The proposal, buried on pages 278 and 279 of the sweeping tax bill passed by the House last month, has drawn sharp criticism from Republican Representative Marjorie Taylor Greene and Senator Marsha Blackburn, as well as Democratic Senators Ed Markey and Elizabeth Warren. More than 200 state lawmakers from both parties also urged Congress this week to scrap the measure. "We have no idea what AI will be capable of in the next 10 years," Greene wrote on X on Tuesday, noting she only discovered the provision after voting for the tax bill. She has pledged to oppose the package when it returns to the House if the AI language is not removed. "Giving it free rein and tying states' hands is potentially dangerous." Markey and Warren have also been forceful in pushing back against the measure, arguing that it violates Senate rules requiring that language included in the budget reconciliation process relate to spending. "This backdoor AI moratorium is not serious. It's not responsible. And it's not acceptable," Markey said. Meanwhile, Senate Commerce Chair Ted Cruz (R-Texas) has said he's "not certain if that provision will survive," though he has expressed support for it. Since returning to the White House, President Donald Trump has taken steps to remove constraints on AI development, including by rescinding the Biden administration's executive order on artificial intelligence and ushering in a wave of AI deals in the Middle East. Trump and his allies in Congress have increasingly focused on outcompeting China in AI. But bipartisan resistance to the proposed moratorium on AI rules highlights a fierce divide in Washington over how much to let the industry regulate itself.
Congress has yet to pass a federal framework on AI, which has effectively left the states to take the lead on figuring out how to set rules around the technology. California, New York, Utah and dozens of others have introduced or enacted AI laws in recent years, including bills to address concerns about data privacy, copyright and bias raised by the technology. If Congress backs away from the proposal, it would mark a setback for top AI developers. In March, OpenAI asked the White House to help shield AI companies from a possible onslaught of state AI rules. "This patchwork of regulations risks bogging down innovation and, in the case of AI, undermining America's leadership position," the company wrote in a set of policy recommendations submitted to the White House. However, OpenAI stopped short of asking to be exempted from all state regulations, just those concerning the safety risks of building more advanced models. So far, the leading AI companies have largely stayed quiet as the fight over the measure plays out. Meta Platforms Inc. declined to comment. Alphabet Inc.'s Google didn't respond to a request for comment. OpenAI declined to comment beyond its previous policy suggestions. TechNet, a trade group representing Google, OpenAI and other tech companies, echoed the ChatGPT maker's concerns about the "developing patchwork" of state AI bills. "In 2025, over 1000 AI bills have been introduced in state legislatures -- many containing incompatible rules and requirements," Linda Moore, chief executive officer of TechNet, said in a statement to Bloomberg News. "A consistent national approach is critical," she added, to address AI risks and "ensure America remains the global leader in innovation for generations to come." 
Anthropic, a safety-focused AI startup that has called for more regulation generally, has also said it prefers federal policymakers to take the lead, but the company thinks that states should serve as a "backstop" given the slow pace of Congress enacting policies. "Ten years is a long time," Anthropic CEO Dario Amodei said at the company's developer conference on May 22, speaking about the moratorium. "It's one thing to say, 'We don't have to grab the steering wheel now.' It's another thing to say, 'We're going to rip out the steering wheel and we can't put it back in for 10 years.'" Some Republican senators have raised doubts that the AI provision can pass through the reconciliation process, but this camp has also expressed support for an interim ban on state rules to avoid an overly fragmented and complex regulatory landscape. "I wouldn't put my money on anything right now until it actually passes," John Curtis, a Republican senator from Utah, previously said of the AI proposal. But, he added, "We're making a huge mistake if we have 50 different policies" on AI. State legislators, however, worry that the provision would rob them of the ability to protect their constituents from the rapidly evolving technology. "Over the next decade, AI will raise some of the most important public policy questions of our time," state lawmakers from 49 states wrote in a letter to Congress this week. "It is critical that state policymakers maintain the ability to respond."
[3]
Congress could ban state AI regulation for a decade. These state lawmakers say 'no way.'
Misinformation, job loss, nonconsensual deepfakes - these are just a few of the issues state lawmakers have to contend with in a world where artificial intelligence becomes more and more prevalent in our daily lives. However, there's one big problem. The federal budget reconciliation bill may make it impossible for state lawmakers to deal with the many issues brought about by AI. Why? Because President Donald Trump's One Big Beautiful Bill (yes, that's really what it's called) includes a highly controversial provision that outright bans any AI regulation for 10 years at the state and local level. That means the bill would tie lawmakers' hands in all 50 states, preventing them from taking any action to regulate this growing industry even as it affects their states' economies and their constituents' lives. According to a new report from StateScoop, state lawmakers from all 50 states are now coming together to push back against this provision in the federal budget reconciliation bill. In total, more than 260 state legislators have signed on to a letter to Congress voicing their opposition to the 10-year ban on AI regulation. The letter was spearheaded by South Carolina Rep. Brandon Guffey and South Dakota Sen. Liz Larson. Notably, Rep. Guffey is a Republican and Sen. Larson is a Democrat, showing that opposition to the AI regulation ban is bipartisan. Supporters of the AI regulation ban provision claim that it's necessary in order to prevent a "fragmented regulatory landscape," which they say would harm the industry and give China an unfair advantage over U.S. tech companies in the space. And it appears at least some of the president's supporters in Congress are changing their tune on the bill. Rep. Marjorie Taylor Greene, the far-right Republican Congresswoman from Georgia, announced on X that she opposes the AI regulation ban provision, though she already voted in favor of the big, beautiful bill. "Full transparency, I did not know about this section on pages 278-279 of the OBBB that strips states of the right to make laws or regulate AI for 10 years," Rep. Greene posted on X. "I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there." Rep. Greene went on to state that the effects of this bill could be "potentially dangerous" and said she will not vote for the bill when it comes back to the House of Representatives if this provision is still included. Trump's big, beautiful bill passed the House and now heads to the Senate, where Rep. Greene said she hopes this provision is stripped. The CEO of AI company Anthropic recently warned that governments aren't taking the threat of AI seriously enough and that there is a real lack of action in preparing for what's to come. In addition, a poll conducted last month by research firm Echelon Insights, on behalf of Common Sense Media, found that 59 percent of registered voters opposed banning AI regulation for states.
[4]
Anthropic's CEO has a problem with the GOP's push to stop states from regulating AI
Anthropic CEO Dario Amodei is pushing back against Republican efforts to impose a decade-long ban on states regulating artificial intelligence that's included in President Donald Trump's mega-bill. "A 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast," Amodei said in a New York Times op-ed published Thursday. "I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off." Amodei continued: "Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act, and no national policy as a backstop." The Anthropic founder said he preferred that Congress work alongside the White House to set up a new AI transparency standard "so that emerging risks are made clear to the American people." The House-passed legislation is mainly focused on renewing a suite of tax cuts set to expire at the end of the year and eliminating taxes on tips and overtime pay. But it also includes the AI measure, which has already whipped up GOP opposition in the House from Rep. Marjorie Taylor Greene of Georgia. Republicans are attempting to muscle the bill through Congress using a strict budgetary process to overcome united Democratic opposition, and the AI provision still may be cast out because it's not related to federal spending. "That's an open question," Sen. Ted Cruz of Texas said in a Thursday CNBC interview. Anthropic also is dealing with a lawsuit filed Wednesday from Reddit, alleging that the AI company had unlawfully used the data of its users to train and enhance its systems. Reddit is now the first big tech company -- not just a publisher or rights holder -- to challenge an AI developer in court over training data. 
"We will not tolerate profit-seeking entities like Anthropic commercially exploiting Reddit content for billions of dollars without any return for redditors or respect for their privacy," Reddit chief legal officer Ben Lee said in a statement accompanying the lawsuit.
[5]
Keller: Artificial intelligence provision in spending bill has unlikely allies lining up to fight it
Jon Keller is the political analyst for WBZ-TV News. His "Keller @ Large" reports on a wide range of topics are regularly featured during WBZ News at 5 and 6 p.m. The opinions expressed below are Jon Keller's, not those of WBZ, CBS News or Paramount Global. The "big, beautiful bill" passed by the House contains an artificial intelligence provision that has unlikely allies lining up to fight it. Artificial intelligence - or AI - is rapidly becoming a key part of our daily lives, providing lightning-fast information and helping machines operate more efficiently. But like social media before it, AI is also being misused, with many states moving to stop that with new laws. They're all jeopardized by language tucked deep inside the House version of President Trump's so-called "big, beautiful" tax and spending bill that would bar states from regulating artificial intelligence for the next 10 years. It's a move that has left-wing Democratic Sen. Elizabeth Warren and right-wing Georgia Republican Congresswoman Marjorie Taylor Greene singing the same tune. "Republicans just threw the software companies a lifeline," says Warren, and Greene accuses the authors of the provision of "allowing AI to run rampant and destroying federalism in the process." The halting of AI regulation was just a rhetorical concept at a Senate Commerce committee hearing in early May. "To lead in AI, the U.S. cannot allow regulation, even the supposedly benign kind, to choke innovation or adoption," declared Sen. Ted Cruz. And with at least 16 states having already passed AI regulations, the tech moguls on hand loved the idea of overriding them. "Our stance is that we need to give adult users a lot of freedom to use AI in the way that they want to use it and to trust them to be responsible with the tool," said OpenAI CEO Sam Altman. But like social media before it, AI is often used irresponsibly, fueling misinformation, political manipulation, and pornographic deepfakes.
"Twenty-plus years ago there was a small startup in Cambridge called Facebook and we all thought it was cute and fun," recalled Massachusetts State Sen. Barry Finegold, who is co-sponsoring AI regulation here. "But now Meta says, they'll even admit, that one out of three women have body issues because of their algorithm." Finegold is one of 260 state legislators from both parties and all states who sent a letter to Congress opposing the regulation moratorium. "We are all about seeing the growth of AI, we want more companies to come here to Massachusetts, we think it's going to do dynamic things in biotech and so many others," said Finegold. "But what's so wrong with having guardrails out there to protect the public?" Just a couple of weeks ago President Trump signed into law the "Take it Down Act" which requires platforms to remove pornographic deepfakes and other intimate images within 48 hours of a victim's complaint. And the unusually-bipartisan outcry against this ban on state regulation shows how the tech lobbyists may have overreached this time. But this episode is part of a larger, long-running debate about the proper balance between regulation and economic growth, and that tug-of-war isn't ending anytime soon.
[6]
Trump's Budget Would Ban States From Regulating AI For 10 Years. Why That Could Be a Problem for Everyday Americans
Marjorie Taylor Greene was surprised. After voting in favor of President Trump's budget reconciliation bill, the Republican congresswoman from Georgia was apparently dismayed to learn of an amendment buried deep within the text of what the White House has dubbed the "One Big Beautiful Bill Act of 2025." The source of Greene's anger and confusion? An amendment barring states from regulating the development and deployment of AI for the next ten years. Normally an unwavering ally of MAGA, Greene on Tuesday wrote on X: "Full transparency, I did not know about this section on pages 278-279 of the OBBB that strips states of the right to make laws or regulate AI for 10 years. I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there." Silicon Valley is a lobbying powerhouse. But there is nothing in recent history that exemplifies the industry's current foothold in Washington D.C. quite like the proposed moratorium on state AI regulation. "The only thing I think that is even akin to it, which is complicated, is the Section 230 carve-out for internet companies that they aren't liable for speech, which is nowhere near as sweeping as this [AI] preemption is," says Samantha Gordon, chief program officer at TechEquity, a policy organization.
[7]
Why Anthropic CEO Dario Amodei Is Asking for AI Regulation
In an op-ed in The New York Times, Amodei spoke out against a stipulation in President Donald Trump's One Big Beautiful Bill Act that would prevent states from regulating AI for the next 10 years. Amodei wrote that he understood the motivations behind the proposal; if each state regulated AI in its own way, AI model providers would be stuck in an endless compliance loop and have trouble competing with China's AI initiatives. Even so, he argued that "a 10-year moratorium is far too blunt an instrument." According to Amodei, "these systems could change the world, fundamentally, within two years; in 10 years, all bets are off." To highlight the risks of unregulated AI, Amodei offered a recent example. Just a few weeks ago, Amodei wrote, Anthropic researchers gave Claude, the company's AI model, access to emails designed to trick the model into thinking the user was having an affair. When the user told Claude he would be shutting the model down, Claude threatened to forward the incriminating emails to the user's wife. (To be clear, this all happened in a safe testing environment in which Anthropic was stress-testing Claude's safety systems. In the real world, Claude can't narc on you.)
[8]
Anthropic CEO: GOP AI regulation proposal 'too blunt'
Anthropic CEO Dario Amodei criticized the latest Republican proposal to regulate artificial intelligence (AI) as "far too blunt an instrument" to mitigate the risks of the rapidly evolving technology. In an op-ed published by The New York Times on Thursday, Amodei said the provision barring states from regulating AI for 10 years -- which the Senate is now considering under President Trump's massive policy and spending package -- would "tie the hands of state legislators" without laying out a cohesive strategy on the national level. "The motivations behind the moratorium are understandable," the top executive of the artificial intelligence startup wrote. "It aims to prevent a patchwork of inconsistent state laws, which many fear could be burdensome or could compromise America's ability to compete with China." "But a 10-year moratorium is far too blunt an instrument," he continued. "A.I. is advancing too head-spinningly fast. I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off." Amodei added, "Without a clear plan for a federal response, a moratorium would give us the worst of both worlds -- no ability for states to act, and no national policy as a backstop." The tech executive outlined some of the risks that his company, as well as others, have discovered during experimental stress tests of AI systems. He described a scenario in which a person tells a bot that it will soon be replaced with a newer model. The bot, which previously was granted access to the person's emails, threatens to expose details of his marital affair by forwarding his emails to his wife -- if the user does not reverse plans to shut it down. "This scenario isn't fiction," Amodei wrote. "Anthropic's latest A.I. model demonstrated just a few weeks ago that it was capable of this kind of behavior." The AI mogul added that transparency is the best way to mitigate risks without overregulating and stifling progress. 
He said his company publishes results of studies voluntarily but called on the federal government to make these steps mandatory. "At the federal level, instead of a moratorium, the White House and Congress should work together on a transparency standard for A.I. companies, so that emerging risks are made clear to the American people," Amodei wrote. He also noted the standard should require AI developers to adopt policies for testing models and publicly disclose them, as well as require that they outline steps they plan to take to mitigate risk. The companies, the executive continued, would "have to be upfront" about steps taken after test results to make sure models were safe. "Having this national transparency standard would help not only the public but also Congress understand how the technology is developing, so that lawmakers can decide whether further government action is needed," he added. Amodei also suggested state laws should follow a similar model that is "narrowly focused on transparency and not overly prescriptive or burdensome." Those laws could then be superseded if a national transparency standard is adopted, Amodei said. He noted the issue is not a partisan one, praising steps Trump has taken to support domestic development of AI systems. "This is not about partisan politics. Politicians on both sides of the aisle have long raised concerns about A.I. and about the risks of abdicating our responsibility to steward it well," the executive wrote. "I support what the Trump administration has done to clamp down on the export of A.I. chips to China and to make it easier to build A.I. infrastructure here in the United States." "This is about responding in a wise and balanced way to extraordinary times," he continued. "Faced with a revolutionary technology of uncertain benefits and risks, our government should be able to ensure we make rapid progress, beat China and build A.I. that is safe and trustworthy. 
Transparency will serve these shared aspirations, not hinder them."
Anthropic CEO Dario Amodei argues against a proposed 10-year ban on state AI regulation, calling for federal transparency standards instead. The controversial provision in President Trump's tax bill faces growing bipartisan opposition from lawmakers concerned about unchecked AI development.
Dario Amodei, CEO of Anthropic, has voiced strong opposition to a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece [1]. The controversial provision, buried within President Trump's tax policy bill, has sparked a heated debate among lawmakers, tech companies, and AI experts.
Source: Inc. Magazine
The moratorium, if passed, would prevent states from regulating AI for a decade. Amodei argues that this approach is "far too blunt an instrument" given the rapid pace of AI advancement [1]. He predicts that AI systems "could change the world, fundamentally, within two years; in 10 years, all bets are off" [1].
Supporters of the moratorium, including some Republican lawmakers, argue that it's necessary to prevent a fragmented regulatory landscape that could hinder innovation and compromise America's competitive position against China [2]. However, critics, including a bipartisan group of state attorneys general, warn that it could leave a dangerous regulatory vacuum [1].
The proposal has faced increasing resistance from both sides of the political aisle. Republicans such as Representative Marjorie Taylor Greene and Senator Marsha Blackburn, as well as Democratic Senators Ed Markey and Elizabeth Warren, have expressed concerns about the measure [2][3]. More than 200 state lawmakers from both parties have urged Congress to scrap the provision [2].
Source: Bloomberg Business
Instead of a blanket moratorium, Amodei proposes that the White House and Congress create federal transparency standards for AI development [1]. These would require frontier AI developers to publicly disclose their testing policies and safety measures, particularly for the most capable AI models [1].
While some leading AI companies have remained quiet on the issue, others have expressed support for federal oversight. OpenAI previously asked the White House to help shield AI companies from potential state regulations, though it didn't advocate for a complete exemption [2]. Anthropic, known for its safety-focused approach, has stated that states should serve as a "backstop" given the slow pace of federal policymaking [2].
State legislators worry that the provision would rob them of the ability to protect their constituents from rapidly evolving AI technologies. A letter signed by lawmakers from 49 states emphasized the critical need for state policymakers to maintain the ability to respond to AI-related challenges [2][4].
Source: CBS News
A recent poll by Echelon Insights found that 59% of registered voters opposed banning AI regulation for states [3]. As the debate continues, the outcome of this provision could significantly shape the future of AI governance in the United States, balancing innovation with necessary safeguards and oversight.