4 Sources
[1]
What a Proposed Moratorium on State AI Rules Could Mean for You
States couldn't enforce regulations on artificial intelligence technology for a decade under a plan being considered in the US House of Representatives. The legislation, in an amendment to the federal government's budget bill, says no state or political subdivision "may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems or automated decision systems" for 10 years. The proposal would still need the approval of both chambers of Congress and President Donald Trump before it can become law. The House is expected to vote on the full budget package this week.

AI developers and some lawmakers have said federal action is necessary to keep states from creating a patchwork of different rules and regulations across the US that could slow the technology's growth. The rapid growth in generative AI since ChatGPT exploded on the scene in late 2022 has led companies to fit the technology in as many spaces as possible. The economic implications are significant, as the US and China race to see which country's tech will predominate, but generative AI poses privacy, transparency and other risks for consumers that lawmakers have sought to temper.

"We need, as an industry and as a country, one clear federal standard, whatever it may be," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April hearing. "But we need one, we need clarity as to one federal standard and have preemption to prevent this outcome where you have 50 different standards."

Efforts to limit the ability of states to regulate artificial intelligence could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life.
"There have been a lot of discussions at the state level, and I would think that it's important for us to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI. "We could approach it at the national level. We can approach it at the state level too. I think we need both."

The proposed language would bar states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations are already starting to pop up.

The biggest focus is not in the US, but in Europe, where the European Union has already implemented standards for AI. But states are starting to get in on the action. Colorado passed a set of consumer protections last year, set to go into effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that often deal with specific issues such as deepfakes or require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination if AI systems are used in hiring.

"States are all over the map when it comes to what they want to regulate in AI," said Arsen Kourinian, partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures.

In the House committee hearing last month, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. "We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead," he said. While some states have laws on the books, not all of them have gone into effect or seen any enforcement.
That limits the potential short-term impact of a moratorium, said Cobun Zweifel-Keegan, managing director in Washington for the International Association of Privacy Professionals. "There isn't really any enforcement yet."

A moratorium would likely deter state legislators and policymakers from developing and proposing new regulations, Zweifel-Keegan said. "The federal government would become the primary and potentially sole regulator around AI systems," he said.

AI developers have asked for any guardrails placed on their work to be consistent and streamlined. During a Senate Commerce Committee hearing last week, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system "would be disastrous" for the industry. Altman suggested instead that the industry develop its own standards. Asked by Sen. Brian Schatz, a Democrat from Hawaii, if industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good but, "It's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences."

(Disclosure: Ziff Davis, parent company of CNET, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Concerns from companies -- both the developers that create AI systems and the "deployers" who use them in interactions with consumers -- often stem from fears that states will mandate significant work such as impact assessments or transparency notices before a product is released, Kourinian said. Consumer advocates have said more regulations are needed, and that hampering states' ability to act could hurt the privacy and safety of users.
"AI is being used widely to make decisions about people's lives without transparency, accountability or recourse -- it's also facilitating chilling fraud, impersonation and surveillance," Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement. "A 10-year pause would lead to more discrimination, more deception and less control -- simply put, it's siding with tech companies over the people they impact."

A moratorium on specific state rules and laws could result in more consumer protection issues being dealt with in court or by state attorneys general, Kourinian said. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. "Time will tell how judges will interpret those issues," he said.

Susarla said the pervasiveness of AI across industries means states might be able to regulate issues like privacy and transparency more broadly, without focusing on the technology. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. "It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand, we also need to recognize that there can be real consequences," she said.

Much policy around the governance of AI systems does happen because of those so-called technology-agnostic rules and laws, Zweifel-Keegan said. "It's worth also remembering that there are a lot of existing laws and there is a potential to make new laws that don't trigger the moratorium but do apply to AI systems as long as they apply to other systems," he said.

House Democrats have said the proposed pause on regulations would hinder states' ability to protect consumers. Rep. Jan Schakowsky called the move "reckless" in a committee hearing on AI regulation Wednesday. "Our job right now is to protect consumers," the Illinois Democrat said.
Republicans, meanwhile, contended that state regulations could be too much of a burden on innovation in artificial intelligence. Rep. John Joyce, a Pennsylvania Republican, said in the same hearing that Congress should create a national regulatory framework rather than leaving it to the states. "We need a federal approach that ensures consumers are protected when AI tools are misused, and in a way that allows innovators to thrive."

At the state level, a letter signed by 40 state attorneys general -- of both parties -- called for Congress to reject the moratorium and instead create that broader regulatory system. "This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI," they wrote.
[2]
The One Big Beautiful Bill Act would ban states from regulating AI
Congressional Republicans have included a moratorium on state AI regulations in their budget bill.

Buried in the Republican budget bill is a proposal that will radically change how artificial intelligence develops in the U.S., according to both its supporters and critics. The provision would ban states from regulating AI for the next decade.

Opponents say the moratorium is so broadly written that states wouldn't be able to enact protections for consumers affected by harmful applications of AI, like discriminatory employment tools, deepfakes, and addictive chatbots. Instead, consumers would have to wait for Congress to pass its own federal legislation to address those concerns; Congress currently has no draft of such a bill. If Congress fails to act, consumers will have little recourse until the end of the decade-long ban, unless they decide to sue companies responsible for alleged harms.

Proponents of the proposal, which include the Chamber of Commerce, say that it will ensure America's global dominance in AI by freeing small and large companies from what they describe as a burdensome patchwork of state-by-state regulations. But many say the provision's scope, scale, and timeline are without precedent -- and a big gift to tech companies, including ones that donated to President Donald Trump.

This week, a coalition of 77 advocacy organizations, including Common Sense Media, Fairplay, and the Center For Humane Technology, called on congressional leadership to jettison the provision from the GOP-led budget. "By wiping out all existing and future state AI laws without putting new federal protections in place, AI companies would get exactly what they want: no rules, no accountability, and total control," the coalition wrote in an open letter.

Some states already have AI-related laws on the books.
In Tennessee, for example, a state law known as the ELVIS Act was written to prevent the impersonation of a musician's voice using AI. Republican Sen. Marsha Blackburn, who represents Tennessee in Congress, recently hailed the act's protections and said a moratorium on regulation can't come before a federal bill.

Other states have drafted legislation to address specific emerging concerns, particularly related to youth safety. California has two bills that would place guardrails on AI companion platforms, which advocates say are currently not safe for teens. One of the bills specifically outlaws high-risk uses of AI, including "anthropomorphic chatbots that offer companionship" to children and are likely to lead to emotional attachment or manipulation.

Camille Carlton, policy director at the Center for Humane Technology, says that while remaining competitive amid greater regulation may be a valid concern for smaller AI companies, states are not proposing or passing expansive restrictions that would fundamentally hinder them. Nor are they targeting companies' ability to innovate in areas that would make America truly world-leading, like health care, security, and the sciences. Instead, they are focused on key areas of safety, like fraud and privacy. They're also tailoring bills to cover larger companies or offering tiered responsibilities appropriate to a company's size.

Historically, tech companies have lobbied against certain state regulations, arguing that federal legislation would be preferable, Carlton says. But they then lobby Congress to water down or kill those federal bills too, she notes. Arguably, that's why Congress hasn't passed any major encompassing consumer protections related to digital technology in the decades since the internet became ascendant, Carlton says. She adds that consumers may see the same pattern play out with AI, too.
Some experts are particularly worried that a hands-off approach to regulating AI will only repeat what happened when social media companies first operated without much interference. They say that came at the cost of youth mental health.

Gaia Bernstein, a tech policy expert and professor at the Seton Hall University School of Law, says that states have increasingly been at the forefront of regulating social media and tech companies, particularly with regard to data privacy and youth safety. Now they're doing the same for AI. Bernstein says that in order to protect kids from excessive screen time and other online harms, states also need to regulate AI, because of how frequently the technology is used in algorithms. Presumably, the moratorium would prohibit states from doing so.

"Most protections are coming from the states. Congress has largely been unable to do anything," Bernstein says. "If you're saying that states cannot do anything, then it's very alarming, because where are any protections going to come from?"
[3]
GOP push to ban state AI laws ignites debate: What to know
House Republicans pushed forward this week with a bid to ban state regulation of artificial intelligence (AI), sparking debate among the tech community and lawmakers over its implications for the emerging tech. The proposal passed the House on Thursday morning as part of a sweeping tax and spending bill. Now, it faces an uphill battle in the Senate in the wake of procedural rules and potential resistance from some GOP senators.

What to know

The proposal, tucked into President Trump's "one big, beautiful bill," calls for a 10-year moratorium on state laws regulating AI models, systems or automated decision systems. This includes enforcement of existing and future laws on the state level.

Proponents of the moratorium argue a patchwork of state laws can be confusing or burdensome for technology companies to follow when operating in multiple parts of the country. "Right now, there are over a thousand bills on the topic of AI regulation pending in state legislatures across the country," Rep. Jay Obernolte (R-Calif.) said during the House Energy and Commerce Committee's markup of the measure. "Imagine how difficult it would be for a federal agency that operates in all 50 states to have to navigate this labyrinth of regulation when we potentially have 50 different states going 50 different directions on the topic of AI regulation," Obernolte added, referring to the ongoing push to incorporate AI into federal agencies. "This is exactly the same circumstances that we are putting private industry in as they attempt to deploy AI," he said.

The bill includes some exemptions for states' enforcement of laws focused on promoting AI development. This includes regulations that seek to remove barriers or facilitate the use of AI models and systems, or those focused on streamlining processes like licensing or permitting to help AI adoption.
The push aligns with the Trump administration's broader pro-innovation agenda that prioritizes technology development over regulations that hamper U.S. innovation and competitiveness. Vice President Vance in February slammed what he called "excessive" regulations of AI, while Trump rolled back former President Biden's AI executive orders, which he believes hampered innovation.

No federal framework yet

Most supporters of a moratorium make clear they are not against regulation as a whole but believe it should be done at the federal level for a unified standard for companies to easily follow. And while lawmakers have discussed a federal AI framework for years, no effort has made significant progress.

The House Task Force on AI released a sweeping report at the end of last year that proposed a federal regulatory framework. Obernolte, the co-leader of the task force, expressed frustration during the markup that Congress has not moved on this. "Congress needs to get its act together and codify some of the things in this report," he said, adding, "A moratorium is appropriate and then that will allow us a little bit of runway to get our job done and regulate."

Meanwhile, many Democrats are against the moratorium over concerns it is overreaching and risks harm to consumers in the absence of a federal standard. Democratic Rep. Doris Matsui (Calif.) called the moratorium a "slap in the face to American consumers." Matsui's home state of California is one of the country's leaders when it comes to AI legislation and regulation given its Silicon Valley community. "The U.S. should be leading in the global race for AI dominance," she said. "If we don't lead, others will. However, we can't shoot ourselves in the foot by stopping the good work that states have done and will continue to do."

Some Democrats say they would be more willing to support a moratorium if a federal framework existed already. Rep.
Scott Peters, another California Democrat, said it was a "close call" but decided to support an amendment to eliminate the provision given the lack of a federal standard. "We don't have a standard that we're offering, and I think the moratorium is too long. We should be able to do it in a much shorter period of time," Peters told Obernolte during the markup.

Should the provision be stripped from the Senate reconciliation bill, some Republicans are eyeing separate legislation, Rep. Laurel Lee (R-Fla.) told The Hill. When pressed over whether this would take place this year, Lee said Trump has brought "a lot of focus and attention to artificial intelligence and innovation, so that will likely help build enthusiasm and focus in Congress as well." "I'm open-minded," said Rep. Gus Bilirakis (R-Fla.), a senior member of the House Energy and Commerce Committee, when asked about independent legislation.

Obstacles in Senate

While House Republicans got the provision over the finish line in their chamber, it faces greater challenges in the Senate. Lawmakers are concerned the provision may not pass the Byrd Rule, a procedural rule in the Senate prohibiting "extraneous matters" from being included in reconciliation packages. This includes provisions that do not "change outlays or revenues." It is up to the Senate parliamentarian to determine whether the moratorium violates the Byrd Rule. The measure was included in a section ordering the Commerce Department to allocate funds to "modernize and secure federal information technology systems through the deployment of commercial artificial intelligence."

Moreover, at least two GOP senators known for their criticism of major tech companies voiced concerns with the moratorium this week. "We certainly know that in Tennessee we need those protections," Sen. Marsha Blackburn (R-Tenn.)
said during a hearing last week on the No Fakes Act, which would create federal protections for artists' voice, likeness and image from nonconsensual AI-generated deepfakes. Blackburn was discussing Tennessee's ELVIS Act, which resembles her No Fakes proposal. "Until we pass something that is federally preemptive, we can't call for a moratorium," she said. Punchbowl reported that Sen. Josh Hawley (R-Mo.) also pushed back against the proposal.

The proposal is also seeing pushback from some state leaders, including a group of 40 state attorneys general who called the bid "irresponsible." In a letter sent to House leadership earlier this week, a coalition of more than 140 organizations urged lawmakers to remove the provision. Signatories, which included groups like Amazon Employees for Climate Justice and nonprofit Public Citizen, argued state actions on AI so far have attempted to protect residents from risks that are otherwise ignored by Congress. Rep. Jan Schakowsky (D-Ill.), the ranking member on the House Energy and Commerce Subcommittee on Commerce, Manufacturing, and Trade, called the proposal a "giant gift to Big Tech."

Small and midsize firms reject this characterization, arguing larger technology companies have the financial and legal resources to comply with state regulations, while smaller ventures do not. "These entrepreneurs will be the ones who build the next transformative AI breakthroughs if the policy environment empowers them to do so," said John Mitchell, director for the Consumer Technology Association, a trade association representing mostly small and midsize tech firms. Mitchell said he does not believe Congress will move too slowly on a framework of its own, as the watchdogs warn. "I think that Congress is keenly aware that we could fall into the data privacy realm, where there are so many patchworks and has been detrimental to our business community," he said.
As with AI, there is no comprehensive national privacy law, despite a years-long push from some lawmakers. Bigger technology companies like OpenAI also support a light-touch federal framework that preempts what CEO Sam Altman called "burdensome" state laws during a Senate hearing earlier this month. Microsoft President Brad Smith, during the same Senate hearing, advocated for a similar approach to the limited regulation that allowed early internet commerce to develop. "There's a lot of details that need to be hammered out, but giving the federal government the ability to lead, especially in the areas around product safety and pre-release reviews and the like, would help this industry grow," Smith said.
[4]
Coalition Asks US House to Reject Freeze on State AI Regulations | PYMNTS.com
Critics argue that without state oversight, Americans remain vulnerable to AI harms, as Congress has failed to pass comprehensive federal protections.

More than 140 organizations are asking U.S. House leaders to reject a proposed 10-year freeze on state-level regulation of artificial intelligence (AI), saying it would remove corporate accountability for any harms arising from the technology. According to The Hill, the group sent a letter this week to House Speaker Mike Johnson, R-La., House Minority Leader Hakeem Jeffries, D-N.Y., and other members of Congress criticizing a provision tucked into the House's tax and spending bill.

"This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm -- regardless of how intentional or egregious the misconduct or how devastating the consequences -- the company making that bad tech would be unaccountable to lawmakers and the public," the letter stated. "We urge Congress to reject this provision," they wrote.

The list of signatories includes civil society groups, academic institutions, artists and technology workers, including Amazon Employees for Climate Justice, Public Citizen and the Alphabet Workers Union, which represents employees at Google's parent company.

The provision, which is part of President Donald Trump's bill, would stop states from enforcing laws over AI systems for the next decade. In a recent congressional hearing, tech leaders told lawmakers that one of the issues hampering progress in AI is the patchwork of state AI regulations, which makes compliance more difficult since requirements vary. OpenAI CEO Sam Altman told legislators that "light-touch" regulations would support AI infrastructure and supply chain advances. Other executives who testified -- AMD CEO Lisa Su, CoreWeave CEO Mike Intrator and Microsoft President Brad Smith -- also urged Congress to solve the issue of unwieldy state-level AI regulations.
The provision does contain carveouts for state measures that "remove legal impediments" or "facilitate the deployment or operation" of AI systems, as well as laws that "streamline licensing, permitting, routing, zoning, procurement or reporting procedures." It also would allow state laws that do not impose any substantive "design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement" on AI systems, The Hill reported.

The bill that contains the provision was approved by the House Budget Committee on Sunday, although the full bill still awaits a vote by the House.

The coalition argued that state efforts to regulate AI have so far focused on protecting residents from potential dangers that can emerge from unregulated AI use. "As we have learned during other periods of rapid technological advancement, like the industrial revolution and the creation of the automobile, protecting people from being harmed by new technologies, including by holding companies accountable when they cause harm, ultimately spurs innovation and adoption of new technologies," the letter stated. "In other words, we will only reap the benefits of AI if people have a reason to trust it."

The coalition's concerns echo PYMNTS data, which shows that more than a third of CFOs surveyed see the lack of standards as an impediment to companies investing in GenAI. That report examined how CFOs navigate GenAI investment decisions and drew on insights from 60 CFOs at U.S. firms generating at least $1 billion in revenue, surveyed from Jan. 8 to Jan. 16. The data also showed 42% of companies already using AI tools are unwilling to implement GenAI due to governance concerns.
The debate over whether states should retain authority to regulate AI comes amid growing calls from the tech industry for a single federal standard. Industry leaders argue that a national framework is needed to avoid a patchwork of conflicting rules. However, critics of preemption say Congress has failed to pass meaningful legislation on AI, leaving states to fill the gap. "Congress's inability to enact comprehensive legislation enshrining AI protections leaves millions of Americans more vulnerable to existing threats described above such as discrimination and all of us exposed to the unpredictable safety risks posed by this nascent industry," the coalition warned in its letter.
A proposal in the US House of Representatives to ban state-level AI regulations for 10 years has ignited a heated debate about consumer protection, innovation, and federal oversight of artificial intelligence.
A controversial provision in the US House of Representatives' budget bill has sparked intense debate over the regulation of artificial intelligence (AI) in the United States. The proposal, backed by House Republicans, calls for a 10-year moratorium on state-level AI regulations, effectively banning states from enforcing any laws or regulations on AI models, systems, or automated decision systems [1][2].
Proponents of the moratorium, including some lawmakers and industry leaders, argue that a patchwork of state regulations could hinder AI innovation and America's global competitiveness. They contend that a unified federal standard is necessary to provide clarity for AI developers and deployers [1][3].

Rep. Jay Obernolte (R-Calif.) emphasized the challenges faced by companies operating across multiple states, stating, "Imagine how difficult it would be for a federal agency that operates in all 50 states to have to navigate this labyrinth of regulation when we potentially have 50 different states going 50 different directions on the topic of AI regulation" [3].

Critics, including a coalition of over 140 organizations, argue that the moratorium would remove corporate accountability and leave consumers vulnerable to potential AI-related harms [4]. They warn that without state oversight, Americans would remain exposed to risks such as discrimination, privacy violations, and safety issues [2][4].

Democratic Rep. Doris Matsui (Calif.) called the moratorium a "slap in the face to American consumers," highlighting concerns about the lack of federal protections in place [3].
Several states have already enacted or proposed AI-related laws. For example:

- Colorado passed a set of consumer protections last year, set to go into effect in 2026 [1].
- California adopted more than a dozen AI-related laws last year [1].
- Tennessee's ELVIS Act prevents the impersonation of a musician's voice using AI [2].
Some tech industry leaders have called for a balanced approach to AI regulation. OpenAI CEO Sam Altman suggested that while some guardrails would be beneficial, excessive regulation could have negative consequences for the industry [1].
The debate over AI regulation is taking place against the backdrop of a global race for AI dominance, particularly between the US and China. Supporters of the moratorium argue that it would help maintain America's competitive edge in AI development [1][2].

The proposed moratorium faces challenges in the Senate, where it may encounter procedural hurdles and resistance from some Republican senators [3]. Some lawmakers have suggested that a shorter moratorium period or the development of a federal AI framework could be more appropriate alternatives [3].

As the debate continues, stakeholders from various sectors emphasize the need for a balanced approach that fosters innovation while protecting consumers and addressing potential risks associated with AI technologies [1][2][3][4].
Summarized by Navi