26 Sources
[1]
GOP sneaks decade-long AI regulation ban into spending bill
On Sunday night, House Republicans added language to the Budget Reconciliation bill that would block all state and local governments from regulating AI for 10 years, 404 Media reports. The provision, introduced by Representative Brett Guthrie of Kentucky, states that "no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act." The broad wording of the proposal would prevent states from enforcing both existing and proposed laws designed to protect citizens from AI systems. For example, California's recent law requiring health care providers to disclose when they use generative AI to communicate with patients would potentially become unenforceable. New York's 2021 law mandating bias audits for AI tools used in hiring decisions would also be affected, 404 Media notes. The measure would also halt legislation set to take effect in 2026 in California that requires AI developers to publicly document the data used to train their models. The ban could also restrict how states allocate federal funding for AI programs. States currently control how they use federal dollars and can direct funding toward AI initiatives that may conflict with the administration's technology priorities. The Education Department's AI programs represent one example where states might pursue different approaches than those favored by the White House and its tech industry allies. The House Committee on Energy and Commerce, chaired by Guthrie, scheduled consideration of the text during the budget reconciliation markup on May 13. The language defines AI systems broadly enough to encompass both newer generative AI tools and older automated decision-making technologies. The reconciliation bill primarily focuses on cuts to Medicaid access and increased health care fees for millions of Americans. 
The AI provision appears as an addition to these broader health care changes, potentially limiting debate on the technology's policy implications. The move is already inspiring backlash. On Monday, tech safety groups and at least one Democrat criticized the proposal, reports The Hill. Rep. Jan Schakowsky (D-Ill.), the ranking member on the Commerce, Manufacturing and Trade Subcommittee, called the proposal a "giant gift to Big Tech," while nonprofit groups like the Tech Oversight Project and Consumer Reports warned it would leave consumers unprotected from AI harms like deepfakes and bias.

Big Tech's White House connections

President Trump has already reversed several Biden-era executive orders on AI safety and risk mitigation. The push to prevent state-level AI regulation represents an escalation in the administration's industry-friendly approach to AI policy. Perhaps it's no surprise, as the AI industry has cultivated close ties with the Trump administration since before the president took office. For example, Tesla CEO Elon Musk serves in the Department of Government Efficiency (DOGE), while entrepreneur David Sacks acts as "AI czar," and venture capitalist Marc Andreessen reportedly advises the administration. OpenAI CEO Sam Altman appeared with Trump at the January announcement of an AI datacenter development plan. By limiting states' authority over AI regulation, the provision could prevent state governments from using federal funds to develop AI oversight programs or support initiatives that diverge from the administration's deregulatory stance. This restriction would extend beyond enforcement to potentially affect how states design and fund their own AI governance frameworks.
[2]
States Want to Regulate AI. Why Congress May Push Back
States wouldn't be able to enforce their own regulations on artificial intelligence technology for a decade under a plan being considered in the US House of Representatives. The legislation, to be considered Tuesday by the House Energy and Commerce Committee, says no state or political subdivision "may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems" for 10 years. The proposal would need the approval of both chambers of Congress and President Trump before it becomes law. AI developers and some lawmakers have said federal action is necessary to keep states from creating a patchwork of different rules and requirements across the country that could slow the technology's growth. The rapid growth in generative AI since ChatGPT exploded on the scene at the end of 2022 has led companies to fit the technology in as many spaces as possible. The economic implications are significant, as the US and China race to see which country's tech will predominate, but generative AI poses privacy, transparency and other risks for consumers that lawmakers have sought to temper. "We need, as an industry and as a country, one clear federal standard, whatever it may be," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April congressional hearing. "But we need one, we need clarity as to one federal standard and have preemption to prevent this outcome where you have 50 different standards." Efforts to limit states' ability to regulate the technology could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. 
"There have been a lot of discussions at the state level, and I would think that it's important for us to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI. "We could approach it at the national level. We can approach it at the state level, too. I think we need both." The proposed language would bar states from enforcing any regulation, including those already on the books. There are exceptions -- rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things would be OK. These kinds of regulations are starting to pop up already. The biggest focus isn't in the US, but in Europe, where the European Union has implemented standards for AI already. But states are starting to get in on the action. Colorado passed a set of consumer protections last year, set to go into effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations. These often deal with specific issues like deepfakes. So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. In the April House committee hearing, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. "We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead," he said. AI developers have asked for any guardrails placed on their work to be consistent and streamlined. In a hearing by the Senate Committee on Commerce, Science and Transportation last week, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system "would be disastrous" for the industry. Altman suggested instead that the industry develop its own standards. Asked by Sen. 
Brian Schatz, a Democrat from Hawaii, if industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good but, "it's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences." (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Consumer advocates say more regulations are needed, and hampering states' ability to do so could hurt the privacy and safety of users. "AI is being used widely to make decisions about people's lives without transparency, accountability, or recourse -- it's also facilitating chilling fraud, impersonation, and surveillance," Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement. "A ten-year pause would lead to more discrimination, more deception, and less control -- simply put, it's siding with tech companies over the people they impact." Susarla said the pervasiveness of AI across industries means states might be able to regulate issues like privacy and transparency more broadly, without focusing on AI. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. "It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand, we also need to recognize that there can be real consequences," she said.
[3]
Why Congress May Push Back on State AI Regulations
States will not be able to enforce their regulations on artificial intelligence technology for a decade under a plan being considered in the US House of Representatives. The legislation, in an amendment being considered this week by the House Energy and Commerce Committee, says no state or political subdivision "may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems or automated decision systems" for 10 years. The proposal would still need the approval of both chambers of Congress and President Donald Trump before it can become law. AI developers and some lawmakers have said federal action is necessary to keep states from creating a patchwork of different rules and regulations across the US that could slow the technology's growth. The rapid growth in generative AI since ChatGPT exploded on the scene in late 2022 has led companies to fit the technology in as many spaces as possible. The economic implications are significant, as the US and China race to see which country's tech will predominate, but generative AI poses privacy, transparency and other risks for consumers that lawmakers have sought to temper. "We need, as an industry and as a country, one clear federal standard, whatever it may be," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April hearing. "But we need one, we need clarity as to one federal standard and have preemption to prevent this outcome where you have 50 different standards." Efforts to limit the ability of states to regulate artificial intelligence could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. 
"There have been a lot of discussions at the state level, and I would think that it's important for us to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI. "We could approach it at the national level. We can approach it at the state level too. I think we need both." The proposed language would bar states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations are already starting to pop up. The biggest focus is not in the US, but in Europe, where the European Union has already implemented standards for AI. But states are starting to get in on the action. Colorado passed a set of consumer protections last year, set to go into effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that often deal with specific issues such as deepfakes or require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination if AI systems are used in hiring. "States are all over the map when it comes to what they want to regulate in AI," said Arsen Kourinian, partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. In the House committee hearing last month, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. "We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead," he said. AI developers have asked for any guardrails placed on their work to be consistent and streamlined. 
During a Senate Commerce Committee hearing last week, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system "would be disastrous" for the industry. Altman suggested instead that the industry develop its own standards. Asked by Sen. Brian Schatz, a Democrat from Hawaii, if industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good but, "It's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences." (Disclosure: Ziff Davis, parent company of CNET, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Concerns from companies -- both the developers that create AI systems and the "deployers" who use them in interactions with consumers -- often stem from fears that states will mandate significant work such as impact assessments or transparency notices before a product is released, Kourinian said. Consumer advocates have said more regulations are needed, and hampering states' ability to act could hurt the privacy and safety of users. "AI is being used widely to make decisions about people's lives without transparency, accountability or recourse -- it's also facilitating chilling fraud, impersonation and surveillance," Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement. "A 10-year pause would lead to more discrimination, more deception and less control -- simply put, it's siding with tech companies over the people they impact." A moratorium on specific state rules and laws could result in more consumer protection issues being dealt with in court or by state attorneys general, Kourinian said. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. "Time will tell how judges will interpret those issues," he said. 
Susarla said the pervasiveness of AI across industries means states might be able to regulate issues like privacy and transparency more broadly, without focusing on the technology. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. "It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand, we also need to recognize that there can be real consequences," she said.
[4]
Congress Might Halt State AI Regulations. What It Means for You and Your Privacy
States will not be able to enforce their regulations on artificial intelligence technology for a decade under a plan being considered in the US House of Representatives. The legislation, in an amendment accepted this week by the House Energy and Commerce Committee, says no state or political subdivision "may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems or automated decision systems" for 10 years. The proposal would still need the approval of both chambers of Congress and President Donald Trump before it can become law. AI developers and some lawmakers have said federal action is necessary to keep states from creating a patchwork of different rules and regulations across the US that could slow the technology's growth. The rapid growth in generative AI since ChatGPT exploded on the scene in late 2022 has led companies to fit the technology in as many spaces as possible. The economic implications are significant, as the US and China race to see which country's tech will predominate, but generative AI poses privacy, transparency and other risks for consumers that lawmakers have sought to temper. "We need, as an industry and as a country, one clear federal standard, whatever it may be," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April hearing. "But we need one, we need clarity as to one federal standard and have preemption to prevent this outcome where you have 50 different standards." Efforts to limit the ability of states to regulate artificial intelligence could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. 
"There have been a lot of discussions at the state level, and I would think that it's important for us to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI. "We could approach it at the national level. We can approach it at the state level too. I think we need both." The proposed language would bar states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations are already starting to pop up. The biggest focus is not in the US, but in Europe, where the European Union has already implemented standards for AI. But states are starting to get in on the action. Colorado passed a set of consumer protections last year, set to go into effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that often deal with specific issues such as deepfakes or require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination if AI systems are used in hiring. "States are all over the map when it comes to what they want to regulate in AI," said Arsen Kourinian, partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. In the House committee hearing last month, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. "We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead," he said. While some states have laws on the books, not all of them have gone into effect or seen any enforcement. 
That limits the potential short-term impact of a moratorium, said Cobun Zweifel-Keegan, managing director in Washington for the International Association of Privacy Professionals. "There isn't really any enforcement yet." A moratorium would likely deter state legislators and policymakers from developing and proposing new regulations, Zweifel-Keegan said. "The federal government would become the primary and potentially sole regulator around AI systems," he said. AI developers have asked for any guardrails placed on their work to be consistent and streamlined. During a Senate Commerce Committee hearing last week, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system "would be disastrous" for the industry. Altman suggested instead that the industry develop its own standards. Asked by Sen. Brian Schatz, a Democrat from Hawaii, if industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good but, "It's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences." (Disclosure: Ziff Davis, parent company of CNET, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Concerns from companies -- both the developers that create AI systems and the "deployers" who use them in interactions with consumers -- often stem from fears that states will mandate significant work such as impact assessments or transparency notices before a product is released, Kourinian said. Consumer advocates have said more regulations are needed, and hampering the ability of states could hurt the privacy and safety of users. 
"AI is being used widely to make decisions about people's lives without transparency, accountability or recourse -- it's also facilitating chilling fraud, impersonation and surveillance," Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement. "A 10-year pause would lead to more discrimination, more deception and less control -- simply put, it's siding with tech companies over the people they impact." A moratorium on specific state rules and laws could result in more consumer protection issues being dealt with in court or by state attorneys general, Kourinian said. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. "Time will tell how judges will interpret those issues," he said. Susarla said the pervasiveness of AI across industries means states might be able to regulate issues like privacy and transparency more broadly, without focusing on the technology. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. "It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand, we also need to recognize that there can be real consequences," she said. Much policy around the governance of AI systems does happen because of those so-called technology-agnostic rules and laws, Zweifel-Keegan said. "It's worth also remembering that there are a lot of existing laws and there is a potential to make new laws that don't trigger the moratorium but do apply to AI systems as long as they apply to other systems," he said.
[5]
Republicans push for a decadelong ban on states regulating AI
Republicans want to stop states from regulating AI. On Sunday, a Republican-led House committee submitted a budget reconciliation bill that proposes blocking states from enforcing "any law or regulation" targeting an exceptionally broad range of automated computing systems for 10 years after the law is enacted -- a move that would stall efforts to regulate everything from AI chatbots to online search results. Democrats are calling the new provision a "giant gift" to Big Tech, and organizations that promote AI oversight, like Americans for Responsible Innovation (ARI), say it could have "catastrophic consequences" for the public. It's a gift companies like OpenAI have recently been seeking in Washington, aiming to avoid a slew of pending and active state laws. The budget reconciliation process allows lawmakers to fast-track bills related to government spending by requiring only a majority in the Senate rather than 60 votes to pass.
[6]
GOP sneaks 10-year halt to AI regulation decree into budget bill
House Republicans are trying to sneak a 10-year pause on AI regulation into the Budget Reconciliation bill. Representative Brett Guthrie (R-KY), who chairs the House Committee on Energy and Commerce, introduced a provision to the Budget Reconciliation bill last Sunday night that would prevent states and local governments from enforcing any legislation on AI. According to 404 Media, this limitation was inserted into an already controversial bill, and would take away the power of individual states to regulate artificial intelligence as they see fit. The text of the proposal, which is under Title IV, Subtitle C, Part 2c of the bill, says, "In general -- except as provided in paragraph (2), no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act." This means that states would lose power over anything related to AI, as they could not implement their own rules and would have to follow the federal directive. Several states have already enacted laws that control the use of AI in their respective territories. For example, California mandates that health care providers must tell their patients if they use generative AI to communicate with them. Furthermore, AI developers in the state must document and publicly share information about the data they used to train their models -- a crucial law to help prevent AI companies from stealing copyrighted data. New York also requires businesses that use AI for hiring to conduct audits of their tools to avoid any bias. What's more concerning is that the bill's text covers such a broad spectrum of AI, including models, systems, and even "automated decision systems." This means that both new AI models and old algorithms that can make automated decisions are covered by the federal law. 
Many AI and tech companies have been trying to get closer to President Trump and the Republican Party, and it seems that their efforts are bearing fruit. Several key figures in AI have become members of the administration, including Elon Musk, former PayPal COO David Sacks, and venture capitalist Marc Andreessen, who has invested in Facebook, Twitter, and OpenAI. The current administration has also suspended or reversed former President Biden's executive orders aimed at reducing the threat of uncontrolled AI development.
[7]
US AI laws risk becoming more 'European' than Europe's
There was no mistaking the mood in the Senate room in Washington last Thursday as politicians and tech bosses debated artificial intelligence. The consensus was that it was essential for the US to deregulate and accelerate investment in order to outrun China in the latest technological arms race. Europe, meanwhile, was ridiculed as an AI also-ran having hobbled itself with "stifling" regulations. Lobbing a softball to the tech executives, Republican Senator Ted Cruz asked: how harmful would it be if the US followed the EU in creating a heavy-handed regulatory process for AI? "I think that would be disastrous," replied OpenAI's chief executive, Sam Altman. Deregulation and acceleration may be the watchwords in Washington after President Donald Trump tore up his predecessor's sweeping executive order on AI. Republicans later trumpeted almost $1tn of promised investment in the sector. But that worldview is evidently not shared across the nation. Thirty-one US states passed resolutions or laws on AI last year, according to the National Conference of State Legislatures, covering harms, such as the use of deepfakes in elections, employment discrimination and lack of consumer protection. This year, the NCSL has flagged a further 550 AI-related bills that have been introduced in 45 states. Most of these initiatives will fail, as happened to California's landmark AI bill last year, but a few may well pass. Left unchecked, that could result in the US having "a web of inconsistent laws that fragment national policy, delay innovation, and create legal and technical barriers to scaling AI systems across state lines", warns Daniel Castro, director of the Center for Data Innovation. When it comes to tech regulation, it seems, the US might end up more "European" than Europe. That fear prompted House Republicans this week to push a legislative amendment that would roll back state AI laws and impose a moratorium on any new ones for a decade. 
The move was condemned by state representatives and the AI researcher Gary Marcus. "A decade of deregulation isn't a path forward. It's an abdication of responsibility," they wrote in an open letter. Opposition politicians also highlighted the hypocrisy of revering states' rights when regulating women's bodies but abandoning them when protecting consumers from powerful tech interests. An intense battle may now erupt between Washington and the states over who has the right to regulate -- or deregulate -- technology. At the state level, there is "incredible momentum" to fill the regulatory vacuum created by Washington's inaction, according to Amba Kak, executive director of the AI Now Institute. States are determined to tackle the most "abhorrent, harmful and problematic" use cases of AI, she says. "In today's world, they're the only people who can push this regulatory agenda forward. I think the states very much see a gap, and a moment, for them to step up to the plate," she tells me. However, fragmented AI-related state legislation affecting data privacy rights and autonomous cars, for example, can cause real complications for many companies. That is particularly true in some traditional sectors, such as financial services and medicine, that are wary of adopting AI services because of a lack of trust in untested AI systems and no clear mitigation, says Rumman Chowdhury, co-founder of non-profit Humane Intelligence and a former Biden administration official. "Regulation doesn't stifle innovation. Regulation enables it," she tells me, noting there is often a "trickle up" effect from the states to the federal level. That suggests that the regulatory activism by states might yet force Washington to move, especially seeing that some members of the Maga crowd support a more interventionist approach. "Right now a nail salon in Washington DC has more regulations than these four guys running wild on AI. 
We have no earthly idea what is going on," the former Trump aide Steve Bannon told the FT Weekend Festival in DC. "I think we ought to have tremendous regulations on AI." Even the anti-regulation evangelist Cruz accepts the necessity to act in certain cases. With the Democratic senator Amy Klobuchar, he co-sponsored the recent bipartisan Take It Down Act criminalising the sharing of AI-generated sexual abuse material. That legislation was also supported by first lady Melania Trump. There may be many strange alliances and unpredictable zigzags along the way, but regulation is coming for AI -- even in the US.
[8]
AI regulation ban meets opposition from state attorneys general over risks to US consumers
May 16 (Reuters) - A Republican proposal to block states from regulating artificial intelligence for 10 years drew opposition on Friday from a bipartisan group of attorneys general in California, New York, Ohio and other states that have regulated high-risk uses of the technology. The measure included in President Donald Trump's tax cut bill would preempt AI laws and regulations passed recently in dozens of states. A group of 40 state attorneys general, including Republicans from Ohio, Tennessee, Arkansas, Utah and Virginia and other states, urged Congress to ditch the measure on Friday, as the U.S. House of Representatives' budget committee geared up for a Sunday night hearing. "Imposing a broad moratorium on all state action, while Congress fails to act in this area is irresponsible and deprives consumers of reasonable protections," said the group. The attorney general from California -- which is home to prominent AI companies, including OpenAI, Alphabet (GOOGL.O), Meta Platforms (META.O) and Anthropic -- was among the Democrats who signed the letter. "I strongly oppose any effort to block states from developing and enforcing common-sense regulation; states must be able to protect their residents by responding to emerging and evolving AI technology," Attorney General Rob Bonta said. California implemented a raft of bills this year limiting specific uses of AI, illustrating the kind of laws that would be blocked under the moratorium. Like several other states, California has criminalized the use of AI to generate sexually explicit images of individuals without their consent. The state also prohibits unauthorized deepfakes in political advertising, and requires healthcare providers to notify patients when they are interacting with an AI and not a human. Healthcare provider networks, also known as HMOs, are barred in California from using AI systems instead of doctors to decide medical necessity. 
House Republicans said in a hearing Tuesday that the measure was necessary to help the federal government in implementing AI, for which the package allocates $500 million. "It's nonsensical to do that if we're going to allow 1,000 different pending bills in state legislatures across the country to become law," said Jay Obernolte, a Republican from California who represents part of Silicon Valley, including Mountain View where Google is based. "It would be impossible for any agency that operates in all the states to be able to comply with those regulations," he said. Google has called the proposed moratorium "an important first step to both protect national security and ensure continued American AI leadership." That position will be tested if the measure makes it to the Senate. It will need to clear the budget reconciliation process, which is supposed to be used only for budget-related legislation. Reporting by Jody Godoy in New York; Editing by Aurora Ellis
[9]
House Republicans include a 10-year ban on US states regulating AI in 'big, beautiful' bill
WASHINGTON (AP) -- House Republicans surprised tech industry watchers and outraged state governments when they added a clause to Republicans' signature "big, beautiful" tax bill that would ban states and localities from regulating artificial intelligence for a decade. The brief but consequential provision, tucked into the House Energy and Commerce Committee's sweeping markup, would be a major boon to the AI industry, which has lobbied for uniform and light touch regulation as tech firms develop a technology they promise will transform society. However, while the clause would be far-reaching if enacted, it faces long odds in the U.S. Senate, where procedural rules may doom its inclusion in the GOP legislation. "I don't know whether it will pass the Byrd Rule," said Sen. John Cornyn, R-Texas, referring to a provision that requires that all parts of a budget reconciliation bill, like the GOP plan, focus mainly on budgetary matters rather than general policy aims. "That sounds to me like a policy change. I'm not going to speculate what the parliamentarian is going to do but I think it is unlikely to make it," Cornyn said. Senators in both parties have expressed an interest in artificial intelligence and believe that Congress should take the lead in regulating the technology. But while lawmakers have introduced scores of bills, including some bipartisan efforts, that would impact artificial intelligence, few have seen any meaningful advancement in the deeply divided Congress. An exception is a bipartisan bill expected to be signed into law by President Donald Trump next week that would enact stricter penalties on the distribution of intimate "revenge porn" images, both real and AI-generated, without a person's consent. "AI doesn't understand state borders, so it is extraordinarily important for the federal government to be the one that sets interstate commerce. It's in our Constitution. You can't have a patchwork of 50 states," said Sen.
Bernie Moreno, an Ohio Republican. But Moreno said he was unsure if the House's proposed ban could make it through Senate procedure. The AI provision in the bill states that "no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems." The language could bar regulations on systems ranging from popular commercial models like ChatGPT to those that help make decisions about who gets hired or finds housing. State regulations on AI's usage in business, research, public utilities, educational settings and government would be banned. The congressional pushback against state-led AI regulation is part of a broader move led by the Trump administration to do away with policies and business approaches that have sought to limit AI's harms and pervasive bias. Half of all U.S. states so far have enacted legislation regulating AI deepfakes in political campaigns, according to a tracker from the watchdog organization Public Citizen. Most of those laws were passed within the last year, as incidents in democratic elections around the globe in 2024 highlighted the threat of lifelike AI audio clips, videos and images to deceive voters. California state Sen. Scott Wiener called the Republican proposal "truly gross" in a social media post. Wiener, a San Francisco Democrat, authored landmark legislation last year that would have created first-in-the-nation safety measures for advanced artificial intelligence models. The bill was vetoed by California Gov. Gavin Newsom, a fellow San Francisco Democrat. "Congress is incapable of meaningful AI regulation to protect the public. It is, however, quite capable of failing to act while also banning states from acting," Wiener wrote. A bipartisan group of dozens of state attorneys general also sent a letter to Congress on Friday opposing the bill. 
"AI brings real promise, but also real danger, and South Carolina has been doing the hard work to protect our citizens," said South Carolina Attorney General Alan Wilson, a Republican, in a statement. "Now, instead of stepping up with real solutions, Congress wants to tie our hands and push a one-size-fits-all mandate from Washington without a clear direction. That's not leadership, that's federal overreach." As the debate unfolds, AI industry leaders are pressing ahead on research while competing with rivals to develop the best -- and most widely used -- AI systems. They have pushed federal lawmakers for uniform and unintrusive rules on the technology, saying they need to move quickly on the latest models to compete with Chinese firms. Sam Altman, the CEO of ChatGPT maker OpenAI, testified in a Senate hearing last week that a "patchwork" of AI regulations "would be quite burdensome and significantly impair our ability to do what we need to do." "One federal framework, that is light touch, that we can understand and that lets us move with the speed that this moment calls for seems important and fine," Altman told Sen. Cynthia Lummis, a Wyoming Republican. And Sen. Ted Cruz floated the idea of a 10-year "learning period" for AI at the same hearing, which included three other tech company executives. "Would you support a 10-year learning period on states issuing comprehensive AI regulation, or some form of federal preemption to create an even playing field for AI developers and employers?" asked the Texas Republican. Altman responded that he was "not sure what a 10-year learning period means, but I think having one federal approach focused on light touch and an even playing field sounds great to me." Microsoft's president, Brad Smith, also offered measured support for "giving the country time" in the way that limited U.S. regulation enabled early internet commerce to flourish. 
"There's a lot of details that need to be hammered out, but giving the federal government the ability to lead, especially in the areas around product safety and pre-release reviews and the like, would help this industry grow," Smith said. It was a change, at least in tone, for some of the executives. Altman had testified to Congress two years ago on the need for AI regulation, and Smith, five years ago, praised Microsoft's home state of Washington for its "significant breakthrough" in passing first-in-the-nation guardrails on the use of facial recognition, a form of AI. Ten GOP senators said they were sympathetic to the idea of creating a national framework for AI. But whether the majority can work with Democrats to find a filibuster-proof solution is unclear. "I am not opposed to the concept. In fact, interstate commerce would suggest that it is the responsibility of Congress to regulate these types of activities and not the states," said Sen. Mike Rounds, a South Dakota Republican. "If we're going to do it state by state we're going to have a real mess on our hands," Rounds said. O'Brien reported from Providence, Rhode Island. AP writers Ali Swenson in New York, Jesse Bedayn in Denver, Jeffrey Collins in Columbia, South Carolina, and Trân Nguyễn in Sacramento, California contributed to this report.
[10]
Republicans Pander to Big Tech With Proposed 10-Year Ban on State AI Regulations
The United States doesn't have a comprehensive legislative framework regulating AI. If states can't make laws, Big Tech can continue doing as it pleases.
Over the weekend, Republicans in the House unveiled a sweeping budget proposal that includes massive cuts to Medicaid, food assistance, climate programs, and more. But buried amongst those cuts, legislators also proposed a decade-long ban on AI regulations at the state level. Although framed as upholding innovation, the attempted moratorium is yet another clear display of the federal government pandering to the desires of Big Tech. Within the House Energy and Commerce Committee's bill, lawmakers proposed that "no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems" starting from the day that the proposal is enacted. Laws imposing "a substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement" on listed AI systems would fall under the moratorium. However, there are a few exceptions, like if the above requirements are due to federal regulation or if the law also applies to non-AI systems that "provide comparable functions". In addition, the pause isn't applicable to regulations that "remove legal impediments" or "facilitate the deployment or operation of" AI systems. The proposal comes shortly after the Commerce Committee's hearing titled "Winning the AI Race". During his testimony, OpenAI CEO Sam Altman said that allowing states to assemble a patchwork regulatory framework "will slow us down at a time where I don't think it's in anyone's interest." And sure, abiding by regulations in 50 different states is hard. But a) that's how the U.S. works: each state can have its own unique laws, and b) there wouldn't be such an amalgamation of AI regulation if the federal government actually put together its own.
Regardless of which party holds power, the U.S. is notorious for falling behind when it comes to tech-related legislation. One of the biggest examples is the U.S.'s lack of its own comprehensive federal privacy law. As a result, states have no choice but to enact piecemeal legislation to tackle a rapidly changing environment as new technologies usher in their own unique concerns. Per the National Conference of State Legislatures, at least 45 states, Puerto Rico, the Virgin Islands, and Washington, D.C., introduced AI bills. On Monday, Rep. Jan Schakowsky (D-Illinois), a ranking member of the Commerce, Manufacturing, and Trade Subcommittee, blasted the proposal as a "giant gift to Big Tech" that "shows that Republicans care more about profits than people." Similarly, the Tech Oversight Project's executive director, Sacha Haworth, told the Hill that the "so-called 'states' rights'" party's provision "is not only hypocritical, it's a massive handout to Big Tech." Haworth added that "it comes as no surprise that Big Tech is trying to stop [efforts to regulate AI] dead in its tracks." Since taking office, Trump has taken a clear stance on letting AI run wild. In January, he rescinded Biden's executive order for AI regulation and, shortly after, directed the Office of Management and Budget to overhaul its directive on federal uses of AI. Although Trump released his own AI guidance last month that copies Biden's in a few areas, his administration has overall played fast and loose with AI, with no concern for analyzing its civil rights impacts. During his opening statement at last week's meeting, Sen. Ted Cruz (R-Texas) summarized the dominant attitude towards AI regulations, stating, "All of this busybody bureaucracy - whether Biden's industrial policy on chip exports or industry and regulator-approved 'guidance' documents - is a wolf in sheep's clothing. To lead in AI, the U.S. cannot allow regulation, even the supposedly benign kind, to choke innovation or adoption."
Currently, the proposed moratorium's full scope is unclear. David Stauss, an attorney with Husch Blackwell, told the International Association of Privacy Professionals, "A lot would depend on how the terms are defined." Legally speaking, AI is a nebulous term. Stauss noted that while Colorado's AI Act uses a broad definition based on the Organisation for Economic Co-operation and Development's own, other states are more limited. But if federal legislators' definition is broad, Stauss said, "all sorts of laws could be implicated, even product liability and medical malpractice laws as extreme edge cases." It's entirely possible that this proposal gets removed down the road. If it stays, its language will likely be adjusted one way or the other. But its very inclusion in House Republicans' budget bill suggests that the U.S. will continue hurtling down the AI-or-be-damned pathway with no regard for the consequences.
[11]
Opinion | A ban on state-level AI regulation would put Americans at risk
States are an important testing ground for new policy ideas. This moratorium would undermine that.
Scott Brennen is director of the NYU Center on Technology Policy. Zeve Sanderson is executive director of the NYU Center for Social Media & Politics.
Within the House Energy and Commerce Committee's new budget reconciliation bill lies an alarming provision: a decade-long moratorium on state regulation of artificial intelligence. The proposed ban is extraordinarily broad, prohibiting states from enforcing new or existing laws. Sen. Ted Cruz (R-Texas) has announced he will soon introduce similar legislation in the Senate. If adopted, it would extinguish the only meaningful effort to protect Americans from AI-related risks. In recent years, states have quietly become the front line against technology's potential harms. From data privacy to children's online safety, the federal government has failed to act. States have stepped in to fill that regulatory gap, with dozens passing meaningful legislation to protect the public, particularly when it comes to AI. Since ChatGPT's release two and a half years ago, Congress has puttered, backtracked and ultimately produced little AI regulation. States, on the other hand, took the lead. Colorado passed broad rules on algorithmic discrimination in high-risk models, Tennessee regulated the use of artists' likenesses in AI replicas and Wisconsin now requires that campaigns label political ads that use AI. Those regulations are just a sampling: As we detail in our annual report on state technology policymaking, 41 states enacted 107 pieces of AI-related legislation last year. Notably, for the past several years, the push for state-level AI regulation has been bipartisan. Democratic and Republican legislators across every state have introduced new AI regulation -- and in some cases even partnered on bills.
These lawmakers are responding to constituents across the political spectrum, a majority of whom are concerned that regulation of AI will be too lax. This year, existing state proposals already seek to address some of the most serious emerging harms posed by AI. For example, there is a growing movement to require that medical insurance companies have humans review decisions in which AI models deny insurance claims. Other laws would require that companies disclose when AI models are used to deny coverage, deny a mortgage or inflate the price of a housing rental. AI regulation is not without issues. Many proposals are misguided; some would probably create regulatory burdens and impair American innovation and competition. Given the strategic importance of the sector for the economy and national security, concerns about overregulation are warranted. But lawmaking is stronger when the states help generate new ideas for what and how to regulate -- and then test them. Regulation is hard, especially for new technologies. Policies often have unintended consequences: In some cases, state environmental regulation has worsened the housing crisis, and age verification laws have inadvertently benefited noncompliant, foreign-based pornography sites. AI regulation will bring similar pitfalls. Such challenges are exactly why states play a critical role; they can surface new ideas and produce important evidence that informs federal policymaking. For example, several weeks ago, Congress passed the Take It Down Act, which criminalizes the sharing of nonconsensual authentic and computer-generated intimate imagery. But the federal legislation was undoubtedly buoyed by state efforts; dozens of states had debated and enacted similar laws in the past five years. State AI regulation is hardly perfect. State lawmakers must research, write and negotiate complex policy issues with fewer resources than their federal counterparts.
The disjointed nature of state-level regulation means citizens are granted varying rights and protections across jurisdictions. Plus, companies find it easier to comply with uniform laws rather than navigating dozens of regulatory environments. Hence, we are not arguing that regulation should be left to the states alone. But Congress is moving to undermine the only concerted legislative effort aimed at balancing AI's myriad risks and benefits without offering federal legislation in its place. Given the speed at which companies are developing increasingly powerful AI systems, the resulting regulatory gap is especially concerning. The proposed moratorium on state AI policymaking would be far worse than Congress's inaction on the issue. States play an essential role in AI regulation -- eliminating that will leave Americans more exposed to risks and with fewer avenues to influence AI policies. And by denying states the ability to generate and test new ideas, this ban would undermine the quality of federal AI regulation, if we ever get it.
[12]
Republicans Try to Cram Ban on AI Regulation Into Budget Reconciliation Bill
Republicans try to use the Budget Reconciliation bill to stop states from regulating AI entirely for 10 years.
Late last night, House Republicans introduced new language to the Budget Reconciliation bill that will immiserate the lives of millions of Americans by cutting their access to Medicaid, and making life much more difficult for millions more by making them pay higher fees when they seek medical care. While a lot of attention will be justifiably given to these cuts, the bill has also crammed in new language that attempts to entirely stop states from enacting any regulation against artificial intelligence. "...no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act," says the text of the bill introduced Sunday night by Congressman Brett Guthrie of Kentucky, Chairman of the House Committee on Energy and Commerce. The text of the bill will be considered by the House at the budget reconciliation markup on May 13. The bill's language, including how it goes on to define AI and other "automated systems" and what it considers "regulation," is broad enough to cover relatively new generative AI tools and technology that has existed for much longer. In theory, that language will make it impossible to enforce many existing and proposed state laws that aim to protect people from and inform them about AI systems. For example, last year California passed a law that requires health care providers to disclose when they have used generative AI to communicate clinical information to patients. In 2021, New York passed the first law in the United States requiring employers to conduct bias audits of AI tools used for employment decisions.
California also passed a law, set to go into effect in 2026, that requires developers of generative AI models to share detailed documentation on their websites about the data they used to develop these models, an extremely consequential law as AI companies are currently hiding their exploitation of copyrighted materials in order to create these models, as we have shown repeatedly. In theory none of these states will be able to enforce these laws if Republicans manage to pass the Budget Reconciliation bill with this current language. The AI industry has been sucking up to Trump since before he got into office, and his administration is intertwined with AI executives, be it Elon Musk at DOGE, David Sacks as an AI czar, or Marc Andreessen as an advisor. Trump has returned the favor by undoing Biden-era executive orders aimed at mitigating AI risk. Preventing states from charting their own paths on this issue and trying to protect people from these systems will be one of the most radical positions Republicans have taken on this issue yet.
[13]
Republicans propose prohibiting US states from regulating AI for 10 years
Last-minute add to budget bill aims to prevent laws that would create guardrails for automated decision-making systems
Republicans in US Congress are trying to bar states from being able to introduce or enforce laws that would create guardrails for artificial intelligence or automated decision-making systems for 10 years. A provision in the proposed budgetary bill now before the House of Representatives would prohibit any state or local governing body from pursuing "any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems" unless the purpose of the law is to "remove legal impediments to, or facilitate the deployment or operation of" these systems. The provision was a last-minute addition by House Republicans to the bill just two nights before it was due to be marked up on Tuesday. The House energy and commerce committee voted to advance the reconciliation package on Wednesday morning. The bill defines AI systems and models broadly, with anything from facial recognition systems to generative AI qualifying. The proposed law would also apply to systems that use algorithms or AI to make decisions about hiring, housing and whether someone qualifies for public benefits. Many of these automated decision-making systems have recently come under fire. The deregulatory proposal comes on the heels of a lawsuit filed by several state attorneys general against property management software RealPage, which the lawsuit alleges colluded with landlords to raise rents based on the company's algorithmic recommendations. Another company, SafeRent, recently settled a class action lawsuit filed by Black and Hispanic renters who say they were denied apartments based on an opaque score the company gave them. Some states have already inked laws that would attempt to establish safeguards around these systems. New York, for instance, passed a law requiring that automated hiring systems undergo bias assessments.
California has passed several laws regulating automated decision-making, including one that requires healthcare providers to notify patients when they send communications using generative AI. These laws may become unenforceable if the reconciliation bill passes. "This bill is a sweeping and reckless attempt to shield some of the largest and most powerful corporations in the world - from Big Tech monopolies to RealPage, UnitedHealth Group and others - from any sort of accountability," said Lee Hepner, senior legal counsel at the American Economic Liberties Project. The new language is in line with Trump administration actions that aim to remove any perceived impediments to AI development. Upon taking office, Donald Trump immediately revoked a Biden administration executive order that created safety guardrails for the deployment and development of AI. Silicon Valley has long held that any regulation stifles innovation, and several prominent members of the tech industry either joined or backed the US president's campaign, leading the administration to echo the same sentiment. "State lawmakers across the country are stepping up with real solutions to real harms - this bill is a preemptive strike to shut those down before they gain more ground," Hepner said.
[14]
New Law Would Ban All AI Regulation for a Decade
Republican lawmakers slipped language into the Budget Reconciliation Bill this week that would ban state and local AI regulation for a decade, as 404 Media reports. An updated version of the bill introduced last night by Congressman Brett Guthrie (R-KY), who chairs the House Committee on Energy and Commerce, includes a new and sweeping clause about AI advancement declaring that "no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the ten year period beginning on the date of the enactment of this Act." It's a remarkably expansive provision that, as 404 notes, likely reflects the entrenchment of Silicon Valley figures and influence in Washington and the White House. Tech CEOs have vied for President Donald Trump's attention since he was inaugurated, and the American tech industry writ large has become a fierce and powerful lobbying force. The Trump administration is also stacked with AI-invested tech moguls like David Sacks, Marc Andreessen, and Elon Musk. Meanwhile, the impacts of a regulation-free AI landscape are already being felt. Emotive, addictive AI companions have been rolled out explicitly to teenagers without evidence of safety, AI companies are missing their climate targets and spewing unchecked emissions into American neighborhoods, and nonconsensual deepfakes of women and girls are flooding social media. No regulation will likely mean a lot more fresh hell where that came from -- and little chance of stemming the tide. The update in the proposed law also seeks to appropriate a staggering $500 million over ten years to fund efforts to infuse the federal government's IT systems with "commercial" AI tech and unnamed "automation technologies."
In other words, not only does the government want to completely stifle efforts to regulate a fast-developing technology, it also wants to integrate those unregulated technologies into the beating digital heart of the federal government. The bill also comes after states including New York and California have worked to pass some limited AI regulations, as 404 notes. Were the bill to be signed into law, it would seemingly render those laws -- which, for instance, ensure that employers review AI hiring tools for bias -- unenforceable. As it stands, the bill is in limbo. The proposal is massive, and includes drastic spending cuts to services like Medicaid and climate funds, slashes that Democrats largely oppose; Republican budget hawks, meanwhile, have raised concerns over the bill's hefty price tag. Whether it survives in its current form -- its controversial AI provisions included -- remains to be seen.
[15]
Trump's 'big, beautiful bill' could sideline state AI protections for a decade
A few lines of text in a sweeping new bill moving through Congress could have major implications for the next decade of artificial intelligence. Trump is pushing Republicans in Congress to pass "one, big beautiful bill," which hinges on deep cuts to popular federal assistance programs like Medicaid and SNAP to drum up hundreds of billions of dollars for tax cuts and defense spending. Among the bill's other controversies, it could stop states from enforcing any laws that regulate AI for the next 10 years. "No state . . . may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act," the bill stipulates. The proposal to hamstring states' regulatory power popped up in the House Energy and Commerce Committee's portion of the massive budget reconciliation mega-bill.
[17]
House Republicans include a 10-year ban on US states regulating AI in 'big, beautiful' bill
[18]
House GOP proposes 10-year ban on state AI regulations
A Republican tax bill released late Sunday seeks to block states from regulating artificial intelligence (AI) models for the next 10 years. The bill text from the House Energy and Commerce Committee would bar states from enforcing laws or regulations governing AI models, AI systems or automated decision systems.

It provides some exemptions for laws and regulations that aim to "remove legal impediments" or "facilitate the deployment or operation" of AI systems, as well as those that seek to "streamline licensing, permitting, routing, zoning, procurement, or reporting procedures." It would also permit state laws that do not "impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement" on AI systems.

The bill, which comes as Republicans gear up to advance President Trump's legislative agenda this week, aligns with the administration's emphasis on AI innovation instead of regulation. Shortly after taking office, Trump rescinded former President Biden's executive order establishing guardrails around AI and fielded input on his own forthcoming "AI Action Plan." Vice President Vance also slammed "excessive regulation" of AI during his first international trip in February. "We believe that excessive regulation of the AI sector could kill a transformative industry just as it's taking off," Vance said at the AI Action Summit in Paris. "And I'd like to see that deregulatory flavor making its way into a lot of the conversations this conference."

Meanwhile, as AI regulation at the federal level remains in limbo, states have moved in and sought to develop laws around the rapidly developing technology. State legislatures considered nearly 700 AI bills last year, 113 of which were ultimately enacted into law, according to the Business Software Alliance. California launched a controversial effort last year to regulate extreme risks from AI that faced pushback from federal lawmakers.
Senate Bill 1047, which sought to require powerful AI models to undergo safety testing before they could be released and hold developers liable for severe harms, was ultimately vetoed by California Gov. Gavin Newsom (D).
[19]
Tech safety groups slam House GOP proposal for 10-year ban on state AI regulation
A host of tech safety groups and at least one Democrat are blasting House Republicans' proposal to block states from regulating artificial intelligence (AI) models for the next 10 years, arguing consumers will be less protected.

Rep. Jan Schakowsky (D-Ill.), the ranking member on the Commerce, Manufacturing and Trade Subcommittee, said on Monday the proposal is a "giant gift to Big Tech." "The Republicans' 10-year ban on the enforcement of state laws protecting consumers from potential dangers of new artificial intelligence systems gives Big Tech free reign to take advantage of children and families," she wrote, adding the proposal "shows that Republicans care more about profits than people."

The Republican tax bill, released by the House Energy and Commerce Committee on Sunday night, proposes barring states from enforcing laws or regulations governing AI models, AI systems or automated decision systems. The proposal includes some exemptions for laws that intend to "remove legal impediments" or "facilitate the deployment or operation" of AI systems, as well as those that seek to "streamline licensing, permitting, routing, zoning, procurement or reporting procedures." State laws that do not "impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement" on AI systems would also be allowed under the proposal.

Schakowsky claimed the proposal would give AI developers a green light to "ignore consumer privacy protections," let AI-generated deepfakes spread, and "profile and deceive" consumers.

The bill underscores the Trump administration's focus on AI innovation and acceleration over regulation. President Trump has rolled back various Biden-era AI policies that placed guardrails on AI developers, arguing these were obstacles to the fast development of AI.

The Tech Oversight Project, a nonprofit tech watchdog group, pointed out that Congress has failed to pass most AI-related legislation, prompting action at the state level. "The so-called 'state's rights' party is trying to slip a provision into the reconciliation package that will kneecap states' ability to protect people and children from proven AI harms and scams. It's not only hypocritical, it's a massive handout to Big Tech," Tech Oversight Project Executive Director Sacha Haworth said. "While Congress has struggled to establish AI safeguards, states are leading the charge in tackling AI's worst use cases, and it comes as no surprise that Big Tech is trying to stop that effort dead in its tracks," Haworth added.

It comes amid a broader debate over federal preemption for AI regulation, which several AI industry heads have pushed for as state laws create a patchwork of rules to follow. Last week, OpenAI CEO Sam Altman testified before Congress, where he expressed support for "one federal framework" and expressed concerns about a "burdensome" state-by-state approach.

The Open Markets Institute, a DC-based think tank advocating against monopolies, called it a "stunning assault on state sovereignty." "This is the broligarchy in action: billionaires and lobbyists writing the laws to lock in their dominance, at the direct expense of democratic oversight, with no new rules, no obligations, and no accountability allowed. This is not innovation protection -- it's a corporate coup," wrote Courtney C. Radsch, director of the Center for Journalism and Liberty at the Open Markets Institute.

U.S. states considered nearly 700 legislative proposals last year, according to an analysis from the Business Software Alliance. Nonprofit Consumer Reports also came out against the proposal, pointing to the potential dangers of AI, such as sexually explicit deepfakes. "This incredibly broad preemption would prevent states from taking action to deal with all sorts of harms, from non-consensual intimate AI images, audio, and video, to AI-driven threats to critical infrastructure or market manipulation," said Grace Geyde, a policy analyst for Consumer Reports, "to protecting AI whistleblowers, to assessing high-risk AI decision-making systems for bias or other errors, to simply requiring AI chatbots to disclose that they aren't human."

Energy and Commerce Committee Chair Brett Guthrie (R-Ky.) defended the committee's reconciliation proposal later Monday. "This reconciliation is a win for Americans in every part of the country, and it's a shame Democrats are intentionally reflexively opposing commonsense policies to strengthen the program," he wrote.

Meanwhile, some tech industry groups celebrated the proposal. NetChoice, the trade association representing some of the largest tech firms in the world like Google, Amazon and Meta, said the "commendable" proposal will help America "stay first in the research and development" of emerging tech. "America can't lead the world in new technologies like AI if we tie the hands of innovators with overwhelming red tape before they can even get off the ground," said NetChoice Director of Policy Pat Hedger.
[20]
AI Regulation Ban Meets Opposition From State Attorneys General Over Risks to US Consumers
(Reuters) - A Republican proposal to block states from regulating artificial intelligence for 10 years drew opposition on Friday from a bipartisan group of attorneys general in California, New York, Ohio and other states that have regulated high-risk uses of the technology. The measure included in President Donald Trump's tax cut bill would preempt AI laws and regulations passed recently in dozens of states.

A group of 40 state attorneys general, including Republicans from Ohio, Tennessee, Arkansas, Utah, Virginia and other states, urged Congress to ditch the measure on Friday, as the U.S. House of Representatives' budget committee geared up for a Sunday night hearing. "Imposing a broad moratorium on all state action, while Congress fails to act in this area is irresponsible and deprives consumers of reasonable protections," said the group.

The attorney general from California -- which is home to prominent AI companies, including OpenAI, Alphabet, Meta Platforms and Anthropic -- was among the Democrats who signed the letter. "I strongly oppose any effort to block states from developing and enforcing common-sense regulation; states must be able to protect their residents by responding to emerging and evolving AI technology," Attorney General Rob Bonta said.

California implemented a raft of bills this year limiting specific uses of AI, illustrating the kind of laws that would be blocked under the moratorium. Like several other states, California has criminalized the use of AI to generate sexually explicit images of individuals without their consent. The state also prohibits unauthorized deepfakes in political advertising, and requires healthcare providers to notify patients when they are interacting with an AI and not a human. Healthcare provider networks, also known as HMOs, are barred in California from using AI systems instead of doctors to decide medical necessity.

House Republicans said in a hearing Tuesday that the measure was necessary to help the federal government in implementing AI, for which the package allocates $500 million. "It's nonsensical to do that if we're going to allow 1,000 different pending bills in state legislatures across the country to become law," said Jay Obernolte, a Republican from California who represents part of Silicon Valley, including Mountain View where Google is based. "It would be impossible for any agency that operates in all the states to be able to comply with those regulations," he said.

Google has called the proposed moratorium "an important first step to both protect national security and ensure continued American AI leadership." That position will be tested if the measure makes it to the Senate. It will need to clear the budget reconciliation process, which is supposed to be used only for budget-related legislation.

(Reporting by Jody Godoy in New York; Editing by Aurora Ellis)
[21]
House Republicans Include a 10-Year Ban on US States Regulating AI in 'Big, Beautiful' Bill
[22]
House Republicans Include a 10-Year Ban on U.S. States Regulating AI in 'Big, Beautiful' Bill
[23]
US states oppose AI regulation ban in Trump tax bill
Attorneys general from 40 US states urged Congress to reject a 10-year ban on AI regulation proposed in Trump's tax bill. They warn it would undermine existing state laws protecting against AI harms like deepfakes and bias. The bill also faces setbacks amid Republican divisions in Congress.

A mix of Democratic and Republican state attorneys general on Friday called on Congress to reject a moratorium on regulating artificial intelligence included in US President Donald Trump's tax bill. Top attorneys representing 40 states signed a letter urging leaders in Congress to reject the AI regulation moratorium language added to the budget reconciliation bill. "The impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI," the letter states. "This bill will affect hundreds of existing and pending state laws passed and considered by both Republican and Democratic state legislatures."

The amendment added by the House Energy and Commerce Committee to the budget reconciliation bill imposes a 10-year prohibition on states from enforcing any state regulation addressing AI or "automated decision-making systems," according to the state attorneys general. "The amendment added to the reconciliation bill abdicates federal leadership and mandates that all states abandon their leadership in this area as well," the state attorneys general wrote. "This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI."

The letter notes that states have put in place laws designed to protect against AI-generated porn, deepfakes intended to mislead voters, and spam calls or text messages. Some state laws have also been crafted to prevent biases in AI models. "These laws and their regulations have been developed over years through careful consideration and extensive stakeholder input from consumers, industry, and advocates," the letter read.

Republican fiscal hawks on Friday sank a key vote on advancing the mega-bill that is the centerpiece of Trump's domestic agenda, in a significant setback for the US president's tax and spending policies. Trump is pushing to usher into law his so-called "One Big, Beautiful Bill," pairing an extension of his first-term tax cuts with savings that will see millions of the poorest Americans lose their health care coverage. But a congressional Republican Party rife with divisions and competition within its rank-and-file has complicated the process, raising serious doubts that the sprawling package can pass a vote of the full House of Representatives next week. The budget committee's no vote is not the final word on the package, which will be reworked and sent back to the panel for more debate starting 10:00 pm on Sunday (0200 GMT Monday) and a fresh vote.
[24]
Donald Trump-Backed GOP Bill Seeks To Ban States From Regulating AI For 10 Years -- Critics Call It 'Federal Overreach' That Could Backfire
A new House Republican-backed provision in the Donald Trump-backed "big, beautiful" bill reportedly aims to block state and local governments from regulating artificial intelligence for the next decade -- and it's already sparking fierce backlash. What Happened: Tucked into a sweeping GOP tax package, the clause states that "no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems," reported Fortune. Supporters argue it's about avoiding regulatory chaos. "AI doesn't understand state borders," said Sen. Bernie Moreno (R-Ohio). "You can't have a patchwork of 50 states." Previously, OpenAI CEO Sam Altman also appeared to echo this sentiment. "One federal framework, that is light touch, that we can understand and that lets us move with the speed that this moment calls for seems important and fine," he said during a Senate hearing last week. However, critics warn the bill would halt important local protections. California state Sen. Scott Wiener called the proposal "truly gross," writing on social media, "Congress is incapable of meaningful AI regulation... while also banning states from acting." A bipartisan group of state attorneys general also voiced concern. "Instead of stepping up with real solutions, Congress wants to tie our hands," said South Carolina Attorney General Alan Wilson, a Republican, the report noted. Why It's Important: On Friday, House Republicans failed to advance Trump's massive tax and spending bill, revealing internal disagreements, reported BBC. 
Despite Trump's plea on social media for lawmakers to "stop talking and get it done," some Republicans opposed the bill, arguing it didn't go far enough in reducing government spending. Sen. Bernie Sanders (I-Vt.) also previously slammed the budget reconciliation bill, calling it a "death sentence" for millions of Americans.
[25]
House Committee Aims to Ban States From Regulating AI | PYMNTS.com
A House committee is reportedly trying to add language to President Donald Trump's tax and spending bill that would prevent states from regulating artificial intelligence. The House Energy and Commerce Committee drafted the bill and will debate it Tuesday (May 13), Bloomberg reported Monday (May 12), adding that the draft bill would place a 10-year moratorium on state regulation in the AI field. The language is unlikely to be included in the tax bill, though, because the special parliamentary procedure being used to move the bill through Congress requires that provisions be primarily fiscal, according to the report. Still, the move will show where key Republicans stand on the matter of AI regulation, the report said, adding that tech executives have encouraged Congress to pass federal legislation that would prohibit states from creating their own rules around AI. Tech executives have said that they would have difficulty dealing with a variety of state standards, according to the report. Proponents of regulation at the state level have said that state lawmakers should be free to pass laws that would promote AI safety and prevent the misuse of the technology, per the report. When several tech giants, AI startups and financial institutions weighed in on the White House's proposed AI Action Plan, one recurring theme was a desire for regulatory consistency to unify the patchwork of state laws, PYMNTS reported in April. In comments submitted by companies and released April 24 by the federal government, Meta warned that fragmented state-level rules would raise costs and stifle innovation, Uber urged federal preemption to eliminate the growing patchwork of inconsistent state AI laws, and J.P. Morgan Chase echoed the concerns of others about a patchwork of state laws and called for the federal government to preempt state laws. Colorado passed a sweeping AI law last year, set to go into effect in February 2026. 
It has faced backlash from industry groups saying it is too "rigid and vague" and from consumer advocates who believe it doesn't go far enough. Last week, government leaders called for a delay of the law's implementation until January 2027.
[26]
AI regulation ban meets opposition from state attorneys general over risks to US consumers
(Reuters) -A Republican proposal to block states from regulating artificial intelligence for 10 years drew opposition on Friday from a bipartisan group of attorneys general in California, New York, Ohio and other states that have regulated high-risk uses of the technology. The measure included in President Donald Trump's tax cut bill would preempt AI laws and regulations passed recently in dozens of states. A group of 40 state attorneys general, including Republicans from Ohio, Tennessee, Arkansas, Utah and Virginia and other states, urged Congress to ditch the measure on Friday, as the U.S. House of Representatives' budget committee geared up for a Sunday night hearing. "Imposing a broad moratorium on all state action, while Congress fails to act in this area is irresponsible and deprives consumers of reasonable protections," said the group. The attorney general from California -- which is home to prominent AI companies, including OpenAI, Alphabet, Meta Platforms and Anthropic -- was among the Democrats who signed the letter. "I strongly oppose any effort to block states from developing and enforcing common-sense regulation; states must be able to protect their residents by responding to emerging and evolving AI technology," Attorney General Rob Bonta said. California implemented a raft of bills this year limiting specific uses of AI, illustrating the kind of laws that would be blocked under the moratorium. Like several other states, California has criminalized the use of AI to generate sexually explicit images of individuals without their consent. The state also prohibits unauthorized deepfakes in political advertising, and requires healthcare providers to notify patients when they are interacting with an AI and not a human. Healthcare provider networks, also known as HMOs, are barred in California from using AI systems instead of doctors to decide medical necessity. 
House Republicans said in a hearing Tuesday that the measure was necessary to help the federal government implement AI, for which the package allocates $500 million. "It's nonsensical to do that if we're going to allow 1,000 different pending bills in state legislatures across the country to become law," said Jay Obernolte, a Republican from California who represents part of Silicon Valley, including Mountain View where Google is based. "It would be impossible for any agency that operates in all the states to be able to comply with those regulations," he said. Google has called the proposed moratorium "an important first step to both protect national security and ensure continued American AI leadership." That position will be tested if the measure makes it to the Senate. It will need to clear the budget reconciliation process, which is supposed to be used only for budget-related legislation. (Reporting by Jody Godoy in New York; Editing by Aurora Ellis)
House Republicans have added a provision to the Budget Reconciliation bill that would prevent state and local governments from regulating AI for a decade, sparking debate over federal versus state control of AI oversight.
House Republicans have introduced a controversial provision to the Budget Reconciliation bill that would block all state and local governments from regulating artificial intelligence (AI) for 10 years 1. The measure, proposed by Representative Brett Guthrie of Kentucky, has ignited a fierce debate over the appropriate level of government oversight for rapidly advancing AI technologies.
The provision's broad language would prevent states from enforcing both existing and future laws designed to protect citizens from potential AI-related harms. This could affect a range of state-level regulations, including California's law requiring health care providers to disclose when they use generative AI to communicate with patients, New York's 2021 law mandating bias audits for AI tools used in hiring decisions, and California legislation set to take effect in 2026 that requires AI developers to publicly document the data used to train their models 1.
The ban could also restrict how states allocate federal funding for AI programs, potentially limiting their ability to pursue AI initiatives that diverge from federal priorities 1.
AI developers and some lawmakers argue that federal action is necessary to prevent a patchwork of state regulations that could impede technological progress. Alexandr Wang, CEO of Scale AI, emphasized the need for "one clear federal standard" to avoid conflicting requirements across states 2.
OpenAI CEO Sam Altman has expressed concerns about overly restrictive regulations, suggesting that industry self-regulation might be preferable to government-imposed rules 3. This stance aligns with the broader push for federal preemption of state AI laws.
The proposal has faced significant backlash from consumer advocacy groups and some Democratic lawmakers. Rep. Jan Schakowsky (D-Ill.) called it a "giant gift to Big Tech," while organizations like the Tech Oversight Project and Consumer Reports warned it could leave consumers unprotected from AI-related risks such as deepfakes and algorithmic bias 15.
Critics argue that the ban could leave consumers without protection from harms such as AI-generated explicit imagery, deepfakes intended to mislead voters, biased automated decision-making, and AI-driven spam calls and texts, with no federal regulatory scheme proposed to replace the state laws it would nullify.
Prior to this proposal, states had been actively developing AI-related legislation. California criminalized the use of AI to generate sexually explicit images of individuals without their consent and prohibited unauthorized deepfakes in political advertising, while Colorado passed a sweeping AI law last year that is set to take effect in February 2026.
The provision's inclusion in the Budget Reconciliation bill could fast-track its passage, as it would only require a simple majority in the Senate rather than the usual 60 votes 5. However, the bill still needs approval from both chambers of Congress and President Trump's signature to become law.
As the debate unfolds, stakeholders from tech companies, consumer advocacy groups, and state governments are likely to intensify their efforts to shape the future of AI regulation in the United States.
Summarized by Navi