15 Sources
[1]
Ted Cruz can't get all Republicans to back his fight against state AI laws
A Republican proposal to penalize states that regulate artificial intelligence can move forward without requiring approval from 60 senators, the Senate parliamentarian decided on Saturday. But the moratorium on state AI laws did not have unanimous Republican support and has reportedly been watered down in an effort to push it toward passage. In early June, Sen. Ted Cruz (R-Texas) proposed enforcing a 10-year moratorium on AI regulation by making states ineligible for broadband funding if they try to impose any limits on development of artificial intelligence. While the House previously approved a version of the so-called "One Big Beautiful Bill" with an outright 10-year ban on state AI regulation, Cruz took a different approach because of the Senate rule that limits inclusion of "extraneous matter" in budget reconciliation legislation. Under the Senate's Byrd rule, a senator can object to a potentially extraneous budget provision. A motion to waive the Byrd rule requires a vote of 60 percent of the Senate. As originally drafted, Cruz's backdoor ban on state AI laws would have made it impossible for states to receive money from the $42 billion Broadband Equity, Access, and Deployment (BEAD) program if they try to regulate AI. He tied the provision into the budget bill by proposing an extra $500 million for the broadband-deployment grant program and expanding its purpose to also subsidize construction and deployment of infrastructure for artificial intelligence systems. Punchbowl News reported today that Cruz made changes in order to gain more Republican support and comply with Senate procedural rules. Cruz was quoted as saying that under his current version, states that regulate AI would only be shut out of the $500 million AI fund. This would seem to protect states' access to the $42 billion broadband deployment fund that will offer subsidies to ISPs that expand access to Internet service. Losing that funding would be a major blow to states that have spent the last couple of years developing plans to connect more of their residents to modern broadband. The latest Senate bill text was not available today. We contacted Cruz's office and will update this article if we get a response.
Plan has opponents from both parties
Senate Parliamentarian Elizabeth MacDonough ruled that several parts of the Republican budget bill are subject to the Byrd rule and its 60-vote requirement, but Cruz's AI proposal wasn't one of them. A press release from Senate Budget Committee Ranking Member Jeff Merkley (D-Ore.) noted that "the parliamentarian's advice is based on whether a provision is appropriate for reconciliation and conforms to the limitations of the Byrd rule; it is not a judgement on the relative merits of a particular policy." Surviving the parliamentarian review doesn't guarantee passage. A Bloomberg article said the parliamentarian's decision is "a win for tech companies pushing to stall and override dozens of AI safety laws across the country," but that the "provision will likely still be challenged on the Senate floor, where stripping the provision would need just a simple majority. Some Republicans in both the House and Senate have pushed back on the AI provision." Republicans have a 53-47 edge in the Senate. Sens. Maria Cantwell (D-Wash.) and Marsha Blackburn (R-Tenn.) teamed up for a press conference last week in which they spoke out against the proposed moratorium on state regulation.
Cantwell said that 24 states last year started "regulating AI in some way, and they have adopted these laws that fill a gap while we are waiting for federal action. Now Congress is threatening these laws, which will leave hundreds of millions of Americans vulnerable to AI harm by abolishing those state law protections." Blackburn said she agreed with Cantwell that the AI regulation proposal "is not the type of thing that we put into reconciliation bills." Blackburn added that lawmakers "are working to move forward with legislation at the federal level, but we do not need a moratorium that would prohibit our states from stepping up and protecting citizens in their state." Sens. Ron Johnson (R-Wis.) and Josh Hawley (R-Mo.) have also criticized the idea of stopping states from regulating AI.
Cruz accused states of "strangling AI"
Cruz argued that his proposal stops states "from strangling AI deployment with EU-style regulation." Under his first proposal, no BEAD funds were to be given to any state or territory that enforces "any law or regulation... limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce." The Cantwell/Blackburn press conference also included Washington Attorney General Nick Brown, a Democrat; and Tennessee Attorney General Jonathan Skrmetti, a Republican. Brown said that "Washington has a law that prohibits deep fakes being used against political candidates by mimicking their appearance and their speech," another "that prohibits sharing fabricated sexual images without consent and provides for penalties for those who possess and distribute such images," and a third "that prohibits the knowing distribution of forged digital likenesses that can be used to harm or defraud people." "All of those laws, in my reading, would be invalid if this was to pass through Congress, and each of those laws are prohibiting and protecting people here in our state," Brown said. Skrmetti said that if the Senate proposal becomes law "there would be arguments out there for the big tech companies that the moratorium does, in fact, preclude any enforcement of any consumer protection laws if there's an AI component to the product that we're looking at."
Other Republican plans fail Byrd rule test
Senate Democrats said they are pleased that the parliamentarian ruled that several other parts of the bill are subject to the Byrd rule. "We continue to see Republicans' blatant disregard for the rules of reconciliation when drafting this bill... Democrats plan to challenge every part of this bill that hurts working families and violates this process," Merkley said. Merkley's press release said the provisions that are subject to a 60-vote threshold include one that "limits certain grant funding for 'sanctuary cities,' and where the Attorney General disagrees with states' and localities' immigration enforcement," and another that "gives state and local officials the authority to arrest any noncitizen suspected of being in the US unlawfully."
The Byrd rule also applies to a section that "limits the ability of federal courts to issue preliminary injunctions or temporary restraining orders against the federal government by requiring litigants to post a potentially enormous bond," and another that "limits when the federal government can enter into or enforce settlement agreements that provide for payments to third parties to fully compensate victims, remedy harm, and punish and deter future violations," Merkley's office said. The office of Senate Democratic Leader Chuck Schumer (D-N.Y.) said yesterday that the provision requiring litigants to post bonds has been struck from the legislation. "This Senate Republican provision, which was even worse than the similar House-passed version, required a plaintiff seeking an emergency court order, preliminary injunction, or a temporary restraining order against the Trump Administration or the federal government to pay a costly bond up front -- essentially making the justice system pay-to-play," Schumer's office said. Schumer said that "if enacted, this would have been one of the most brazen power grabs we've seen in American history -- an attempt to let a future President Trump ignore court orders with impunity, putting him above the law."
[2]
Moratorium on state AI regulation clears Senate hurdle | TechCrunch
A Republican effort to prevent states from enforcing their own AI regulations cleared a key procedural hurdle on Saturday. The rule, as reportedly rewritten by Senate Commerce Chair Ted Cruz in an attempt to comply with budgetary rules, would withhold federal broadband funding from states if they try to enforce AI regulations in the next 10 years. And the rewrite seems to have passed muster, with the Senate Parliamentarian now ruling that the provision is not subject to the so-called Byrd rule -- so it can be included in Republicans' "One Big, Beautiful Bill" and passed with a simple majority, without potentially getting blocked by a filibuster, and without requiring support from Senate Democrats. However, it's not clear how many Republicans will support the moratorium. For example, Republican Senator Marsha Blackburn of Tennessee recently said, "We do not need a moratorium that would prohibit our states from stepping up and protecting citizens in their state." And while the House of Representatives already passed a version of the bill that included a moratorium on AI regulation, far-right Representative Marjorie Taylor Greene subsequently declared that she is "adamantly OPPOSED" to the provision as "a violation of state rights" and said it needs to be "stripped out in the Senate." House Speaker Mike Johnson defended the provision by saying it had President Donald Trump's support and arguing, "We have to be careful not to have 50 different states regulating AI, because it has national security implications, right?" In a recent report, Americans for Responsible Innovation (an advocacy group for AI regulation) wrote that "the proposal's broad language could potentially sweep away a wide range of public interest state legislation regulating AI and other algorithmic-based technologies, creating a regulatory vacuum across multiple technology policy domains without offering federal alternatives to replace the eliminated state-level guardrails." A number of states do seem to be taking steps toward AI regulation. In California, Governor Gavin Newsom vetoed a high-profile AI safety bill last year while signing a number of less controversial regulations around issues like privacy and deepfakes. In New York, an AI safety bill passed by state lawmakers is awaiting Governor Kathy Hochul's signature. And Utah has passed its own regulations around AI transparency.
[3]
A Federal Moratorium on State AI Rules Is Inching Closer to Passing. Why It Matters
States and local governments would be limited in how they can regulate artificial intelligence under a proposal currently before Congress. AI leaders say the move would ensure the US can lead in innovation, but critics say it could lead to fewer consumer protections for the fast-growing technology. The proposal, as passed by the House of Representatives, says no state or political subdivision "may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems or automated decision systems" for 10 years. In May, the House added it to the full budget bill, which also includes the extension of the 2017 federal tax cuts and cuts to services like Medicaid and SNAP. The Senate has made some changes, namely that the moratorium would only be required for states that accept funding as part of the $42.5 billion Broadband Equity, Access, and Deployment program. AI developers and some lawmakers have said federal action is necessary to keep states from creating a patchwork of different rules and regulations across the US that could slow the technology's growth. The rapid growth in generative AI since OpenAI's ChatGPT exploded on the scene in late 2022 has led companies to wedge the technology in as many spaces as possible. The economic implications are significant, as the US and China race to see which country's tech will predominate, but generative AI poses privacy, transparency and other risks for consumers that lawmakers have sought to temper. "[Congress has] not done any meaningful protective legislation for consumers in many, many years," Ben Winters, director of AI and privacy at the Consumer Federation of America, told me. "If the federal government is failing to act and then they say no one else can act, that's only benefiting the tech companies." Efforts to limit the ability of states to regulate artificial intelligence could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. "There have been a lot of discussions at the state level, and I would think that it's important for us to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI. "We could approach it at the national level. We can approach it at the state level, too. I think we need both." The proposed language would bar states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations are already starting to pop up. The biggest focus is not in the US, but in Europe, where the European Union has already implemented standards for AI. But states are starting to get in on the action. Colorado passed a set of consumer protections last year, set to go into effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that often deal with specific issues such as deepfakes or require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination if AI systems are used in hiring. "States are all over the map when it comes to what they want to regulate in AI," said Arsen Kourinian, a partner at the law firm Mayer Brown.
So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. In the House committee hearing last month, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. "We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead," he said. While some states have laws on the books, not all of them have gone into effect or seen any enforcement. That limits the potential short-term impact of a moratorium, said Cobun Zweifel-Keegan, managing director in Washington for the International Association of Privacy Professionals. "There isn't really any enforcement yet." A moratorium would likely deter state legislators and policymakers from developing and proposing new regulations, Zweifel-Keegan said. "The federal government would become the primary and potentially sole regulator around AI systems," he said. AI developers have asked for any guardrails placed on their work to be consistent and streamlined. "We need, as an industry and as a country, one clear federal standard, whatever it may be," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April hearing. "But we need one, we need clarity as to one federal standard and have preemption to prevent this outcome where you have 50 different standards." During a Senate Commerce Committee hearing in May, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system "would be disastrous" for the industry. Altman suggested instead that the industry develop its own standards. Asked by Sen. Brian Schatz, a Democrat from Hawaii, if industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good, but, "It's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences." (Disclosure: Ziff Davis, parent company of CNET, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) Not all AI companies are backing a moratorium, however. In a New York Times op-ed, Anthropic CEO Dario Amodei called it "far too blunt an instrument," saying the federal government should create transparency standards for AI companies instead. "Having this national transparency standard would help not only the public but also Congress understand how the technology is developing, so that lawmakers can decide whether further government action is needed." Concerns from companies, both the developers that create AI systems and the "deployers" who use them in interactions with consumers, often stem from fears that states will mandate significant work such as impact assessments or transparency notices before a product is released, Kourinian said. Consumer advocates have said more regulations are needed, and hampering the ability of states could hurt the privacy and safety of users. A moratorium on specific state rules and laws could result in more consumer protection issues being dealt with in court or by state attorneys general, Kourinian said. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. "Time will tell how judges will interpret those issues," he said. 
Susarla said the pervasiveness of AI across industries means states might be able to regulate issues such as privacy and transparency more broadly, without focusing on the technology. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. "It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand, we also need to recognize that there can be real consequences," she said. Much policy around the governance of AI systems does happen because of those so-called technology-agnostic rules and laws, Zweifel-Keegan said. "It's worth also remembering that there are a lot of existing laws and there is a potential to make new laws that don't trigger the moratorium but do apply to AI systems as long as they apply to other systems," he said. With the bill now in the hands of the US Senate -- and with more people becoming aware of the proposal -- debate over the moratorium has picked up. The proposal did clear a significant procedural hurdle, with the Senate parliamentarian ruling that it does pass the so-called Byrd rule, which states that proposals included in a budget reconciliation package have to actually deal with the federal budget. The move to tie the moratorium to states accepting BEAD funding likely helped, Winters told me. Whether it passes in its current form is now less a procedural question than a political one, Winters said. Senators of both parties, including Republican Sens. Josh Hawley and Marsha Blackburn, have voiced their concerns about tying the hands of states. "I do think there's a strong open question about whether it would be passed as currently written, even though it wasn't procedurally taken away," Winters said. Whatever bill the Senate approves will then also have to be accepted by the House, where it passed by the narrowest of margins. Even some House members who voted for the bill have said they don't like the moratorium, namely Rep. Marjorie Taylor Greene, a key ally of President Donald Trump. The Georgia Republican posted on X this week that she is "adamantly OPPOSED" to the moratorium and that she would not vote for the bill with the moratorium included. At the state level, a letter signed by 40 state attorneys general -- of both parties -- called for Congress to reject the moratorium and instead create that broader regulatory system. "This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI," they wrote.
[4]
Senate Can Keep Ban on State AI Rules in Trump Tax Bill
A Republican effort to block US states from enforcing new artificial intelligence regulations will remain in President Donald Trump's massive tax and spending package for now, marking a win for tech companies pushing to stall and override dozens of AI safety laws across the country. In a surprise decision, Democrats said the Senate parliamentarian ruled the provision aligns with the special budgetary process Republicans are using to consider the tax package. That process allows the GOP to avoid making concessions to Democrats, who otherwise could filibuster legislation.
[5]
Big Tech pushes for 10-year ban on US states regulating AI
Big Tech companies are backing a lobbying campaign to pass a 10-year ban on US states regulating artificial intelligence models, in a controversial move that has split the AI industry and Donald Trump's Republican party. Lobbyists acting on behalf of Amazon, Google, Microsoft and Meta are urging the Senate to enact a decade-long moratorium on individual states introducing their own efforts to legislate AI, according to people familiar with the moves. The provision was passed as part of the US House of Representatives' version of President Donald Trump's "one big, beautiful" budget bill last month. The Senate hopes to unveil its version as soon as this week in the hope of passing the legislation by July 4. Chip Pickering, a former congressman and the chief executive of INCOMPAS, has advocated for the proposal on behalf of his tech trade association's members, which include leading companies like Microsoft, Amazon, Meta and Google, as well as smaller data, energy and infrastructure companies and law firms. "This is the right policy at the right time for American leadership," Pickering told the Financial Times. "But it's equally important in the race against China." Trade group INCOMPAS started the AI Competition Center (AICC) in 2024 to lobby legislators and regulators. At the start of the year, Amazon's cloud division and Meta joined the AICC subgroup as debates over AI rules heated up and the EU introduced a series of measures to control the sector. Google parent Alphabet and Meta declined to comment. Microsoft and Amazon did not respond to requests for comment. Critics say Big Tech's stance is about ensuring their dominance in the race to build artificial general intelligence, generally understood as models that surpass human abilities in most areas. "Responsible innovation shouldn't fear laws that ban irresponsible practices," said Asad Ramzanali, director of AI and tech policy at the Vanderbilt Policy Accelerator at Vanderbilt University. "[It's] a power grab by tech bro-ligarchs attempting to concentrate yet more wealth and power," said Max Tegmark, an MIT professor and president of the Future of Life Institute, a non-profit that campaigns for AI regulation. The proposed moratorium has also divided the tech sector and Republican politicians, who have raised concerns about banning states from overseeing the powerful technology that has the potential to cause social and economic upheaval. Proponents argue the provision is necessary to prevent a raft of inconsistent regional rules that could stifle innovation and cause the US to lose ground to China. OpenAI chief executive Sam Altman said in a Senate hearing last month that it would be "disastrous" for the US to insist on technology companies meeting certain criteria, such as on transparency and safety, before launch, as could soon be the case in the European Union under its new AI Act. AI safety campaigners, such as Anthropic co-founder Dario Amodei, have warned that relying on self-regulation could have disastrous societal consequences as Silicon Valley competes to release ever more powerful models. Republicans pushing to include the proposal are now trying to figure out whether it complies with the Senate's arcane rules, which mandate that every provision must have a budgetary impact for it to be included in a so-called "budget reconciliation" bill. The party is using the tactic so they can pass the bill without Democratic votes.
Ted Cruz, the top Republican on the Senate commerce committee, has proposed a workaround: states that don't comply with the provision would be ineligible for billions in federal funding to expand broadband networks to underserved rural areas. However, there remains little political consensus on how to oversee the fast-moving field and no meaningful federal regulations on testing or data protection have been passed so far. "You don't want the number one country in the world for innovation to fall behind on AI," Republican senator Thom Tillis said in an interview. "If all of a sudden you've got 50 different regulatory or legal frameworks, how can anybody in their right mind not understand that that's going to be an impediment?" "I don't like doing something that starts restricting states' abilities," said Republican senator Steve Daines. "But there may be some wisdom here, given that it could lead to a patchwork nature of regulation with AI that could hinder and slow down the United States." Other Republican senators like Josh Hawley, author of a book called The Tyranny of Big Tech, and Marsha Blackburn, who supports a Tennessee law defending her state's music industry from unauthorised AI use, oppose the moratorium. "We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous," House Republican Marjorie Taylor Greene posted on X. "This needs to be stripped out in the Senate."
[6]
Senate upholds ban on State AI laws in Trump's budget bill
As Republicans in Congress push to pass Donald Trump's so-called "Big Beautiful Bill" by July 4, the Senate parliamentarian has been quietly reviewing its sweeping provisions. While some controversial items -- like proposed SNAP spending cuts -- have been removed, one surprising element has remained intact: a 10-year moratorium blocking states from regulating artificial intelligence. According to Senate Budget Committee Democrats, the Senate parliamentarian said that the AI moratorium complies with the special budgetary rules Republicans are using to advance the bill. If passed, the measure would bar states from enforcing new AI regulations for a decade and would penalize those that do by withholding federal broadband funding. The provision isn't final yet -- it can still be challenged on the Senate floor and only needs a simple majority vote to be struck from the bill. Still, it has drawn bipartisan backlash. Georgia Rep. Marjorie Taylor Greene, notably, admitted she hadn't read the bill and said she would have voted against it had she known the provision was included. If the spending package passes as is, it would mark a major win for tech companies eager to avoid a fragmented landscape of state-level AI laws. With no sweeping federal AI framework in place, states have been left to navigate a complex mix of data privacy, copyright, and algorithmic governance issues on their own. This moratorium would effectively freeze those efforts in their tracks. So far, 47 out of 50 US states have either enacted or proposed some form of AI legislation, reflecting broad, bipartisan concern over the technology's unchecked growth. More than 200 state lawmakers from both parties have urged Congress to scrap the moratorium, warning it could override a wide range of consumer protection laws. These include regulations aimed at safeguarding children's online safety, addressing harms from generative AI, and overseeing how governments adopt and deploy AI systems.
[7]
Trump's 'Big Beautiful Bill' could give AI firms more freedom than ever. What to know.
President Donald Trump's spending and tax package, referred to as the "One Big Beautiful Bill," got a big win on Sunday for one provision that could change the entire tech landscape when it comes to AI. The bill, which passed the House of Representatives in May, is currently in the U.S. Senate, where it's undergoing a review by the Senate parliamentarian to confirm whether its provisions comply with Senate rules. This process is known as a Byrd Bath, named after Sen. Robert Byrd of West Virginia. A provision in the bill putting a 10-year moratorium on any enforcement of state and local AI laws received the green light from the Senate parliamentarian on Saturday, according to a report from Bloomberg. This would give the AI industry a big advantage, as it wouldn't have to worry about conflicting state laws imposing different requirements for AI. States that attempt to enforce AI regulations would be denied federal funding for broadband internet projects under the bill. There are Senate Republicans who oppose this moratorium, including Sen. Marsha Blackburn from Tennessee and Sen. Josh Hawley from Missouri. "We do not need a moratorium that would prohibit our states from stepping up and protecting citizens in their state," Tennessee Republican Senator Marsha Blackburn said last week, according to The Tennessean. While this provision has cleared the Senate parliamentarian's review, others have not, Bloomberg reports. Senate Republicans are looking to pass the bill this week in hopes of having it ready for Trump's signature by July 4.
What's at stake with the AI law freeze?
AI is the most important trend happening right now within the tech industry across the globe. Grand View Research estimates the global AI market is worth nearly $400 billion and expects it to approach $2 trillion by 2030. OpenAI, the company behind ChatGPT, is itself worth nearly $300 billion as of March. Not having to worry about state regulations gives the AI industry a clear path to build AI in ways that benefit the bottom line. There's also a concern that regulations put on AI would hamper progress within the U.S., with some company executives, such as Nvidia CEO Jensen Huang and Arm CEO Rene Haas, warning that China could surpass the U.S. if the industry is too regulated. There is, however, some pushback against giving the AI industry free rein. Dr. Eric Horvitz, Microsoft's chief scientist and a former technology adviser to Joe Biden, says the ban on state AI laws will slow down the development of the technology, not speed it up, according to The Guardian. "It's up to us as scientists to communicate to government agencies, especially those right now who might be making statements about no regulation, [that] this is going to hold us back," Horvitz said at a meeting of the Association for the Advancement of Artificial Intelligence last Monday, The Guardian reported. "Guidance, regulation ... reliability controls are part of advancing the field, making the field go faster in many ways." Almost all of the 50 states have AI legislation pending, with a few already being signed into law by their governors. Last September, California was set to pass an AI safety bill that included many regulations, including requiring all AI companies to have a "kill switch" in case the technology went rogue or was misused.
California Governor Gavin Newsom vetoed that particular bill, but did sign other AI regulations last year, with more than 30 bills currently making their way through the state legislature.
[8]
Meta, Amazon, and more want 10-year ban on states regulating AI
Big Tech companies including Amazon, Meta, Google, and Microsoft -- but not Apple -- are lobbying to block US states from forming any AI regulation for the next decade. The lobbying is being done by trade body Incompas, on behalf of its members, which include major technology firms as well as energy and law firms. Apple is not a member, but it has previously lobbied against similar AI regulation plans in Europe. According to the Financial Times, none of the four Big Tech firms would comment on the lobbying. But Incompas CEO and former congressman Chip Pickering argues that preventing differing and conflicting AI regulation across the US is essential. "This is the right policy at the right time for American leadership," he said. "But it's equally important in the race against China." The lobbying follows Sam Altman's pitch to a Senate hearing in May 2025 that it would be "disastrous" if AI firms were regulated. Specifically, the OpenAI CEO said AI firms should not face safety regulations, nor transparency requirements over where they scrape their data from. While AI firms want to profit from other people's work without payment or credit, this particular lobbying is centered on avoiding overly complex regulation. "You don't want the number one country in the world for innovation to fall behind on AI," said Senator Thom Tillis (R). "If all of a sudden you've got 50 different regulatory or legal frameworks, how can anybody in their right mind not understand that that's going to be an impediment?" The lobbying has been successful enough to get this provision included in the so-called "One Big Beautiful" bill that was passed by the US House of Representatives. Marjorie Taylor Greene (R), who voted in favor of the act, though, has since admitted she hadn't read it. "I am adamantly opposed to this and it is a violation of state rights," she has now said in a statement, "and I would have voted no if I had known this was in there." "We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states [sic] hands is potentially dangerous," she continued. "This needs to be stripped out in the Senate." Greene voting on technology legislation she hasn't read is unfortunately not unusual. It's just the latest in years of examples of all parties exhibiting a horrifying lack of awareness of what they are passing laws on. Republicans are also reportedly trying to use Senate rules to call the act a "budget reconciliation" bill -- which would mean it can be passed without Democratic votes. Ted Cruz (R) has proposed that states which do not comply with this regulation could be punished. They might face the withdrawal of billions in federal funding for bringing broadband to rural areas. There is not, though, consensus amongst the Republican party. As well as Marjorie Taylor Greene's after-the-fact stance, Marsha Blackburn (R) opposes the moratorium because she supports Tennessee's law against unauthorized AI use in the music industry. Separately, the EU is slowly rolling out its Artificial Intelligence Pact, which consists of voluntary pledges to support safe and responsible development of AI. Meta, Google, and Microsoft have signed the pact, but Apple has not.
[9]
U.S. Senate should strike AI regulation ban from budget bill | Editorial
In the current national political climate, bipartisanship is extremely rare, especially when it comes to important topics such as states' rights and regulating the use of artificial intelligence. But it was exactly such legislation that brought U.S. Sens. Maria Cantwell, D-Wash., and Marsha Blackburn, a Republican from Tennessee, together. Just as time winds down for a final vote, Senate Republicans tucked inside the tax and budget bill President Donald Trump calls "big and beautiful" a provision that would ban states from enforcing any laws their state legislatures may have passed that regulate artificial intelligence. Washington and Tennessee are among 40 states, including other red states such as Arkansas and Louisiana, whose attorneys general have voiced opposition to the provision that seeks to undo a lot of what their offices have done. Tennessee's attorney general joined Washington AG Nick Brown in opposition to the ban on AI rules enforcement. "At the pace technology and AI moves, limiting state laws and regulations for 10 years is dangerous," Brown said. "If the federal government is taking a back seat on AI, they should not prohibit states from protecting our citizens." Washington has been in the forefront when it comes to protecting consumers, children and the public in general from bad actors in the AI sector. In 2023 the state Legislature passed a law that requires political campaigns to disclose when media content or ads that depict a candidate have been altered using AI. It created a law in 2024 to make it a criminal offense to create, possess or distribute fabricated intimate images of minors engaged in sexual acts. And this year, it passed a law that makes it illegal to purposefully distribute a "forged digital likeness" of another person's image or voice for the purpose of deception. "Companies shouldn't want us to wait until something really bad happens, until someone dies because of a lack of AI regulation," said state Rep. Cindy Ryu, D-Shoreline, who sponsored the digital likeness bill and serves on the AI task force for the National Conference of State Legislatures. To tighten the rules even more, the proposed provision, which was added to the reconciliation bill, would make access to funding from the Broadband Equity, Access, and Deployment program contingent on compliance with the 10-year moratorium. Washington was allocated $1.2 billion from the program to provide better access for the 214,000 Washington residents who have inadequate broadband. States have a duty to protect residents and their identities in an industry that is developing rapidly. That will take unity and increased pressure on the Senate Republicans.
[10]
Senate parliamentarian allows GOP to keep ban on state AI rules
The Senate parliamentarian concluded the controversial push to ban state regulation of artificial intelligence for the next 10 years can remain in President Trump's sweeping tax and spending bill. The decision, announced by lawmakers over the weekend, followed weeks of speculation from both parties over whether the provision would overcome the procedural hurdle known as the Byrd Rule. The parliamentarian's decision will allow the provision to be voted on in the budget reconciliation process with a simple-majority vote. It comes after Sen. Ted Cruz (R-Texas), the chair of the Senate Committee on Commerce, Science and Transportation, altered the language of the House's version in hopes of complying with the Byrd Rule, which prohibits "extraneous matters" from being included in reconciliation packages. Under his proposal, states would be prohibited from regulating AI if they want access to federal funding from the Broadband Equity, Access and Deployment (BEAD) program. The House's version called for a blanket 10-year moratorium on state laws regulating AI models and systems, regardless of funding. Still, some GOP members remained skeptical it would pass the Byrd Rule. Sen. John Cornyn (R-Texas) said last week it was "doubtful" the provision survives. The provision has further divided Republicans, while Democrats are largely against it. While many Republicans are concerned with overbearing regulation of the emerging tech, a few GOP members argue it goes against the party's traditional support of states' rights. Republican Sens. Marsha Blackburn (Tenn.) and Ron Johnson (Wis.) told The Hill they are against the provision, while Sen. Josh Hawley (R-Mo.) said he is willing to introduce an amendment to eliminate the provision during the Senate's marathon vote-a-rama if it is not taken out earlier. The provision received pushback from some Republicans in the House as well. A group of hard-line conservatives argued in a letter earlier this month to Senate Republicans that Congress is still "actively investigating" AI and "does not fully understand the implications" of the technology. This was shortly after Rep. Marjorie Taylor Greene (R-Ga.) confirmed she would be a "no" on the bill if it comes back to the House with the provision included. "I am 100 percent opposed, and I will not vote for any bill that destroys federalism and takes away states' rights, ability to regulate and make laws when it regards humans and AI," the Georgia Republican told reporters. Several Republican state leaders and lawmakers are also pushing back.
[11]
GOP squares off over AI ban
A push to ban state regulation of artificial intelligence for 10 years is setting off a debate among Republicans, further complicating its path towards passage in President Trump's "one, big, beautiful bill." The AI provision has divided Republicans into two camps: one touting the party's traditional support of states' rights, and another concerned with overbearing regulation. As the Senate works out its own changes to the larger tax and spending package, an increasing number of Republicans from both chambers are coming out against the AI provision, which calls for a 10-year moratorium on state laws regulating AI models and systems. Republicans opposed to the measure differ in their opinions of AI and how beneficial it could be, but share concerns with the federal government stifling the ability of states to set their own rules for it. Sen. Ron Johnson (R-Wis.), one of the most vocal GOP critics of Trump's broader bill, said Tuesday he is "not a real fan of the federal government" and is against the provision. "I personally don't think we should be setting a federal standard right now and prohibiting the states from doing what we should be doing in a federated republic. Let the states experiment," Johnson said. While Sen. Josh Hawley (R-Mo.) has expressed concerns about the economic impact of AI, he said he is willing to introduce an amendment to eliminate the provision during the Senate's marathon vote-a-rama if it is not taken out earlier. "I'm only for AI if it's good for the people," he told reporters, citing AI's potential disruptive impact on the job market. "I think we've got to come up with a way to put people first." Even some House Republicans who already voted to pass the bill in the lower chamber are speaking out against the provision. A group of hardline conservatives argued in a letter last week to Senate Republicans that Congress is still "actively investigating" AI and "does not fully understand the implications" of the technology. This was shortly after Rep. Marjorie Taylor Greene (R-Ga.) confirmed she would be a "no" on the bill if it comes back to the House with the provision included. "I am 100 percent opposed and I will not vote for any bill that destroys federalism and takes away states' rights, ability to regulate and make laws when it regards humans and AI," the Georgia Republican said. Sen. Rick Scott (R-Fla.) declined to say whether he would support the moratorium but noted he "likes states' rights." Sen. Ted Cruz (R-Texas), the chair of the Senate Committee on Commerce, Science and Transportation, rejected concerns the moratorium could encroach on states' rights, pointing to the Commerce Clause in the Constitution. "The Constitution," Cruz said, "gives Congress the authority to regulate commerce between the states and AI is quintessentially commerce between the states and having a patchwork of 50 different standards crippling the development of AI."
[12]
AI moratorium sparks GOP battle over states' rights
A push to ban state regulation of artificial intelligence (AI) for 10 years is setting off a debate among Republicans, further complicating its path toward passage in President Trump's tax and spending bill. The AI provision has divided Republicans into two camps: one touting the party's traditional support of states' rights and another concerned with overbearing regulation. As the Senate works out its changes to the larger tax and spending package, an increasing number of Republicans from both chambers are coming out against the AI provision, which calls for a 10-year moratorium on state laws regulating AI models and systems. Republicans opposed to the measure differ in their opinions of AI and how beneficial it could be, but they share concerns with the federal government stifling the ability of states to set their rules for it. Sen. Ron Johnson (R-Wis.), one of the most vocal GOP critics of Trump's broader bill, said Tuesday he is "not a real fan of the federal government" and is against the provision. "I personally don't think we should be setting a federal standard right now and prohibiting the states from doing what we should be doing in a federated republic. Let the states experiment," Johnson told The Hill. While Sen. Josh Hawley (R-Mo.) has expressed concerns about the economic impact of AI, he said he is willing to introduce an amendment to eliminate the provision during the Senate's marathon vote-a-rama if it is not taken out earlier. "I'm only for AI if it's good for the people," he told reporters, citing AI's potential disruptive impact on the job market. "I think we've got to come up with a way to put people first." Even some House Republicans who already voted to pass the bill in the lower chamber are speaking out against the provision. A group of hard-line conservatives argued in a letter last week to Senate Republicans that Congress is still "actively investigating" AI and "does not fully understand the implications" of the technology. This was shortly after Rep. Marjorie Taylor Greene (R-Ga.) confirmed she would be a "no" on the bill if it comes back to the House with the provision included. "I am 100 percent opposed, and I will not vote for any bill that destroys federalism and takes away states' rights, ability to regulate and make laws when it regards humans and AI," the Georgia Republican told reporters. Sen. Ted Cruz (R-Texas), the chair of the Senate Committee on Commerce, Science and Transportation, rejected concerns the moratorium could encroach on states' rights, pointing to the Commerce Clause in the Constitution. The clause grants the federal government broad power to set rules for commercial activities that inherently involve business among states. "The Constitution gives Congress the authority to regulate commerce between the states, and AI is quintessentially commerce between the states," Cruz said, adding that "having a patchwork of 50 different standards" would be devastating to the development of AI. The battle comes just more than a month after OpenAI CEO Sam Altman and other tech leaders appeared before Cruz's committee and voiced their opposition to state-by-state regulation of AI. Altman, whose company makes the popular ChatGPT AI chatbot, told Cruz a state-by-state approach to AI regulation would be "burdensome" and pushed for a "light touch" framework. While the House version proposed a blanket ban on all states from regulating AI and enforcing existing and future laws around it, the Senate is going for a watered-down approach. 
Cruz and the Senate Commerce Committee released a version of the bill text earlier this month, altering the language of the AI provision. Under their proposal, states would be prohibited from regulating AI if they want access to federal funding from the Broadband Equity, Access and Deployment (BEAD) program. Sen. Mike Rounds (R-S.D.), the former co-chair of the Senate AI Caucus, suggested tying the provision to BEAD funding might make it more likely to adhere to the Byrd rule, a procedural rule in the Senate prohibiting "extraneous matters" from being included in reconciliation packages. Republicans are using the budget reconciliation process to advance Trump's legislative agenda while averting the Senate filibuster. But the Byrd rule prevents them from including provisions that do not "change outlays or revenues." "It may be a more Byrd-bathable approach," Rounds told The Hill, referring to the process in which the Senate parliamentarian checks the bill for adherence to the Byrd rule. "I support getting the moratorium in place so that Congress has the opportunity." "Then, we've got the hard work of actually doing appropriate legislation to lay out the path forward," he added. Cruz was expected to consult with the Senate parliamentarian about the provision but did not say Tuesday whether he had done so already. "That process is still ongoing," he said. Some Republicans are still not convinced it will pass the Byrd rule. "Doubtful it [the provision] survives," Sen. John Cornyn (R-Texas) wrote on the social platform X on Monday when asked for his stance on the moratorium. And even if it does, it will still be a steep sell for some lawmakers. "I can also tell every single Republican in the House and the Senate; I don't care what you change it to. If you are destroying state rights, I'm out," Greene said. When reached for comment on the AI bill this week, the office of Sen. Marsha Blackburn (R-Tenn.) pointed The Hill to her comments during a hearing last month, when she voiced her opposition to the moratorium without a federal framework. "Tennessee passed the ELVIS Act, which is like our first generation of the No Fakes Act, and we certainly know that in Tennessee we need those protections," Blackburn said last month. "And until we pass something that is federally preemptive, we can't call for a moratorium on those things." Blackburn is one of the Senate's fiercest critics of "Big Tech" platforms, and her No Fakes Act would create federal protections for artists' voice, likeness and image from nonconsensual AI-generated deepfakes. Republican leaders can afford to lose only three GOP votes for the package, which is not expected to have any support from Democrats. Democratic Sen. Ed Markey (Mass.) has emerged as one of the chamber's most vocal critics of the moratorium. He is threatening to force a vote on an amendment against the provision if it is still part of the reconciliation package when it hits the Senate floor. "I'm glad that some Republicans are raising their voice," he told The Hill of the GOP critics. "But do they have enough political strength to have that provision be removed?"
[13]
A 10-year moratorium on AI regulation is madness
One provision of the "big, beautiful bill" that the Senate must reject is its 10-year moratorium prohibiting state-law regulation of artificial intelligence. Although the Commerce Committee amended the provision to condition federal broadband funding on compliance with the moratorium, the effect is the same disservice to the American people. Members of Congress would likely have to search a long time to find voters in their states who favor giving Big Tech a decade to "see where AI leads," while it would take no time to find voters who care about child pornography, online scams and threats to their economic interests -- all of which may be caused or accelerated by AI. In fact, several states, including Republican strongholds, have already passed legislation directly aimed at mitigating the harms facilitated by AI. Indiana, Kansas, Mississippi, Nebraska, North Carolina, Ohio, Tennessee and Texas have all passed or proposed legislation specifically prohibiting the use of AI for the creation and distribution of child sexual abuse material -- and that's just by my latest count. Will Washington lawmakers tell parents back home that a federal moratorium may preempt these laws because tech billionaires in California need more government protection than the children in their communities? Congress has already let American children down with its failure to rein in Big Tech and the myriad harms caused by social media, and it should decline to repeat this still-uncorrected error by continuing its laissez-faire approach to AI. From an economic perspective, Sen. Marsha Blackburn (R-Tenn.) was right to criticize the moratorium provision by noting that her state showed leadership in passing the ELVIS Act, which prohibits uses of AI for unlicensed replication of voice and likeness. This 2024 law, enacted by the music-powerhouse state, was a model for the federal NO FAKES Act, reintroduced in April by bipartisan leaders in both chambers. If passed, this legislation would create a new intellectual property right in every individual's voice and likeness and prohibit replication without permission. It is praiseworthy legislation, but the bill in no way obviates the right and duty of the states to prioritize their own prohibitions on abuses of AI. On the contrary, given the Republicans' appetite for gutting federal agencies like the Department of Education, individual states and communities will presumably have little federal support in addressing the growing problem of students targeting fellow students with AI-enabled child sexual abuse material. The group 404 Media cites a Stanford Cyber Policy Center report revealing that communities are scrambling to respond to the volume of AI apps that can be downloaded in seconds and used to ruin a child's (usually a girl's) life in a matter of hours. Even though further criminalizing production, distribution or possession of child sexual abuse material would not conflict with the proposed moratorium, why would Congress hamstring the states' options in confronting the technologies that foster this novel and destructive trend? The question is acutely relevant given Congress's unimpressive record in meaningfully confronting Big Tech, even after the "techlash" that began in 2017. Since then, Democratic and Republican leaders have held desk-pounding hearings blasting tech CEOs on camera for their negligence and utter disregard for public safety and legitimate trade. In January 2024, Sen. Lindsey Graham (R-S.C.) declared that it's "time to repeal Section 230," after telling Mark Zuckerberg, "you have blood on your hands." Sen. Sheldon Whitehouse (D-R.I.) said, "We're here because your platforms really suck at policing themselves." Yet, even while the senators regret the 1990s-era folly of allowing Big Tech "room to flourish," they are poised to repeat it with technology fraught with hazards and security risks absent meaningful guardrails. It is clear to many of us that the tech-bros in Silicon Valley read the same sci-fi stories, but instead of cautionary tales, they saw adventure. Congress should categorically refuse to wait and see whether that adventure takes the American people over a cliff.
[14]
Ban on State AI Regulations Can Remain in 'Big Beautiful Bill' | PYMNTS.com
As Bloomberg News reported Friday (June 20), the Senate parliamentarian has ruled the provision is in line with the special budgetary process Republicans are using to consider the legislation, marking a win for tech companies hoping to curb AI rules. The Senate version of the AI moratorium, the report noted, would keep federal broadband funding from states if they enforce AI regulations. According to Bloomberg, the provision will likely face a challenge on the Senate floor, with both Democrats and some Republicans pushing back on the proposal. The vote on the bill -- known as the "One Big Beautiful Bill" -- is expected to come by July 4. "We do not need a moratorium that would prohibit our states from stepping up and protecting citizens in their state," Sen. Marsha Blackburn, R-Tenn., said earlier this month. As covered here last week, the ban on state AI regulations has the support of some of the country's biggest tech companies. "This is the right policy at the right time for American leadership," Chip Pickering, a lobbyist for a trade association representing firms like Meta and Google, told the Financial Times. "But it's equally important in the race against China." And Steve Schmidt, chief security officer for Amazon and AWS, told Bloomberg News last week that government involvement in AI could limit his company's work in the sector. "The tension with regulation of any kind is that it tends to retard progress," Schmidt said. "So the way we tend to focus on standards is to let the industry figure out what the right standards are, and that will be driven by our customers." But critics say this effort is Big Tech's way of holding onto its dominant place in the race for artificial general intelligence (AGI), or AI models that meet or exceed human abilities. Last month, a group of 140 organizations wrote to the leaders of both parties in the House of Representatives, asking them to reject the 10-year ban. "This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm -- regardless of how intentional or egregious the misconduct or how devastating the consequences -- the company making that bad tech would be unaccountable to lawmakers and the public," the letter read.
[15]
Tech Giants Seek 10-Year Freeze on State AI Rules | PYMNTS.com
As that report notes, sources say that lobbyists for the tech companies have been pushing the Senate to impose a decade-long ban on AI regulations by individual states. The provision was included in the House's version of President Donald Trump's "one big, beautiful" budget bill, which passed last month. The Senate could unveil its version of the bill this week, with the goal of passing it by July 4. Chip Pickering, a former congressman and the CEO of INCOMPAS, has lobbied for the proposal on behalf of his tech trade association's members, which include Microsoft, Amazon, Meta and Google, along with smaller data, energy and infrastructure companies and law firms. "This is the right policy at the right time for American leadership," Pickering told the Financial Times. "But it's equally important in the race against China." Steve Schmidt, chief security officer for Amazon and AWS, told Bloomberg News last week that government involvement in AI could limit the scale of the company's work in that field. "The tension with regulation of any kind is that it tends to retard progress," Schmidt said. "So the way we tend to focus on standards is to let the industry figure out what the right standards are, and that will be driven by our customers." However, critics say this effort is Big Tech's way of maintaining dominance in the drive to develop artificial general intelligence (AGI), or AI models that meet or exceed human abilities. "[It's] a power grab by tech bro-ligarchs attempting to concentrate yet more wealth and power," said Max Tegmark, an MIT professor and president of the Future of Life Institute, a non-profit that advocates for AI regulation. And last month, a group of 140 organizations sent a letter to House leadership, asking them to reject the 10-year ban. "This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm -- regardless of how intentional or egregious the misconduct or how devastating the consequences -- the company making that bad tech would be unaccountable to lawmakers and the public," the letter stated. The FT report notes that the moratorium has divided Republicans, whose party leader reversed his predecessor's executive order on AI. "We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous," Rep. Marjorie Taylor Greene (R-GA) posted on X. "This needs to be stripped out in the Senate."
A Republican proposal to impose a 10-year moratorium on state AI regulations has cleared a key Senate hurdle but faces opposition from both parties and raises concerns about consumer protection and innovation.
A controversial Republican proposal to impose a 10-year moratorium on state-level artificial intelligence (AI) regulations has cleared a key procedural hurdle in the U.S. Senate. The provision, part of the "One Big Beautiful Bill," aims to prevent states from enforcing their own AI regulations by withholding federal broadband funding [1][2].
Senator Ted Cruz (R-Texas), who initially proposed the moratorium, has reportedly modified the provision to comply with Senate procedural rules. The current version would only restrict states from accessing a $500 million AI fund, rather than the entire $42 billion Broadband Equity, Access, and Deployment (BEAD) program [1].
The proposal has faced opposition from both Democrats and Republicans. Senators Maria Cantwell (D-Wash.) and Marsha Blackburn (R-Tenn.) have spoken out against the moratorium, arguing that it would leave Americans vulnerable to AI-related harm and infringe on states' rights to protect their citizens [1][3].
Other Republican senators, including Ron Johnson (R-Wis.) and Josh Hawley (R-Mo.), have also criticized the idea of preventing states from regulating AI [1]. Representative Marjorie Taylor Greene has called for the provision to be "stripped out in the Senate," describing it as a violation of states' rights [2].
Major tech companies, including Amazon, Google, Microsoft, and Meta, are reportedly backing a lobbying campaign to support the moratorium [5]. Proponents argue that the ban is necessary to prevent a patchwork of inconsistent regional rules that could hinder innovation and put the U.S. at a disadvantage in the global AI race [5].
However, the tech industry is not united on this issue. Some AI companies and experts have raised concerns about the potential consequences of relying solely on self-regulation [5]. Critics argue that the moratorium could lead to fewer consumer protections and concentrate power in the hands of large tech companies [3][5].
Currently, several states have enacted or proposed AI-related regulations. For example, California has adopted more than a dozen AI-related laws, while Colorado has passed consumer protections set to take effect in 2026 [3]. The proposed moratorium could potentially invalidate these state-level regulations and deter future legislative efforts [1][3].
Supporters of the moratorium, including some Republican lawmakers, argue that a unified federal approach is necessary to maintain U.S. leadership in AI innovation and compete with countries like China [3][5]. They contend that allowing individual states to regulate AI could create a complex and potentially conflicting regulatory environment [2][5].
Opponents, however, emphasize the importance of multi-level governance in addressing AI-related challenges. They argue that both federal and state-level approaches are needed to ensure comprehensive consumer protection and responsible AI development [3].
The Senate parliamentarian's ruling that the AI provision aligns with the special budgetary process being used for the tax package marks a significant development [4]. This decision allows the proposal to move forward without requiring a 60-vote majority, potentially easing its path to passage [2][4].
However, the provision may still face challenges on the Senate floor, where it could be stripped with a simple majority vote [1]. As debates continue, lawmakers, tech companies, and advocacy groups remain divided on the best approach to regulating AI while fostering innovation and protecting consumers.